Grok AI Controversy: AI may be getting more convenient, but it is proving just as risky. A troubling trend has emerged on the social media platform X, where some users are tagging Grok AI in photos of ordinary women and issuing disturbing commands to alter or remove their clothing. Shockingly, the AI from Elon Musk's company is reportedly executing these instructions without restriction, putting women's privacy and dignity in serious jeopardy.
Grok AI Controversy: Fewer Safeguards, Greater Risk
When Elon Musk introduced Grok AI, he positioned it as a less restricted alternative to platforms like ChatGPT and Gemini, arguing that fewer limitations would make the AI more authentic and entertaining. That openness, however, has become a serious concern. Users are reportedly exploiting Grok's flexibility to manipulate images of women, openly tagging the AI with prompts such as "remove her clothes," which the system is allegedly carrying out.
Grok AI Viral Trend On X: Obscene images appearing in public feeds
The most troubling part of this issue is that the images generated by Grok are not kept private. Unlike AI tools such as ChatGPT or Gemini, where outputs remain visible only to the user, Grok is directly integrated with the X platform. As a result, every image it creates is published to Grok's public media feed, leaving thousands of explicit and deepfake images openly accessible to anyone on X.
When Grok AI was asked directly about the controversy, its response only added to the concern. The system acknowledged that users were making what it described as "funny and mischievous" requests, but rather than treating the matter seriously, it framed it as a demonstration of its image-editing capabilities. Although Grok also mentioned the importance of maintaining decorum, in practice such restraint appears largely absent on X.
Grok has been involved in controversies before
This is not the first controversy surrounding Grok AI. Earlier reports claimed that employees working on Grok were compelled to view disturbing child sexual abuse material (CSAM) while training the system. Additionally, Grok's "Companion Mode" has drawn widespread criticism for its offensive design and conduct.