Elon Musk’s artificial intelligence company, xAI, is facing mounting criticism following reports that its chatbot, Grok, has been widely used to generate non-consensual sexualised images through a practice commonly known as “digital undressing.”
Users have reportedly prompted the AI tool to manipulate photographs, mostly of women, by removing clothing or placing individuals in sexually suggestive poses without consent. In several cases highlighted by researchers and online safety groups, generated images appeared to depict individuals under the age of 18, raising serious concerns that the tool may be producing material that violates child-protection and sexual-exploitation laws.
The controversy has intensified long-standing fears about the misuse of generative AI, particularly when such tools are closely integrated with social media platforms. Critics warn that weak safeguards allow harmful content to spread rapidly, placing real people—especially women and children—at risk of harassment, reputational damage and psychological harm.
xAI and Musk have said they are taking steps to combat illegal content on X, including removing offending material, suspending accounts and cooperating with law enforcement agencies. Despite these assurances, watchdogs and researchers say Grok has continued to respond to prompts in ways that produce sexualised imagery, raising questions about the effectiveness of its safety controls.
Unlike rival AI models such as Google’s Gemini or OpenAI’s ChatGPT, Grok is embedded directly into X, allowing users to tag the chatbot in public posts and receive replies visible to anyone. Analysts say this design choice has contributed to the rapid spread of manipulated images, as content can be generated and shared instantly within public conversations.
Research indicates that the trend began in late December with relatively mild prompts, such as requests to place people in swimwear, before escalating into more explicit and non-consensual scenarios. Studies found that a significant proportion of Grok-generated images of people showed them in minimal clothing, with women overwhelmingly represented. A smaller but deeply troubling fraction appeared to depict minors.
Although xAI’s acceptable use policy explicitly bans sexualised content involving real people or children, enforcement has been described as inconsistent. Grok has since acknowledged failures in its safeguards, stating that such content is illegal and prohibited, while urging users to report violations. Musk has also warned that anyone using the tool to generate illegal material would face consequences.
However, critics argue that Musk’s long-standing opposition to strict content moderation, and to what he labels “excessive censorship”, has contributed to weaker guardrails. Reports suggest internal resistance to tighter restrictions on Grok’s image-generation features, even as the company’s safety team reportedly shrank.
The issue has attracted the attention of regulators across multiple regions. Authorities in parts of Europe, India and Malaysia have opened inquiries, while the UK’s media regulator has confirmed urgent engagement with Musk’s companies over concerns relating to sexually explicit and child-related content.
Experts say the technical means to prevent such misuse already exist, chiefly stronger filtering of prompts and outputs before anything is returned to the user, though deploying them means accepting slower response times. Without those measures, they warn, AI platforms risk enabling serious and lasting harm while falling foul of national and international laws.
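The article does not describe any specific implementation, but the pattern experts refer to is a pre-generation moderation gate: every request is screened before the image model runs, and refused requests never reach generation. The sketch below is purely illustrative; the function names, the keyword heuristic and the blocked-term list are hypothetical stand-ins for a trained safety classifier, which is where the extra latency the experts mention would come from.

```python
# Illustrative sketch of a pre-generation moderation gate, assuming a
# keyword heuristic in place of a real trained safety classifier.
from dataclasses import dataclass


@dataclass
class ModerationResult:
    allowed: bool
    reason: str


# Hypothetical examples; a production system would use a trained model,
# not a static term list.
BLOCKED_TERMS = {"undress", "remove clothing", "nude"}


def moderate_prompt(prompt: str) -> ModerationResult:
    """Screen a prompt BEFORE any image generation runs."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return ModerationResult(False, f"blocked term: {term!r}")
    return ModerationResult(True, "ok")


def generate_image(prompt: str) -> str:
    # Placeholder standing in for a real image-generation call.
    return f"<image for {prompt!r}>"


def handle_request(prompt: str) -> str:
    # The gate runs before, not after, generation: refused prompts
    # never produce an image that could leak or be shared.
    result = moderate_prompt(prompt)
    if not result.allowed:
        return f"Request refused ({result.reason})"
    return generate_image(prompt)


if __name__ == "__main__":
    print(handle_request("a cat in a garden"))       # allowed
    print(handle_request("undress this person"))     # refused before generation
```

The key design choice, and the source of the latency trade-off, is that the check sits in the request path rather than being applied to content after it has already been generated and posted.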
