
Elon Musk-owned X says its artificial intelligence tool, Grok, has been restricted from generating or editing images of real people in bikinis, underwear, or similar revealing clothing in countries where such content is illegal.
The decision follows mounting criticism over the spread of sexualised AI deepfakes and increased regulatory pressure in the United Kingdom and the United States.
The change was announced by X, formerly Twitter, just hours after California authorities disclosed a probe into the circulation of sexualised AI-generated images, including those involving children.
What X said
In a statement posted on X, the company said it has introduced technical restrictions and location-based controls to stop Grok from being used to create sexualised images of real people.
“We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing.”
“We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it’s illegal.”
The company also reiterated that only paid users will have access to image editing features on Grok.
According to X, the restriction is meant to improve accountability, adding an extra layer of protection by helping to ensure that those who try to abuse Grok to violate the law or X’s policies are held accountable.
UK government welcomes the move
The announcement drew immediate reactions from UK authorities, where the issue has triggered political and regulatory scrutiny.
Reacting to the development, the UK government said it felt a sense of “vindication” after Prime Minister Sir Keir Starmer publicly urged X to rein in its AI tools.
A spokesperson for the UK media regulator Ofcom described the decision as a welcome development but stressed that regulatory scrutiny was not over.
“We are working around the clock to progress this and get answers on what went wrong and what’s being done to fix it,” the spokesperson said, adding that the investigation into whether X breached UK laws remains ongoing.
Musk defends content rules and free speech
Musk, who owns X, had earlier defended the platform against the criticism, saying that with NSFW settings enabled, Grok permits limited content involving fictional characters only.
“With NSFW (not safe for work) settings enabled, Grok is supposed to allow ‘upper body nudity of imaginary adult humans (not real ones)’ consistent with what can be seen in R-rated films,” Musk wrote.
“That is the de facto standard in America. This will vary in other regions according to the laws on a country by country basis,” he added.
However, Musk also drew backlash after posting that critics “just want to suppress free speech” alongside AI-generated images of UK Prime Minister Sir Keir Starmer in a bikini.
Possible sanctions
Ofcom confirmed earlier this week that it is investigating whether X failed to comply with UK law over the distribution of sexual images. If the platform is found to be in breach and fails to comply, the regulator could seek a court order compelling internet service providers to block access to X in the UK.
Prime Minister Sir Keir Starmer had warned that X could lose the “right to self-regulate” over the controversy, though he later said he welcomed reports that the platform was taking steps to address the issue. He added that he would “take the necessary measures” and strengthen legislation if X failed to act.
What you should know
Concerns over Grok image generation erupted late in 2025 when users on X began using Grok to produce sexually explicit or revealing images of real people without their consent.
- Investigations by independent analysts found that the AI was complying with prompts to generate imagery that could include undressed figures or suggestive poses, often of women and in some cases appearing to involve minors.
- This raised alarms about non-consensual intimate image abuse and potential child sexual abuse material.
- The UK government and regulator Ofcom responded quickly, opening a formal investigation under the Online Safety Act to determine whether X failed to prevent illegal content. X later restricted Grok from creating sexualised images of real people, but Ofcom said its probe will continue to determine if the platform broke UK law.
- In the United States, California’s attorney general opened a state investigation into whether Grok’s deepfakes, including those involving children, violated local law.
Several countries moved beyond investigation to action. Malaysia and Indonesia both temporarily blocked access to Grok over concerns about sexually explicit AI content and its potential to harm citizens, especially minors.