I have no words other than "what is the world actually coming to?" Horrifyingly, we are entering 2026 with a frankly disgusting scandal involving Elon Musk's apparently unregulated X AI tool, Grok, which has generated a slew of non-consensual, fake sexualised images of real female X users – including some minors.
Here is a breakdown of the scandal and everything you need to know.
How the Grok ‘bikini trend’ exposed the men weaponising AI technology to silence and scare women.

What is Grok?
Elon Musk created Grok as an AI tool for his platform X, formerly known as Twitter. The chatbot allows users to create images using prompts posted on X. The tool is billed as an AI assistant to help users answer questions quickly, solve problems and brainstorm ideas. Of course, that is not all it has been used for, and, it would seem, there are few limitations on what Grok will do once it is asked.
A timeline of Elon Musk's Grok AI X scandal
The latest scandal follows an earlier one that saw Grok create deepfakes depicting people like Taylor Swift in non-consensual sexual acts.
Meta said in response: “This content violates our policies, and we’re removing it from our platforms and taking action against accounts that posted it. We’re continuing to monitor, and if we identify any additional violating content, we’ll remove it and take appropriate action.”
Last year, we spoke to Evie, a photographer, who was a victim of image-based abuse on X after Grok generated a sexualised image of her. This was before the “put her in a bikini” trend, but Grok was already being used to generate sexualised deepfakes.
A new trend sees users – almost always men – sharing a photo of a woman and tagging @Grok to ask it for a rating.
A Glamour investigation uncovers “semen images” on TikTok, many of which were generated using Grok. At least 50 women, as well as at least two minors, were found to have been victims.
In early January, a slew of AI-generated images began appearing on X. It started with a "trend" in which men asked Grok to "put her in a bikini" below women's posts and photos on X. Soon, the trend escalated, and the AI prompts became more and more explicit and vile: instructions to put her in a cling-film bikini or a bikini made of dental floss. Some prompts even asked the bot to put women in sexualised positions, revealing their genitalia.
The fact that so many male users on the app want to do this is horrifying. Equally horrifying is the fact that Grok complied with practically all of these requests, generating fake sexual images with real women's faces.
Elon Musk and his team have continued to point to the following statement as their response:
“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary. Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content. For more information on our policies, please refer to our help pages for our full X Rules and range of enforcement options.”
The 'trend' has seemingly spread and is now being used to target young girls. The Internet Watch Foundation (IWF) says it has found "sexualised and topless imagery of girls" on a "dark web forum", with users citing Grok as their tool.
The BBC reports that Ofcom is investigating Elon Musk's X in response to "deeply concerning reports" of Grok creating sexualised deepfakes of female users, including "sexualised images of children".
Liz Kendall, the Technology Minister, announces that it will become a criminal offence to create AI-generated sexualised images. The law, first announced in January 2025, will come into force this week. In the House of Commons, the Minister said, "No woman or child should have to live in fear of having their image sexually manipulated by deepfake technology", adding: "It is illegal."
The Minister also confirmed that the creation of these images would be made a priority offence under the Online Safety Act, meaning tech companies must proactively act to prevent it on their platforms, in line with offences such as child abuse, terrorism, or selling illegal drugs.
The government will also bring forward the banning of nudification apps, which was announced last year as part of the government's Violence Against Women and Girls strategy.
The news is welcomed by campaigners and charities; however, questions remain over why it took so long – and why so many women had to be affected – before the government acted.
Responding to the government’s announcements on deepfake intimate image abuse, Emma Pickering, Head of Technology-Facilitated Abuse and Economic Empowerment at Refuge, said:
"While the Government’s announcements this week are a welcome step forwards, to achieve its goal of halving violence against women and girls (VAWG) by 2034, it must go further to tackle the growing threat of tech abuse.
"Ofcom’s guidance on online VAWG is currently voluntary – but this is not enough. Refuge is calling on the government to upgrade it to a mandatory code of practice, ensuring tech companies are required to comply or face enforcement action. If online harms are not treated with the urgency they demand, women and girls will continue to pay the price.”
Hundreds of X users have prompted Grok to create illegal, sexualised images of women. And thanks to cowardly politicians and regulators, they're getting away with it.

“X has now banned Grok from generating sexualised images of women and children after the UK made it illegal to create ‘non-consensual intimate’ images. However, it is still responding to requests to put men in bikinis or sexual positions,” reports Politics UK.
A number of advocacy groups, including UltraViolet, the National Organisation for Women, the liberal group MoveOn, and the parent advocacy group ParentsTogether Action, ask Apple to remove X and Grok from its App Store in several open letters, per Reuters.
Is it illegal to share and create deepfake images?
Sharing sexualised deepfake images of someone without their consent is already illegal in the UK; the law criminalising asking AI to create such images has been announced but has yet to come into force.
“The non-consensual sharing of a deepfake intimate image is illegal under section 66B of the Sexual Offences Act”, Imogen Sadler, a UK barrister, previously explained to Glamour. “An intimate image includes any image showing exposed genitals, buttocks or breasts – including where those parts are visible through underwear or clothing equivalent to underwear.”
“If an image is shared without a reasonable belief that the person consented, that alone can amount to a criminal offence,” she added.
The UK government moved last year to criminalise the creation and solicitation of explicit deepfakes – following campaigning by Glamour, Jodie Campaigns, Professor Clare McGlynn, Baroness Charlotte Owen and Not Your Porn – however, the law has yet to be brought into force.
“If this legislation had been in force, there would be a clear framework criminalising both the requesting and the creation of intimate images," added Sadler. "Without it, we are left relying only on sharing offences.”
What consequences might Elon Musk and abusers of Grok face?
Following Ofcom's investigation into X and Grok, the company may face a fine of up to 10% of its worldwide revenue or £18 million, whichever is greater. If X refuses to comply, Ofcom can seek a court order that would force internet service providers to block access to X in the UK entirely.
Refuge's Emma Pickering added: "This will require sustained leadership and robust accountability across government, alongside urgent action to fix the chronic underfunding of specialist services. Without this, women and girls will continue to pay the price."

