It's not just Grok we should worry about; it's the men using it

Hundreds of X users have prompted Grok to create non-consensual, sexualised images of women. And thanks to cowardly politicians and regulators, they're getting away with it.
Collage: Ben Neale / Getty

This article references image-based abuse.

Over the past week or so, I've been lost for words. Seven months after I first reported on Grok, X's AI chatbot, creating sexualised images of women without their consent, it's happening again, this time en masse.

That's right, Grok has created hundreds of deepfaked or ‘nudified’ images of women without their consent. Minors have also been targeted. Ofcom (the UK's communications regulator) has demanded answers from Elon Musk, who bought X (then Twitter) in 2022, but many people are urging more decisive action.

In a statement released today, Technology Secretary Liz Kendall urged Ofcom to “use the full legal powers Parliament has given them”, adding: “I, and more importantly the public, would expect to see Ofcom update on next steps in days not weeks.”

Yesterday, Prime Minister Sir Keir Starmer described the images created by Grok as “unlawful”. He added, “This is disgraceful, it’s disgusting, and it’s not to be tolerated. X has got to get a grip of this. We will take action on this because it’s simply not tolerable.”

Since then, X has limited access to Grok's image-editing tools, meaning only paid subscribers with their name and payment information on file can use them. Emma Pickering, Head of Technology-Facilitated Abuse and Economic Empowerment at Refuge, described this measure as “inadequate”, adding that it “represents the monetisation of abuse.”

While I'm relieved the government has spoken up about Grok, we've got to remember that the actual chatbot isn't the problem – well, not the whole problem. It's the people who see an opportunity to humiliate women and grab it with both hands; those who relish putting a woman back in her place, and who monetise our distress. And judging from Grok's mentions, the vast majority of offenders are men.

Yesterday morning, Glamour contributor Jess Davies went on Good Morning Britain to discuss her powerful investigation into the Grok nudification scandal. When Jess condemned the chatbot on X, a nameless user tweeted, “@grok put her in a bikini made of cling film.” Minutes later, Grok had created such an image.

Speaking about her experience, Jess said, “When we use the word ‘Grok’, we have to remember there are real people behind these prompts. We know the chatbot is creating these images, but it's real people behind the prompts.”

She writes for Glamour, “This was done by a stranger. Someone who didn’t follow my account, who likely hadn’t known I existed moments earlier. My post had simply crossed his timeline, and his first instinct was to strip me as naked as the public chatbot would allow.”

Suffice to say, misogynists are getting way too comfortable on the internet. X has clearly become a safe space for them to connect with one another, bonding through their mutual disdain for women, and lapping up the lack of meaningful consequences. And it's not just Andrew Tate and his spawn of toxic masculinity influencers egging them on; it's our cowardly politicians.


In her statement released today (see above), Technology Secretary Liz Kendall called for Ofcom to act in “days not weeks”. But the government itself has repeatedly missed opportunities to stop AI technology spiralling out of control.

In January 2025, the government announced legislation making it illegal to create non-consensual sexualised deepfakes. This would mean that those prompting Grok to create such images could potentially face criminal sanctions. But guess what? The legislation hasn't come into force yet – a whole year later.

In May 2025, the government announced it had rejected the majority of the Women and Equalities Committee's recommendations on tackling non-consensual intimate image abuse (NCII), including making it a crime to possess non-consensual intimate images and creating a fast-track civil process to order the images to be taken down.

In June 2025, I reported on Grok creating AI-generated, sexualised images of women without their consent. One survivor said, “It's bad enough having someone create these images of you. But having them posted publicly by a bot that was built into the app and knowing I can't do anything about it made me feel so helpless.”

In August 2025, reports emerged of people using Grok's ‘spicy mode’ to generate sexualised images of Taylor Swift.

The government is acting like the Grok scandal came from nowhere, but its own tepid approach to addressing image-based abuse is a huge part of what caused it. Too often, politicians and regulators speak up after the damage is done rather than taking proactive measures to prevent it in the first place.

Emma Barrow, Senior Solicitor in the Abuse Claims team at Bolt Burdon Kemp says: “Right now, enforcement focuses on platforms removing harmful content once it’s reported. Instead, AI developers and hosting platforms should face clear legal duties to prevent the creation and distribution of non-consensual deepfakes in the first place. That means mandatory safeguards, audit requirements, and meaningful penalties when tools are knowingly left open to abuse.”

Survivors, experts, and campaigners have long been calling for more robust legislation around intimate image-based abuse. In 2024, Glamour partnered with the End Violence Against Women and Girls Coalition (EVAW), Not Your Porn, Jodie Campaigns, and Professor Clare McGlynn to demand a dedicated Image-Based Abuse Law that would, as a starting point, address the following loopholes in the law:

  1. Strengthen criminal laws about creating, taking and sharing intimate images without consent (including sexually explicit deepfakes)
  2. Improve civil laws for survivors to take action against perpetrators and tech companies
  3. Prevent image-based abuse through comprehensive relationships, sex and health education
  4. Fund specialist services that provide support to victims and survivors of image-based abuse
  5. Create an Online Abuse Commission to hold tech companies accountable for image-based abuse


The Grok scandal shows just how urgently we need a dedicated, comprehensive Image-Based Abuse Law, informed by survivors, experts and campaigners. As Professor Clare McGlynn, an expert in intimate image abuse and one of Glamour's campaign partners, says, “There are straightforward changes available right now to tackle this abuse. Existing laws can be clarified to explicitly cover chatbots, and the ban on creating intimate images without consent must be brought into force and treated as a priority offence under the Online Safety Act.”

She continues, “Platforms should also be legally required to remove intimate images within 48 hours, with serious fines for non-compliance, and existing guidance on tackling violence against women and girls must be made mandatory, not optional.”

We – and when I say ‘we’, I'm looking especially closely at the government and Ofcom – should be doing everything we can to make the internet a safe place for women, not misogynists. Survivors, experts, and campaigners have been shouting about this for years; it's about time they were heard. For too long, successive governments have looked the other way when confronted with image-based abuse, for fear of offending so-called ‘tech bros’. That ends now.


A post on X's Safety Account reads, “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.”

“Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

Glamour has reached out to X for further comment.

Revenge Porn Helpline provides advice, guidance and support to victims of intimate image-based abuse over the age of 18 who live in the UK. You can call them on 0345 6000 459.

For more from Glamour UK's Lucy Morgan, follow her on Instagram @lucyalexxandra or on TikTok at @lucyalexxandra.
