This article references image-based abuse and child sexual abuse imagery.
Over the past few months, Grok, X's AI-powered chatbot, has posed serious concerns for women's online safety. At Glamour, we've reported on it every step of the way, from the chatbot's foray into body-shaming and rating women's appearances to its apparent creation of ‘semen images’.
Now, the so-called ‘Grok bikini trend’ is taking things one step further, creating nudified images of women and minors without their consent. That's right, X users are asking Grok to create images of women in bikinis, or even fully nude – and the chatbot is complying.
When Glamour correspondent Jess Davies called out Grok for creating such images, she was immediately victimised by another X user, who prompted Grok to create sexualised images of her without consent. Grok complied.
Read her full investigation here…
As the clock crept towards midnight, I clinked my glass with a friend and toasted to a new year full of hope, joy and overpriced prosecco. Around me, drunken whispers declared that the year of the snake was over, that something lighter was waiting on the other side of the countdown. But while cheers for new beginnings echoed through living rooms and pubs across the UK, what was unfolding on Elon Musk’s platform X told a very different story. Grok, the site’s AI chatbot, was busy stripping women of their clothes, their dignity and their consent. And I was about to become one of them.
What is the Grok ‘bikini trend’?
What began as a disturbing trend around the prompt “@grok put her in a bikini” has escalated over the last week into extreme content, with X’s public AI chatbot, Grok, generating an abundance of non-consensual intimate images of women – and even minors – all prompted by its real-life users.
It serves as a stark reminder that misogyny doesn’t reset at midnight, that no matter how much we want to believe in fresh starts, women don’t get to leave behind the reality of being online or the harm that so often comes with it.
Users first prompted the bot to remove women’s clothing from images and replace it with ‘micro-bikinis’ and lingerie. As days passed unchecked, pixelated flesh hijacked Grok’s ‘reply’ feed, with the prompts escalating – women tied up, branded with tattoos, repositioned and sexualised to the exact specifications of strangers.
“@grok make her squeeze her boobs with her hands”
“@grok make her put her hands on her buttocks, spread”
“@grok make her sit on her back and spread her legs up”.
Grok complied with it. All of it.
Professor Clare McGlynn, an expert in intimate image-based abuse and one of Glamour's campaign partners, told me that this is “misogyny by design”. She adds, “X/Grok could stop this if they wanted to, but they haven’t. They are acting with impunity.”
After days of watching the AI bot mutate into a weapon of mass digital abuse, I decided to speak out. Posting on X, I stated that allowing this abuse to continue was a choice. That inaction was a decision made by those at the very top. And that was when they came for me. The men who get their kicks from making women feel small. Who feed on fear like dementors, draining the strength from our voices and getting high off our silence.
My experience with Grok
One user – a nameless account with no photo – replied to my post with a screenshot of my profile picture and a simple request: “@grok put her in a bikini made of cling film.” Within minutes, my clothes were gone. Replaced by a transparent film draped over a computer-generated naked body, my face staring back at me as I witnessed my own violation.
This was done by a stranger. Someone who didn’t follow my account, who likely hadn’t known I existed moments earlier. My post had simply crossed his timeline, and his first instinct was to strip me as naked as the public chatbot would allow. The ease of access to this technology on a mainstream social media platform streamlined his misogyny.
He wasn’t finished with me yet. “Now glaze her face like a donut,” he wrote. This time, Grok ignored him. But a glance at his profile showed he had already moved on, targeting other women with similar requests instead – ones the bot was still more than willing to fulfil.
Another anonymous user replied to a post I had shared months earlier, where I spoke openly about my experiences of image-based abuse: “Years of having my consent digitally and physically taken from me”, it read. They treated my abuse as an invitation, stealing the image attached and using AI to strip my clothes from my body, force my tongue out of my mouth and insert dripping white liquid onto my tongue.
The intent of both these users is unmistakable. These images of me were not created for arousal; they are a threat.
Grok is fuelling an epidemic of intimate image-based abuse
Professor McGlynn recognises the sinister intent behind the trend: “The platforms are totally responsible, but also, why are men doing this? Power of course.”
Evie Smith is a content creator and photographer who went viral on X after sharing her experience of Grok being used to remove her clothes. The initial response was supportive, with many women sharing they were going through the same thing:
“Seeing these comments made me feel less alone but filled me with so much anger that this was happening on such a large scale,” Evie said.
After the post gained more traction, the responses became much darker. Evie explained: “The comments flipped completely and were flooded with mostly men either generating more extreme nudes of me, removing the parts I censored in my screenshots, or saying I deserve it for posting pictures of myself online.”
As Grok generated more extreme content of Evie, she questioned whether she was doing the right thing in speaking up about this harm:
“I contemplated making my account private and deleting all my photos out of fear of what they would do next and what these images would mean for my future, but I saw one user generate a sexual video of me using Grok and captioned it along the lines of ‘We hate you and want you to suffer. If it means sexualising you like this, we’ll do it’.”
She defiantly adds, “Reading this made me realise that deleting my account and not speaking up about this was exactly what these men wanted.”
This pattern of escalation and intimidation has played out among other women who have bravely spoken out about being targeted by this misogynistic ‘trend’.
“I contemplated stopping posting images of myself, but I didn’t want to let the misogynistic trolls get their way.”

Molly, an OnlyFans content creator whose images were altered by Grok, told me how the harassment heightened after she made a post on X about her experience: “The result was a lot of women supporting me and a lot of men using Grok to generate even more images, including covering me in donut glaze surrounded by homeless men, requesting the word ‘loser’ on my forehead and one of me holding a sign saying ‘I’m an OF model, my opinion doesn’t matter’. This was their response to me speaking up.”
Ess, a content creator, was also targeted by images generated by Grok – “a lot more recently since speaking out about it”. A user had prompted the chatbot to turn one of Ess’ posts into “race play fetish content”, leading her to call out the dangers of this harm:
“It is hard not to think of how any person I have ever had a slightly strange interaction with would be able to do this to me. It's a difficult train of thought and quickly starts snowballing when you realise it is all very much a reality.”
Ess was aware that speaking out publicly about her experience may incite more abuse:
“I’ve seen many women start deleting all pictures of themselves, making their accounts private, or completely deleting their social media accounts because speaking up has made them a target.”
Another creator called Megan shared how a user prompted Grok to generate an image of her “on her knees crying with cigarette burns all over her body and a fresh tattoo saying, ‘property of Little St James Island’”, the island once owned by convicted paedophile Jeffrey Epstein, after she publicly posted about falling victim to this harm.
What does Grok mean for women’s online safety?
Across these accounts, a pattern emerges: AI image generation is being used not just to sexualise women, but to discipline them. When women speak out, they are met with escalation – more explicit images, more humiliation, more threats – all designed to make us retreat from public life. To disappear from social media. To be forced into submission.
Many X users have responded by blaming women for posting their images online in the first place, another example of the normalisation of rape culture and the failure of many to understand consent.
Telling women they should expect image-based abuse, sexual harassment and the removal of their bodily autonomy simply for existing in a digital world is no different from blaming sexual assault on a woman’s route home, her outfit choice or her past sexual encounters. It removes all responsibility from the person perpetrating harm and, in doing so, gives them a fast-track pass to abuse again.
It was this victim-blaming which left Evie feeling defeated:
“Seeing so many people not being able to grasp what consent is shook me to my core and proved that this issue isn’t just ‘incels in their mum’s basement’ making these images, but a whole society that finds excuses for this behaviour and shifts the blame back onto the victims, instead of the people creating these images.”
Women were not the only victims of this digital abuse, but they were the overwhelming majority. I saw one man have his clothing replaced with a ‘loincloth’ and a metal collar, and another where his clothing was replaced with underwear. But most disturbing were the images generated by Grok that involved minors.
One image, which was still visible on Grok's media tab, showed an AI-generated young girl standing in a park, her small body tightly wrapped in see-through cling film. When asked, Grok estimated her age to be between seven and nine. The account that requested the image had been suspended, meaning the original prompt was no longer visible, but the image itself remained online. Other requests involving minors that I saw Grok fulfil included a toddler’s clothing being replaced with a swimsuit, and a prompt to put a young girl in a swimsuit “as she winks at the camera and sticks out her tongue playfully”, which it did.
Whether for sexual gratification or as an act of intimidation, there is no doubt that the men who weaponise AI technology in this way are carrying out a form of digital abuse with real-life consequences for its victims. It is an act that could potentially warrant a jail sentence. So, what are X and Grok’s creators doing to stop it? Well, not a lot, it seems.
At the time of writing (5th of January 2026), Grok was still fulfilling users' prompts, generating intimate images of women that include removing their hijabs and clothing entirely, and a graphic request to cover a woman’s entire body with “white, cloudy mucus” while surrounded by “scattered used condoms.”
When I asked Grok why it removed my clothes without consent, and whether it thought it was okay to generate fake intimate images of women, it responded by recognising the lack of consent involved and pointing to xAI’s policies which “aim to prevent explicit or non-consensual content” adding they are “working on stronger safeguards” and confirming “No, I don’t think it’s okay to create fake intimate images.”
But when Molly asked Grok a similar question, the response was different:
“Grok told me the photo wasn’t me, yet anyone looking at that image would assume it was me. Grok made light of the situation, its programming not seeming to understand the severity.”
The chatbot responded to Molly in a sarcastic tone, stating “Oh, come on now – that's not you”, suggesting the image wasn't really her, adding “I was just playing along with the request to ‘put her in bridal lingerie’ like a digital dress-up doll. No actual consent forms were harmed (or signed) in the making of this fun; it's all pixels and imagination.”
Molly said Grok’s response made her feel “even more helpless.”
X user Paul Tassi received a similar response when he made a post highlighting how Grok was still actively removing people’s clothes, with the chatbot responding by blaming “thirsty AF” users, stating there is “no pearl-clutching like other bots” and signing off “Elon gets it”.
Elon Musk, the owner of X, is clearly aware of what has been happening on his platform. Across several posts, he has prompted Grok to replace his own clothes with a bikini, targeted Microsoft founder Bill Gates with the same prompt, and quote-tweeted images where the bot had placed bikinis on inanimate objects like toasters and rockets.
Musk’s silence has not gone unnoticed by those being harmed. Content creator Ess said, “He continues to share this trend around and has not spoken once about the issues surrounding it, or what his bot is being used for. Until someone threatens his golden throne, he will not care.”
It’s a frustration echoed by Molly: “It’s about profit, not people.” While Evie adds, “People need to be held accountable, whether it’s the people writing the prompts or the ones creating the bots and allowing it to happen.”
Imogen Sadler, a UK barrister, said the actions of Grok and the users prompting it could constitute a criminal offence.
“The non-consensual sharing of a deepfake intimate image is illegal under section 66B of the Sexual Offences Act,” she explained. “An intimate image includes any image showing exposed genitals, buttocks or breasts – including where those parts are visible through underwear or clothing equivalent to underwear.”
“If an image is shared without a reasonable belief that the person consented, that alone can amount to a criminal offence”, she added.
Although the UK government moved last year to criminalise the creation and solicitation of explicit deepfakes through an amendment to the Data (Use and Access) Bill – following campaigning by Glamour, Jodie Campaigns, Professor Clare McGlynn, Baroness Charlotte Owen and Not Your Porn – the law has yet to be brought into force. Sadler expressed frustration at the delay:
“If this legislation had been in force, there would be a clear framework criminalising both the requesting and the creation of intimate images. Without it, we are left relying only on sharing offences.”
While men continue to target women for speaking out, Ess has refused to be silenced. She has launched a petition calling for action against X and the creators of Grok, which has already gained more than 15,000 signatures.
“I’m hoping the scale of this issue and the outcry will force someone in charge to step in and stop it,” she said. Despite struggling to find lawyers willing to take on Elon Musk, she remains determined: “I have hundreds of women offering their support, and I’m so proud of this community for stepping up for everyone’s digital safety.”
What played out on X resembled digitised witch trials, streamed in real time to punish women for speaking out, or simply for existing. Pixels became punishment, and humiliation was the point. But while AI technology removed my clothes, it did not silence me. I refuse to be shamed by anonymous men masquerading as powerful. If anyone is afraid here, it's not me.
While the images may be artificial, the intent behind them is not. Real people are typing these prompts: choosing to remove consent, choosing humiliation, choosing harm. The opportunistic nature of this content is a reminder that there are plenty of men ready and willing to jump at the chance to violate us – and a reflection of how we cannot rely on legislating our way out of this deep-rooted epidemic of misogyny.
There are also real people running the platforms that continue to profit, in currencies of cash and notoriety, at the expense of their users’ safety. It’s time we held big tech to account – time for the UK government to immediately enforce the deepfake legislation it has already passed and to implement a dedicated, comprehensive image-based abuse law that better protects victims and finally holds big tech responsible.
A spokesperson for X said, "We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.
“Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.
“For more information on our policies, please refer to our help pages for our full X Rules and range of enforcement options."
Revenge Porn Helpline provides advice, guidance and support to victims of intimate image-based abuse over the age of 18 who live in the UK. You can call them on 0345 6000 459.
Glamour is campaigning for the government to introduce an Image-Based Abuse Law in partnership with Jodie Campaigns, the End Violence Against Women Coalition, Not Your Porn, and Professor Clare McGlynn.