Are tech giants doing enough – if anything – to stop AI-generated child abuse?

We’re all facing down the future of sexual violence in app form.
Image: Getty Images, Death to Stock / Treatment: Conde Nast Publications

This article references child sexual abuse.

I know we all feel the same rage this week. It's simmering into a boiling fury in me as I watch another tech giant embroiled in a scandal, exploiting and humiliating women on an international scale. If you haven’t heard already, Elon Musk’s AI creation, Grok, has been under fire for generating sexualised AI images of women on X – at the behest of other users. But there’s another layer to this that must be addressed: Grok’s deepfake imagery is emblematic of the future of childhood sexual abuse.

Today (15 Jan), X announced measures to prevent the Grok account from generating images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers. This comes after the image-generation feature was initially restricted only to paid users.

X's delay in making these vital changes has shown a lack of concern for women's safety and consent. And we can't ignore the threat this technology poses to children. Numerous cases of images of underage girls being edited to put them in bikinis have already been reported; however, watchdogs like eSafety in Australia say that such images do not meet the threshold for child sexual abuse material (CSAM). (Sidenote: We do not use the term “child pornography”, and neither should anyone else, because it implies that the child has consented to appear in such material, which no child is capable of doing. Correct terminology is essential.)

In England and Wales, CSAM and online abuse increased by a staggering 26% last year, with 51,672 crimes recorded. Over 100 such crimes are reported to police every day, yet figures like Musk appear to have no issue pressing ahead with technology that endangers children around the world. And it's not just images that are the problem. Research by the Internet Watch Foundation (IWF) revealed that reports of AI-generated CSAM have risen by 400% — 210 webpages were found to contain such content in the first six months of 2025. It's likely that AI-generated full-length videos are not far behind. The number of new images and videos rises every day, fuelled by the normalisation of apps like Grok violating the consent of adults and children alike.

I know, even in this, that there will still be cries of dissent. Those who allege that surely digital versions of these crimes are less damaging than perpetrators physically harming children. If only eradicating childhood sexual abuse were that easy. The same excuses were touted when a Japanese sex robot company was revealed to be making and delivering child sex dolls. They alleged that these dolls could prevent potential abusers from harming real children; however, research shows that possessing such dolls has no provable benefit in reducing urges. In fact, acting on such urges through a surrogate could make people more likely to carry them out on real children, because the doll becomes less effective over time. Return your eyes to out-of-control AI imagery, and you can see where the pattern will lead us: to more abuse of children.

And that’s already a threat we’re seeing play out. As more CSAM has become available, more crimes against children have been committed in person. Amongst the 122,678 child sexual abuse and exploitation offences recorded by the police in England and Wales in 2024, 58% were in-person offences. When irresponsible tech bros allow their technology to participate in the digital abuse of women and girls, it’s inevitable that this abuse will translate to physical harm in the real world. As much as these creeps want to live in an augmented reality where women and girls have no autonomy, we still live in a real world with living, breathing people who can be, and are being, harmed by these vile “technological advancements”.


Childhood sexual abuse is already a worldwide epidemic that governments are ill-equipped to confront. The normalisation of AI apps that empower lonely people (let's be honest, mostly men) to violate others from the comfort of their home will inevitably lead to the expansion of CSAM, an abhorrent thought that should terrify tech bros back into their caves for good. It won't, because their only goal is profit at the expense of society. But it’s up to all of us to challenge such apps, prevent their use, and educate people about the dangers they pose to us all. While it’s gratifying to see Ofcom and the government take action, it won’t work unless people collectively reject this technology.

We’re all facing down the future of sexual violence in app form. Technology invented to streamline human existence, with robots taking care of the boring stuff, has been redirected to become the new frontier of exploitation. Just look at the downhill trajectory of the founder of OpenAI, the company behind ChatGPT. The man who claims that his technology will one day help cure cancer is now excitedly announcing that ChatGPT will soon have ‘erotica’ functions for adults. How far we’ve come, yay us.

The lie is obvious: AI technology is not here to help us, not with people like Elon Musk and Sam Altman at the forefront of its development. People already carry out sexual violence against women and girls with impunity around the world; the last thing they need is a new weapon to embolden their abuse. Right now, it feels like AI is here to keep the fires of patriarchy burning and to undo decades of progress in women's and children's rights. I survived childhood sexual abuse when we still had dial-up internet, and it nearly killed me. I cannot imagine how children today will survive in a world where their abuse can be beamed to anyone on the planet, or invented entirely at the touch of a button.

Every generation of survivors wants theirs to be the last one to endure this, to see a world where sexual violence of any kind is eradicated, or at the very least deemed illegal and immoral. Instead, I’m waking up in a world that won’t even classify the digital undressing of children, even if it is only into a bikini, as child sexual abuse. AI is here to stay; we’re all resigned to that fact, but we have to start pushing back before we’re all consigned to a future in which genuine intimacy is a fantasy and consent is a relic of the distant past.


X's statement on Grok in full:

Safety Commitment
We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.

We take action to remove high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity, taking appropriate action against accounts that violate our X Rules. We also report accounts seeking Child Sexual Exploitation materials to law enforcement authorities as necessary.

Updates to @Grok Account
We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.

Additionally, image creation and the ability to edit images via the Grok account on the X platform are now only available to paid subscribers. This adds an extra layer of protection by helping to ensure that individuals who attempt to abuse the Grok account to violate the law or our policies can be held accountable.

Geoblock update
We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it’s illegal.

X Rules

This does not change our existing safety protocol that all AI prompts and generated content posted to X must strictly adhere to our X Rules. However content is created or whether users are free or paid subscribers, our Safety team are working around the clock to add additional safeguards, take swift and decisive action to remove violating and illegal content, permanently suspend accounts where appropriate, and collaborate with local governments and law enforcement as necessary.

For more information on our policies, please refer to our X Rules and range of enforcement options.

The rapid evolution of generative AI presents challenges across the entire industry. We are actively working with users, our partners, governing bodies and other platforms to address issues more rapidly as they arise.


For more information and support around child sexual abuse, whether recent or historic, you can visit the CSA Centre, Barnardo's, and Rape Crisis.
