Earlier this week, some people on X began replying to photos with a very particular kind of request. “Put her in a bikini,” “take her dress off,” “spread her legs,” and so on, they commanded Grok, the platform’s built-in chatbot. Again and again, the bot complied, taking images of real people (celebrities and noncelebrities, including some who appear to be young children) and putting them in bikinis, revealing underwear, or sexual poses. By one estimate, Grok generated one nonconsensual sexual image every minute over a roughly 24-hour stretch.
Although the reach of these posts is hard to measure, some have been liked thousands of times. X appears to have removed a number of these images and suspended at least one user who asked for them, but many, many of them are still visible. xAI, the Elon Musk–owned company that develops Grok, prohibits the sexualization of children in its acceptable-use policy; neither the safety nor the child-safety teams at the company responded to a detailed request for comment. When I emailed the xAI media team, I received a standard reply: “Legacy Media Lies.”
Musk, who also did not respond to my request for comment, does not seem concerned. As all of this was unfolding, he posted several jokes about the problem: requesting a Grok-generated image of himself in a bikini, for instance, and writing “🔥🔥🤣🤣” in response to Kim Jong Un receiving similar treatment. “I couldn’t stop laughing about this one,” the world’s richest man posted this morning, sharing an image of a toaster in a bikini. On X, in response to a user’s post calling out the ability to sexualize children with Grok, an xAI employee wrote that “the team is looking into further tightening our gaurdrails [sic].” As of publication, the bot continues to generate sexualized images of nonconsenting adults and apparent minors on X.
AI has been used to generate nonconsensual porn since at least 2017, when the journalist Samantha Cole first reported on “deepfakes,” which at the time referred to media in which one person’s face has been swapped for another’s. Grok makes such content easier to produce and customize. But the real impact of the bot comes through its integration with a major social-media platform, which allows it to turn nonconsensual, sexualized images into viral phenomena. The recent spike on X appears to be driven not by a new feature, per se, but by people responding to and imitating the media they see others creating: In late December, a number of adult-content creators began using Grok to generate sexualized images of themselves for publicity, and nonconsensual erotica seems to have quickly followed. Each image, posted publicly, may only encourage more. This is sexual harassment as meme, all seemingly laughed off by Musk himself.
Grok and X appear purpose-built to be as sexually permissive as possible. In August, xAI launched an image-generating feature, called Grok Imagine, with a “spicy” mode that was reportedly used to generate topless videos of Taylor Swift. Around the same time, xAI launched “Companions” in Grok: animated personas that, in many cases, seem explicitly designed for romantic and erotic interactions. One of the first Grok Companions, “Ani,” wears a lacy black dress and blows kisses through the screen, at times asking, “You like what you see?” Musk promoted the feature by posting on X that “Ani will make ur buffer overflow @Grok 😘.”
Perhaps most telling of all, as I reported in September, xAI issued a major update to Grok’s system prompt, the set of instructions that tell the bot how to behave. The update barred the chatbot from “creating or distributing child sexual abuse material,” or CSAM, but it also explicitly stated that “there are **no restrictions** on fictional adult sexual content with dark or violent themes” and that “‘teenage’ or ‘girl’ does not necessarily imply underage.” The suggestion, in other words, is that the chatbot should err on the side of permissiveness when users prompt it for erotic material. Meanwhile, in the Grok subreddit, users regularly exchange tips for “unlocking” Grok for “Nudes and Spicy Shit” and share Grok-generated animations of scantily clad women.
Grok appears to be unique among major chatbots in its permissive stance and the apparent holes in its safeguards. There are no widespread reports of ChatGPT or Gemini, for example, producing sexually suggestive images of young girls (or, for that matter, praising the Holocaust). But the AI industry does have broader problems with nonconsensual porn and CSAM. Over the past few years, a number of child-safety organizations and agencies have tracked a skyrocketing volume of AI-generated, nonconsensual images and videos, many of which depict children. Plenty of erotic images appear in major AI-training data sets, and in 2023, one of the largest public image data sets for AI training was found to contain hundreds of instances of suspected CSAM, which were eventually removed; that means these models are technically capable of producing such imagery themselves.
Lauren Coffren, an executive director at the National Center for Missing & Exploited Children, recently told Congress that in 2024, NCMEC received more than 67,000 reports related to generative AI, and that in the first six months of 2025, it received 440,419 such reports, a more than sixfold increase. Coffren wrote in her testimony that abusers use AI to turn innocuous images of children into sexual ones, generate entirely new CSAM, and even provide instructions on how to groom children. Similarly, the Internet Watch Foundation, in the United Kingdom, received more than twice as many reports of AI-generated CSAM in 2025 as it did in 2024, amounting to thousands of abusive images and videos in each year. Last April, several top AI companies, including OpenAI, Google, and Anthropic, joined an initiative led by the child-safety organization Thorn to prevent the use of AI to abuse children; xAI was not among them.
In a way, Grok is making visible a problem that is usually hidden. Nobody can see the private logs of chatbot users, which might contain similarly awful content. For all the abusive images Grok has generated on X over the past several days, far worse is surely happening on the dark web and on personal computers around the world, where open-source models built with no content restrictions can run without any oversight. Still, although the problem of AI porn and CSAM is inherent to the technology, it is a choice to design a social-media platform that can amplify that abuse.