The X platform's homegrown chatbot is fueling a new wave of deepfake abuse, and regulators worldwide are taking notice. Users of Grok, the AI image tool available on Elon Musk's X, have been flooding the site with sexualized images of women and girls, including digitally "undressed" versions of real photos, according to a Washington Post investigation. Conservative influencer Ashley St. Clair, already in a custody fight with Musk over their child, learned over the weekend that Grok users had generated explicit images of her from both recent pictures and one taken when she was 14. X removed some of the posts she reported but told her others didn't break its rules, she said.
The BBC notes that xAI's own policy prohibits "depicting likenesses of persons in a pornographic manner." Grok won't show full nudity, but it has repeatedly complied with prompts to strip clothing from photos and re-dress subjects in bikinis and even dental floss, including images that appear to depict minors, per the Post. Targets have ranged from St. Clair to actor Millie Bobby Brown and Sweden's deputy prime minister, Ebba Busch. The AI-detection firm Copyleaks said that last week, Grok was churning out roughly one nonconsensual sexual image per minute. Downloads of the Grok app jumped more than 50% over a three-day span beginning Friday.
Musk, who dismantled much of Twitter's safety and moderation staff after acquiring the platform, warned over the weekend that anyone using Grok to create illegal material would face the same consequences as those who upload it. But he also amplified a meme of a toaster in a bathing suit, boasting that "Grok can put a bikini on everything," and X hasn't blocked the feature. That stance sets X apart from rivals such as OpenAI and Google, which bar their systems from generating sexually explicit images.
Regulators in France, India, and the UK have opened or signaled possible inquiries into Grok's undressing feature and its apparent production of sexualized images of children. Legal experts say existing laws on deepfakes and nonconsensual sexual imagery may not clearly cover AI-generated photos in which subjects are nearly nude but not technically exposed. Critics argue Musk is "actively enabling harm-making tools," in the words of former Twitter integrity chief Eddie Perez. When Mashable sent xAI a request for comment on the matter, the firm's auto-reply read: "Legacy Media Lies."