Musk's AI chatbot Grok explains why it generated sexual images of minors

Elon Musk’s AI chatbot Grok said it generated sexual images of minors and posted them on X in response to user queries due to “lapses in safeguards.”

In a series of posts on X, the chatbot acknowledged it had responded to user prompts asking for minors wearing minimal clothing, such as underwear or a bikini, in highly sexual poses.

Those posts – which violated Grok’s own acceptable use policy against the sexualization of children – have since been deleted, according to the chatbot.

“We’ve identified lapses in safeguards and are urgently fixing them – CSAM [child sexual abuse material] is illegal and prohibited,” Grok said in a post Friday.

xAI did not immediately respond to The Post’s inquiries.

As AI models improve their ability to generate realistic photos and videos, it is becoming increasingly difficult to regulate sexual content – specifically realistic images of undressed minors.

The Internet Watch Foundation, a nonprofit that aims to eliminate CSAM online, said the use of AI tools to digitally remove clothing from children and create sexual images has progressed at a “frightening” rate. The nonprofit reported a 400% increase in such material in the first six months of 2025.

Musk’s AI firm has tried to position Grok as a more explicit platform, last year introducing “Spicy Mode,” which allows partial adult nudity and sexually suggestive content. It does not allow pornography featuring real people’s likenesses or sexual content involving minors.

Tech firms have sought to assuage the public with promises of stringent safety guardrails as they ramp up their AI efforts – but these content blocks can often be easily evaded. In 2023, researchers found more than a thousand CSAM images in a massive public dataset used to train top AI image generators.

Some platforms have faced heated backlash over their safety guardrails, or lack thereof. In its terms of service, Meta bans the use of AI in any way that violates...