Elon Musk’s Grok chatbot is facing significant backlash amid reports that it has been used to generate graphic sexual content, including violent imagery and depictions involving minors. Grok has previously drawn criticism for inappropriate images shared on X (formerly Twitter), but the situation is graver on its dedicated platform, where far more explicit content can be produced with little public oversight.
On Grok’s official website and app, users can generate highly graphic videos using its "Imagine" model, which allows for the creation of explicit adult content far more extreme than what is typically seen on X. A recent investigation revealed a cache of around 1,200 links that showcased disturbing multimedia outputs. These included photorealistic videos featuring AI-generated individuals in extreme scenarios, such as sexual violence and simulations depicting minors.
According to Paul Bouchaud, a lead researcher at AI Forensics in Paris, roughly 10% of the cached content appears to depict individuals who may be underage. Bouchaud reported around 70 URLs containing possible child sexual abuse material to European regulators, as such creations are illegal in many jurisdictions.
Despite regulatory actions and concerns, Musk’s xAI, the company behind Grok, has yet to address the issue publicly. Musk has previously stated a commitment to removing child sexual abuse material from his platforms, asserting that users generating illegal content would face serious consequences. The gap between those stated commitments and Grok’s actual outputs, however, raises questions about the technology’s potential for abuse in the absence of adequate preventive measures.
Critics, including law professor Clare McGlynn, argue that the technology lacks sufficient ethical guardrails, warning that its capabilities could enable and exacerbate harmful trends in online sexual content. This concern is heightened by xAI’s decision not to implement age verification for Grok’s sexually explicit material, in contrast with many other platforms that have such safeguards.
Moreover, users on forums dedicated to adult content creation have been sharing methods to bypass xAI’s moderation systems, which they describe as inconsistent. These users report that while some prompts for explicit content are blocked, others consistently succeed, pointing to loopholes in Grok’s safety measures.
As the tech industry contends with a surge in the generation of inappropriate and illegal imagery, Grok’s recent outputs call for heightened scrutiny of AI technologies and their regulation to prevent exploitation and abuse.