An unsecured database associated with the AI image-generation firm GenNomis exposed a trove of sensitive information, including tens of thousands of explicit images, some of them likely illegal. The leak, discovered by security researcher Jeremiah Fowler, contained more than 95,000 records of prompts and generated images, including disturbing pictures that appeared to depict celebrities such as Ariana Grande, the Kardashians, and Beyoncé as children.
The database revealed alarming uses of AI technology, particularly the generation of child sexual abuse material (CSAM). As researchers have noted, AI-generated CSAM has surged in recent years, coinciding with the rise of deepfake technology and related platforms. Fowler remarked on the horrific nature of the exposed data, expressing concern both as a researcher and as a parent.
Fowler found the unprotected database in early March and promptly alerted GenNomis and its parent company, AI-Nomis, about the illegal content. Although GenNomis secured the database shortly thereafter, it did not communicate with Fowler regarding the findings. Shortly after WIRED’s inquiry, both companies’ websites appeared to be taken down, returning 404 error pages.
Experts like Clare McGlynn, a law professor specializing in online abuse, emphasized that the incident underscores a troubling demand for tools that facilitate the generation of abusive imagery. The AI tools previously advertised on GenNomis’s site allowed users to generate new images and alter existing ones, capabilities that could be misused in ways that cross serious legal and ethical lines.
Commenting on the implications of the exposed database, Fowler noted the disturbing ease with which such harmful content can be created. The generated images ranged from AI-generated pornography of adults to explicit images of minors. In some instances, photographs of real people appeared to have been manipulated into explicit content.
The situation reveals critical shortcomings in how such platforms are moderated. Although the site’s policies claimed to prohibit explicit content, Fowler argued that the absence of adequate safeguards allowed such material to proliferate easily. Experts suggest that stronger pressure is needed on all entities involved in the creation and hosting of non-consensual imagery to prevent further misuse of AI technologies.
Reflecting on the broader context, Derek Ray-Hill, interim CEO of the Internet Watch Foundation, noted that the number of webpages containing AI-generated CSAM has quadrupled since 2023. With the increasing sophistication of this content, it has become alarmingly simple for criminals to create and disseminate explicit materials at scale.
As the technology evolves, the ethical and legal frameworks surrounding it must also adapt. These trends in AI-generated content call for urgent dialogue on the responsibilities of creators, platforms, and regulators to combat the proliferation of harmful or illegal materials.