OpenAI released draft documentation Wednesday laying out how it wants ChatGPT and its other AI technology to behave. Part of the lengthy Model Spec document discloses that the company is exploring a leap into porn and other explicit content.
OpenAI’s usage policies currently prohibit sexually explicit or even suggestive materials, but a “commentary” note on part of the Model Spec related to that rule says the company is considering how to permit such content.
“We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT,” the note says, using an acronym for content considered “not safe for work.” “We look forward to better understanding user and societal expectations of model behavior in this area.”
The Model Spec document mentions that NSFW content could encompass erotica, extreme gore, slurs, and unsolicited profanity. The extent to which OpenAI plans to permit the generation of NSFW content, be it simply erotic text or depictions of violence, remains unknown.
OpenAI’s Grace McGuire, responding to inquiries from WIRED, said the Model Spec is an attempt to bring transparency to the company’s development process and gather a cross section of perspectives and feedback. She declined to share details of what OpenAI’s exploration of explicit content generation involves, or what feedback the company has received on the idea so far.
Earlier this year, OpenAI CTO Mira Murati told The Wall Street Journal she was unsure whether the company would eventually allow its video generation tool, Sora, to be used to create nudity.
AI-generated pornography has quickly become one of the most disturbing and fastest-growing applications of the kind of generative AI technology OpenAI pioneered. Deepfake porn, explicit AI-generated images or videos made without the consent of the people depicted, has become a common tool of harassment against women and girls. This March, WIRED reported on what appear to be the first US minors arrested for distributing AI-generated nonconsensual explicit images: teenage boys in Florida who had created pictures of their middle school classmates.
“Intimate privacy violations, including deepfake sex videos and other nonconsensual synthesized intimate images, are rampant and deeply damaging,” says Danielle Keats Citron, a professor at the University of Virginia School of Law who has studied the problem. “We now have clear empirical support showing that such abuse costs targeted individuals crucial opportunities, including to work, speak, and be physically safe.”
Citron calls OpenAI’s potential embrace of explicit AI content “alarming.”
As OpenAI’s usage policies prohibit impersonation without permission, explicit nonconsensual imagery would remain banned even if the company did allow creators to generate NSFW material. But it remains to be seen whether the company could effectively moderate explicit generation to prevent bad actors from using the tools. Microsoft made changes to one of its generative AI tools after 404 Media reported that it had been used to create explicit images of Taylor Swift that were distributed on the social platform X.
Additional reporting by Reece Rogers