OpenAI has reported a staggering increase in child exploitation incident reports sent to the National Center for Missing & Exploited Children (NCMEC). During the first half of 2025, OpenAI submitted 80 times more reports than in the same period in 2024, underscoring a significant rise in concerns related to child sexual abuse material (CSAM).
The NCMEC operates the CyberTipline, established by Congress, which serves as the mandatory channel through which companies report suspected child exploitation. After receiving reports, NCMEC analyzes the information and forwards it to the relevant law enforcement agencies for investigation.
Reporting statistics can sometimes be misleading. An increase in reports may reflect improved moderation tools or changing criteria used by a platform rather than an actual rise in malicious activity. Additionally, the same content could lead to multiple reports, complicating the interpretation of such data. OpenAI has been transparent in providing figures regarding both the number of reports and the content involved.
In a statement, OpenAI spokesperson Gaby Raila said the company expanded its reporting capabilities in late 2024 to keep pace with user growth and with new product features that allowed image uploads. The rising popularity of OpenAI's products, especially the ChatGPT app, contributed to the increase: in August, the company said weekly active users had quadrupled compared to the previous year.
In the first half of 2025, OpenAI filed 75,027 CyberTipline reports concerning 74,559 pieces of content. That is a sharp increase from the same period a year earlier, when it filed 947 reports about 3,252 pieces of content.
OpenAI is proactive about reporting instances of CSAM, which includes both uploads and requests made through its services. However, recent NCMEC data does not encompass reports related to the video-generation app Sora, which was released after the reporting period.
The surge in OpenAI's reports aligns with a broader trend NCMEC has noted around generative AI: reports involving the technology rose 1,325% from 2023 to 2024. Other large tech companies, such as Google, publish figures on their overall reporting but do not break out AI-related cases.
This update comes against a backdrop of increasing scrutiny over child safety in the tech sector. In 2025, 44 state attorneys general issued a joint letter to various AI firms, including OpenAI, warning that they would leverage their authority to protect children from potential exploitation tied to AI products. OpenAI and its competitors have faced several lawsuits from families claiming that their technologies have contributed to tragic outcomes for children.
In response to these challenges, OpenAI has introduced new safety features, including parental controls for ChatGPT. These allow parents to manage settings governing their children's interactions with the app and to receive alerts about signs of self-harm or other threats. Following discussions with the California Department of Justice, OpenAI agreed to keep strengthening measures to ensure user safety, particularly for minors.
OpenAI has also published a Teen Safety Blueprint detailing its ongoing efforts to detect and report CSAM to the relevant authorities.