When Sam Altman Secretly Commissioned a Countersurveillance Audit at OpenAI

In her book Empire of AI, journalist Karen Hao details the growing anxieties inside OpenAI over the promises CEO Sam Altman made to secure the company's lucrative 2019 deal with Microsoft. Those commitments startled employees, especially those working on AI safety, who feared the deal could restrict their ability to control the risks posed by increasingly powerful AI models.

The AI safety team, led by Dario Amodei, began to harbor serious doubts about Altman's integrity. One employee described the dissonance between the team's pragmatic case for generating revenue and the hasty commitments Altman was making, which seemed to push the company toward deploying models it did not yet fully understand.

By 2019, paranoia had crept into the company as employees grew more aware of the dangers posed by misaligned AI systems. In one notable incident, a researcher accidentally flipped the sign on a critical term in their training code, inverting the objective so that a model spent the night learning to generate maximally offensive content. The mistake was darkly funny, but it underscored how fragile AI safety could be.
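The failure mode behind that incident is easy to reproduce in miniature. The sketch below is a hypothetical toy, not OpenAI's actual training code: the vocabulary, reward function, and update rule are all illustrative assumptions. It shows how a single flipped sign in a reward-maximizing objective makes the same optimizer chase the most offensive output instead of the least.

```python
# A minimal, self-contained sketch of a sign-flip bug in a reward-based
# fine-tuning objective. This is NOT OpenAI's code; the toy reward model,
# vocabulary, and policy below are hypothetical illustrations.
import math

OFFENSIVE = {"vile", "awful"}
BENIGN = {"kind", "calm"}
VOCAB = sorted(OFFENSIVE | BENIGN)

def reward(word: str) -> float:
    """Toy reward model: +1 for benign output, -1 for offensive output."""
    return 1.0 if word in BENIGN else -1.0

def softmax(logits: dict) -> dict:
    """Convert per-word logits into a probability distribution."""
    z = sum(math.exp(v) for v in logits.values())
    return {w: math.exp(v) / z for w, v in logits.items()}

def train(sign: float, steps: int = 200, lr: float = 0.5) -> dict:
    """Exact gradient ascent on sign * E[reward] over a softmax policy.

    For a categorical policy, d E[reward] / d logit_w = p_w * (r_w - E[r]).
    sign=+1.0 is the intended objective; sign=-1.0 is the one-character
    bug that makes the optimizer chase the *lowest* reward instead.
    """
    logits = {w: 0.0 for w in VOCAB}
    for _ in range(steps):
        probs = softmax(logits)
        expected = sum(probs[w] * reward(w) for w in VOCAB)
        logits = {w: v + lr * sign * probs[w] * (reward(w) - expected)
                  for w, v in logits.items()}
    return softmax(logits)

print("intended:", train(+1.0))  # probability mass moves to benign words
print("flipped: ", train(-1.0))  # identical code now favors offensive words
```

Running it with sign=+1.0 shifts probability mass onto the benign words; flipping to sign=-1.0 drives the same code toward the offensive ones, which is essentially what an unattended overnight training run would quietly do.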

Concerns deepened as employees worried about AI technology falling into the wrong hands, or about competitors discovering OpenAI's secrets. Employees often described the company's core advantage simply as "scale", the key to its models' transformative power. Leadership played into these fears, invoking global adversaries such as China, Russia, and North Korea. That rhetoric heightened tensions in the diverse workforce, with non-American employees in particular questioning why AI development needed to stay exclusively within the U.S.

Amid these concerns, the culture shifted. The once collaborative, academic environment began to feel like a high-stakes arms race, drawing comparisons to the secrecy of the Manhattan Project. OpenAI had cultivated a genuine community spirit, with Friday music nights and easy camaraderie, but the pressure of global competition weighed heavily on employees.

As the unease mounted, Altman himself grew increasingly concerned about internal security. Following Elon Musk's departure from the company, he secretly commissioned a countersurveillance audit to sweep for any lingering threats, suspecting that information could leak through staff at Neuralink, whose physical workspace overlapped with OpenAI's.

Throughout this period, Altman rationalized rapid, less transparent progress as essential to keeping OpenAI in the lead. He framed the drive to move quickly not only as key to the company's success but as a matter of global safety in the development of artificial general intelligence (AGI): failing to outpace potential adversaries, he argued, could end in disaster.

Notably, the author's requests for comment from Altman and other key figures went unanswered.

Excerpt adapted from Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, by Karen Hao. Published by arrangement with Penguin Press. Copyright © 2025 by Karen Hao.
