In his polarizing “Techno-Optimist Manifesto” last year, venture capitalist Marc Andreessen listed a number of enemies of technological progress. Among them were “tech ethics” and “trust and safety,” the term for work on online content moderation, which he said had been used to subject humanity to “a mass demoralization campaign” against new technologies such as artificial intelligence.
Andreessen’s declaration drew both public and quiet criticism from people working in those fields—including at Meta, where Andreessen is a board member. Critics saw his screed as misrepresenting their work to keep internet services safer.
On Wednesday, Andreessen offered some clarification: When it comes to his 9-year-old son’s online life, he’s in favor of guardrails. “I want him to be able to sign up for internet services, and I want him to have like a Disneyland experience,” the investor said in an onstage conversation at a conference for Stanford University’s Human-Centered AI research institute. “I love the internet free-for-all. Someday, he’s also going to love the internet free-for-all, but I want him to have walled gardens.”
Despite the broadsides in his manifesto, Andreessen suggested it was fine for tech companies, and by extension their trust and safety teams, to set and enforce rules for the content allowed on their services.
“Each company has a fair amount of freedom to determine these things,” he said. “For example, Disney has different rules of conduct in Disneyland compared to the streets of Orlando.” Tech companies, he noted, need trust and safety teams if only to avoid government penalties for allowing child sexual abuse imagery and other prohibited content.
Andreessen said content moderation becomes a problem when a handful of companies dominate cyberspace and collude with governments to impose restrictions that apply across the internet. That scenario, he said, would have “potent societal consequences,” though he did not specify what they might be. “A pervasive environment of censorship and control is a genuine problem,” he added.
The solution he proposed is fostering competition in the tech industry and a diversity of approaches to content moderation. Even so, he acknowledged the weight of platform and company policies: “What happens on these platforms, in these systems, and in these companies truly matters.”
Andreessen did not mention X, the social media platform controlled by Elon Musk and formerly known as Twitter, in which his firm Andreessen Horowitz invested when Musk took over in late 2022. Musk went on to dismiss much of the company’s trust and safety staff, dissolve Twitter’s AI ethics team, loosen content rules, and reinstate users who had previously been permanently banned.
Those moves, combined with Andreessen’s investment and his manifesto, fostered the perception that the investor favored few restrictions on online speech. His latest comments came during a conversation with Fei-Fei Li, codirector of Stanford’s HAI, titled “Removing Impediments to a Robust AI Innovative Ecosystem.”
During the conversation, Andreessen repeated arguments he has made over the past year: that slowing AI development, whether through regulation or other measures recommended by some AI safety advocates, would repeat what he considers the US’s mistaken retreat from investment in nuclear energy decades ago.
Nuclear power, he said, could have been a “silver bullet” for today’s concerns about carbon emissions from other sources of electricity. Instead the US pulled back, and climate change has not been contained as well as it could have been. Andreessen described that posture as excessively cautious, dominated by the mindset that “if there are possible harms, there should consequently be regulations, limits, halts, and strictures.”
For similar reasons, Andreessen said he wants to see greater government investment in AI infrastructure and research, and a freer rein for AI experimentation, including fewer restrictions on open-source AI models. Of course, if he wants his son to have the Disneyland experience of AI, some rules, whether from governments or trust and safety teams, may be necessary too.