Elon Musk started the week by posting testily on X about his struggles to set up a new laptop running Windows. He ended it by filing a lawsuit accusing OpenAI of recklessly developing human-level AI and handing it over to Microsoft.
Musk’s lawsuit is filed against OpenAI and two of its executives, CEO Sam Altman and president Greg Brockman, both of whom worked with the rocket and car entrepreneur to found the company in 2015. It claims that the pair have breached the original “Founding Agreement” worked out with Musk, which it says pledged the company to develop AGI openly and “for the benefit of humanity.”
Musk’s suit alleges that the for-profit arm of the company, established in 2019 after he parted ways with OpenAI, has created AGI without proper transparency and licensed it to Microsoft, which has invested billions into the company. It demands that OpenAI be forced to release its technology openly and that it be barred from using it to financially benefit Microsoft, Altman, or Brockman.
A crucial portion of the case rests on the bold but contestable technical claim that OpenAI has created artificial general intelligence (AGI), a term generally used for machines that can broadly match or exceed human intelligence.
The lawsuit notably alleges that GPT-4, OpenAI’s large language model, is an instance of AGI. As evidence, it cites studies showing that the system can pass the Uniform Bar Exam and other standardized tests, thereby exceeding some key human capabilities. “GPT-4 is not merely capable of reasoning. It is more adept at reasoning than the average human,” the lawsuit alleges.
While GPT-4’s launch in March 2023 was greeted as a significant innovation, its AGI credentials are not accepted by all in the AI community. “GPT-4’s abilities might be general, but it is certainly not AGI as people commonly conceive of the term,” says Oren Etzioni, a professor emeritus at the University of Washington and AI specialist.
Christopher Manning, a professor at Stanford University specializing in AI and language, calls the AGI claim in Musk’s lawsuit a “wild assertion”. Manning notes that definitions of AGI vary widely within the AI field. Some experts set the bar lower, arguing that GPT-4’s ability to perform a wide range of tasks is sufficient grounds for classifying it as AGI. Others reserve the term for algorithms capable of outsmarting most or perhaps all humans across the board. “Under this definition, it’s evident that we don’t yet possess AGI, and are still a long way off from it,” he says.
GPT-4 won notice—and new customers for OpenAI—because it can answer a wide range of questions, while older AI programs were generally dedicated to specific tasks like playing chess or tagging images. Musk’s lawsuit refers to assertions from Microsoft researchers, in a paper from March 2023, that “given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” Despite its impressive abilities, GPT-4 still makes mistakes and has significant limitations to its ability to correctly parse complex questions.
“I have the sense that most of us researchers on the ground think that large language models [like GPT-4] are a very significant tool for allowing humans to do much more but that they are limited in ways that make them far from stand-alone intelligences,” adds Michael Jordan, a professor at UC Berkeley and an influential figure in the field of machine learning.
Jordan adds that he prefers to avoid the term AGI entirely because it is so vague. “I’ve never found Elon Musk to have anything to say about AI that was very calibrated or based on research reality,” he adds.
Another difficulty for Musk’s lawsuit is that OpenAI has long used its own definition of AGI, describing it as “a highly autonomous system that outperforms humans at most economically valuable work.” GPT-4 seems far short of that mark today.
Musk has offered different definitions for AGI in the past that would disqualify GPT-4 for that honor. In December 2022, shortly after he declared OpenAI’s newly launched ChatGPT “scary good,” the entrepreneur suggested that an algorithm would need to “invent amazing things or discover deeper physics” to deserve the moniker. “I’m not seeing that potential yet,” Musk wrote.
OpenAI’s first release of ChatGPT was built on top of an AI model called GPT-3. It and GPT-4, which powers the premium version of ChatGPT today, are the latest in a series of programs pioneered by OpenAI known as large language models. They learn to predict the text that should follow a string by training on huge amounts of text sourced from the web, books, and other places. Although GPT-4—and rivals such as Google’s Gemini—have stunned AI researchers with their flexibility and power, they remain prone to fabricating information, blurting out unpleasantries, or becoming confused and incoherent.
Recognizing GPT-4 as AGI is a central part of Musk’s lawsuit. It underpins both the claim that OpenAI has betrayed its founding principles and the claim that the for-profit arm violated its licensing agreement with Microsoft, which says that the company can only receive “pre-AGI” technology.
Mark Lemley, a professor at Stanford Law School, is doubtful of both the AGI claim and the suit’s broader legal merits. While OpenAI does seem less open and has become more profit-focused, it is far from clear what rights that gives Musk.
“Notably, the complaint does not include any contract between Musk and the company or the text of any rights he has to enforce those principles or get his money back,” Lemley says. “If those documents existed I would expect they would be prominently featured in the complaint.” Although the suit refers to a “Founding Agreement,” it cites only an email between Musk and Altman before the company was founded and its brief certificate of incorporation, not any specific contract.
The lawsuit may stumble on other grounds, like the claims about OpenAI’s creation of a for-profit arm. Although that structure is unusual for a technology company, many corporations are controlled by nonprofits.
“I’m really skeptical that the case is meritorious or that it has any chance of success,” says Samuel Brunson, an associate dean at Loyola University Chicago who teaches about nonprofit law. “In large part, Musk is arguing that OpenAI’s pursuit of profits and its coinvestment with for-profit entities has caused it to stop being a nonprofit. And that’s just wrong.”