Written by: Stephen Ornes
The original version of this story appeared in Quanta Magazine.
Two years ago, in a project called the Beyond the Imitation Game benchmark, or BIG-bench, 450 researchers compiled a list of 204 tasks designed to test the capabilities of large language models, which power chatbots like ChatGPT. On most tasks, performance improved predictably and smoothly as the models scaled up—the larger the model, the better it got. But on other tasks, the jump in ability wasn’t smooth: performance remained near zero for a while, then suddenly jumped. Other studies found similar leaps in ability.
The authors described this as “breakthrough” behavior; other researchers have likened it to a phase transition in physics, like when liquid water freezes into ice. In a paper published in August 2022, researchers noted that these behaviors are not only surprising but unpredictable, and that they should inform the evolving conversations around AI safety, potential, and risk. They called the abilities “emergent,” a word that describes collective behaviors that only appear once a system reaches a high level of complexity.
But things may not be so simple. A new paper by a trio of researchers at Stanford University posits that the sudden appearance of these abilities is just a consequence of the way researchers measure the LLM’s performance. The abilities, they argue, are neither unpredictable nor sudden. “The transition is much more predictable than people give it credit for,” said Sanmi Koyejo, a computer scientist at Stanford and the paper’s senior author. “Strong claims of emergence have as much to do with the way we choose to measure as they do with what the models are doing.”
We’re only now seeing and studying this behavior because of how large these models have become. Large language models train by analyzing enormous data sets of text—words from online sources including books, web searches, and Wikipedia—and finding links between words that often appear together. The size is measured in terms of parameters, roughly analogous to all the ways that words can be connected. The more parameters, the more connections an LLM can find. GPT-2 had 1.5 billion parameters, while GPT-3.5, the LLM that powers ChatGPT, uses 350 billion. GPT-4, which debuted in March 2023 and now underlies Microsoft Copilot, reportedly uses 1.75 trillion.
That rapid growth has brought an astonishing surge in performance and efficacy, and no one is disputing that large enough LLMs can complete tasks that smaller models can’t, including ones for which they weren’t trained. The trio at Stanford who cast emergence as a “mirage” recognize that LLMs become more effective as they scale up; in fact, the added complexity of larger models should make it possible to get better at more difficult and diverse problems. But they argue that whether this improvement looks smooth and predictable or jagged and sharp results from the choice of metric—or even a paucity of test examples—rather than the model’s inner workings.
Three-digit addition offers an example. In the 2022 BIG-bench study, researchers reported that with fewer parameters, both GPT-3 and another LLM named LaMDA failed to accurately complete addition problems. However, when GPT-3 was trained with 13 billion parameters, its ability changed as if with the flip of a switch. Suddenly, it could add—and LaMDA could, too, at 68 billion parameters. This suggests that the ability to add emerges at a certain threshold.
But the Stanford researchers point out that the LLMs were judged only on accuracy: Either they could do it perfectly, or they couldn’t. So even if an LLM predicted most of the digits correctly, it failed. That didn’t seem right. If you’re calculating 100 plus 278, then 376 seems like a much more accurate answer than, say, −9.34.
So instead, Koyejo and his collaborators tested the same task using a metric that awards partial credit. “We can ask: How well does it predict the first digit? Then the second? Then the third?” he said.
Koyejo credits the idea for the new work to his graduate student Rylan Schaeffer, who he said noticed that an LLM’s performance seems to change with how its ability is measured. Together with Brando Miranda, another Stanford graduate student, they chose new metrics showing that as parameters increased, the LLMs predicted an increasingly correct sequence of digits in addition problems. This suggests that the ability to add isn’t emergent—meaning that it undergoes a sudden, unpredictable jump—but gradual and predictable. They find that with a different measuring stick, emergence vanishes.
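To make that argument concrete, here is a minimal sketch, not code from the Stanford paper, of how the same smooth per-digit improvement can look abrupt under an exact-match metric. The parameter counts and the logistic curve below are illustrative assumptions, not measured values.

```python
import numpy as np

# Hypothetical sketch: suppose a model's chance of getting any single digit of a
# three-digit sum correct grows smoothly as the model scales up.
scales = np.logspace(8, 11, 20)  # assumed parameter counts, 1e8 to 1e11
p_digit = 1 / (1 + np.exp(-3 * (np.log10(scales) - 9.5)))  # smooth per-digit accuracy

# Exact match requires every digit of the (up to 4-digit) answer to be right at once,
# so it is a steep, nonlinear function of the smooth per-digit accuracy.
exact_match = p_digit ** 4       # looks like a sudden "emergent" jump
partial_credit = p_digit         # per-digit score: smooth and predictable

for n, em, pc in zip(scales, exact_match, partial_credit):
    print(f"{n:12.2e} params   exact-match={em:5.2f}   per-digit={pc:5.2f}")
```

Because exact match stays near zero until per-digit accuracy is already high and then rises steeply, it produces the "breakthrough" shape seen in the BIG-bench results, even though the underlying ability in this toy model improves gradually the whole time.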
Brando Miranda, Sanmi Koyejo, and Rylan Schaeffer have suggested that the “emergent” abilities of large language models are both predictable and gradual.
But other scientists point out that the study doesn’t entirely dispel the idea of emergence. For instance, it doesn’t explain how to predict which metrics will show abrupt improvement in an LLM, said Tianshi Li, a computer scientist at Northeastern University. “So in that sense, these abilities are still unpredictable,” she said. Others, such as Jason Wei, a computer scientist now at OpenAI who has compiled a list of emergent abilities and was an author on the BIG-bench paper, maintain that the earlier reports of emergence were sound because, for abilities like arithmetic, the right answer really is all that matters.
“There’s certainly an interesting discussion to be had here,” said Alex Tamkin, a research scientist at the AI startup Anthropic. The new paper skillfully breaks down multistep tasks to recognize the contributions of individual components, he said. “But this isn’t the whole picture. We can’t say that all of these jumps are a mirage. I still think the literature shows that even with one-step predictions or continuous metrics, you can have discontinuities, and as you increase model size you can still watch it improve in a jump-like way.”
And even if emergence in today’s LLMs can be explained away by a choice of measuring tools, that may not hold for tomorrow’s larger, more complex models. “When we grow LLMs to the next level, inevitably they will borrow knowledge from other tasks and other models,” said Xia “Ben” Hu, a computer scientist at Rice University.
This evolving view of emergence isn’t just an abstract question for researchers. For Tamkin, it bears directly on ongoing efforts to predict how LLMs will behave. “These technologies are so broad and so applicable,” he said. “I would hope the community uses this as a jumping-off point to keep emphasizing how important it is to build a science of prediction for these things. How do we not get surprised by the next generation of models?”
Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.