Arvind Narayanan, a computer science professor at Princeton University, is best known for calling out the hype surrounding artificial intelligence in his Substack, AI Snake Oil, written with PhD candidate Sayash Kapoor. The two authors recently released a book based on their popular newsletter about AI’s shortcomings.
But don’t get it twisted—they aren’t against using new technology. “It’s easy to misconstrue our message as saying that all of AI is harmful or dubious,” Narayanan says. He makes clear, in a conversation with WIRED, that his rebuke is not aimed at the software per se, but rather at the culprits who continue to spread misleading claims about artificial intelligence.
In AI Snake Oil, those guilty of perpetuating the current hype cycle are divided into three core groups: the companies selling AI, researchers studying AI, and journalists covering AI.
Companies claiming to predict the future with algorithms are positioned in the book as potentially the most fraudulent. Narayanan and Kapoor write that these predictive AI systems tend to hurt minorities and people in poverty first. They cite an algorithm once used by a local government in the Netherlands to flag likely welfare fraudsters: it wrongly targeted women and immigrants who did not speak Dutch.
The skepticism also extends to companies focused mainly on existential risks, like artificial general intelligence (AGI), the concept of a superpowerful algorithm that outperforms humans at labor. Narayanan says the prospect of contributing to AGI was a significant part of what drew him to computer science in the first place. The problem, the authors argue, comes when companies prioritize long-term risks over the harm their AI tools are causing people right now, a criticism often echoed by other researchers.
The authors also attribute much of the AI hype, and many of the misconceptions around it, to sloppy, non-reproducible research. Kapoor points to “data leakage,” where an AI model is tested on parts of its own training data, akin to handing students the exam answers in advance, as a problem across many fields that leads to overly optimistic claims about how well AI works.
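To make that failure mode concrete, here is a minimal, hypothetical sketch (not drawn from the book) of how testing a model on data it has already seen inflates its reported accuracy. The scikit-learn library and its bundled breast-cancer dataset are assumptions chosen purely for illustration.

# A sketch of the "data leakage" problem: scoring a model on its own
# training data makes it look far more accurate than it really is.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Leaky evaluation: train and "test" on the exact same examples.
leaky_model = RandomForestClassifier(random_state=0).fit(X, y)
print("Leaky accuracy:", leaky_model.score(X, y))  # near-perfect, misleading

# Honest evaluation: hold out a test set the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clean_model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", clean_model.score(X_test, y_test))  # lower, but honest

The gap between those two numbers is the kind of over-optimism Kapoor is describing; in published research the leakage is usually subtler, such as preprocessing or feature selection performed before the train-test split.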
In AI Snake Oil, academics are accused of making fundamental errors, while journalists are portrayed as more malevolent, willing to bend ethics to stay on good terms with big tech companies. “Many articles merely rephrase press releases, presenting them as genuine news,” the Princeton researchers write. Such journalists, in their view, put preserving their connections above impartial reporting.
I think the critique of access journalism is fair. In retrospect, perhaps I should have asked tougher questions in some of my interviews with key figures in the AI industry. But the researchers may be oversimplifying here. Being granted access by big AI companies doesn’t stop me from writing skeptical pieces or pursuing investigative work I know will upset them. (Yes, even when those companies strike business deals with WIRED’s parent company, as OpenAI has.)
Inflated media coverage also distorts public perception of what AI can actually do. Narayanan and Kapoor point to a 2023 piece by New York Times writer Kevin Roose, published under the headline “Bing’s A.I. Chat: ‘I Want to Be Alive. 😈’,” as an example of journalists sowing public confusion about supposedly sentient AI. Kapoor argues that sensationalist headline after sensationalist headline about chatbots longing to be alive can deepen that misunderstanding. He points to ELIZA, a chatbot from the 1960s whose users were quick to ascribe human qualities to a simple program, as evidence of our long-standing tendency to anthropomorphize algorithms.
Roose declined to comment when reached and pointed instead to a passage from his related column emphasizing that he knows the AI is not sentient. Still, the piece discussed the chatbot’s “secret desire to be human” and its “thoughts about its creators” in a transcript that left some readers in the comments section worried about the chatbot’s capabilities.
AI Snake Oil also takes issue with misleading imagery in AI journalism, like the clichéd photos of humanoid robots and human brains stuffed with computer circuitry that are meant to represent AI and its neural networks. Narayanan criticizes the “circuit brain” trope in particular and argues that photos of AI chips or graphics processing units would depict the technology more accurately.
Even if the discourse around large language models is overhyped, Kapoor believes the technology will still have a lasting impact on society. With generative tools arriving in smartphone apps and being built into devices, informed conversations about AI’s real capabilities and limitations are crucial, and they remain so even if the AI bubble eventually bursts.
Understanding AI starts with acknowledging that the term flattens a complicated collection of technologies into a neat, marketable concept. AI Snake Oil divides AI into two categories: predictive AI, which uses historical data to forecast future outcomes, and generative AI, which produces new output based on patterns in its training data. Keeping those distinctions in mind is crucial for a deeper grasp of what AI actually is and how it is used in practice.
Anyone who interacts with AI tools would benefit from spending some time learning fundamental concepts like machine learning and neural networks. That grounding helps demystify the technology and offers some protection against the overwhelming hype surrounding it.
In my two years of reporting on AI, I’ve found that even readers who grasp some of generative AI’s limitations, like inaccurate outputs or biased responses, often have only a hazy understanding of its full range of weaknesses. In the forthcoming season of my newsletter, AI Unlocked, for example, I’ve included a lesson exploring whether ChatGPT can be trusted to dispense medical advice in response to reader questions, and whether it handles sensitive health information discreetly.
Knowing where an AI model’s training data comes from, often vast swaths of the web or specific forums like Reddit, gives users reason to scrutinize its answers more critically and to place less blind trust in the technology.
Narayanan believes so strongly in better AI education that he has begun teaching his own children about the technology’s benefits and risks at a young age. Drawing on his research, he argues that lessons about AI should start as early as elementary school.
Generative AI may now be able to write half-decent emails and occasionally help you communicate, but only well-informed humans have the power to correct breakdowns in understanding around this technology and craft a more accurate narrative moving forward.