Harnessing the Power of AI: Looking Beyond Mistrust in Tech Companies

By Steven Levy

It seems evident to me that almost 70 years after the first conference on artificial intelligence—where the nascent field’s leaders suggested the task would be completed within a decade—the field is now poised to make a transformational impact on our lives. We don’t need to reach artificial general intelligence, or AGI, whatever that means, for this to happen. I wrote as much in this column three weeks ago, citing evidence that after the astonishing leap of large language models that gave us ChatGPT, the advancements had not “plateaued” as some critics were charging. I also disagreed with the wave of skeptics claiming that what looked amazing in OpenAI’s GPT-4, Anthropic’s Claude 3, Meta’s Llama 3, and an armada of Microsoft Copilots was merely a linguistic variation of a card trick. The hype, I insisted, is justified.

It turns out that conclusion is anything but evident to lots of people. The pushback was immediate and furious. My rather neutral tweet about the column was viewed over 29 million times, and lots of those eyeballs were shooting death lasers at me. I received hundreds of comments; a good number expressed agreement, but the vast majority were negative, and not politely so.

The attacks came from several camps. First were those disparaging the advance of AI itself, claiming I was a lousy journalist for blindly accepting the fake narrative of the tech companies pushing AI. “This is a shill, nothing more,” said one commenter. Another said, “You’re parroting the lies put forth by those scam artists.” After Google released its AI Overview search feature, which was prone to jaw-dropping errors, my responders seized on its mistakes as proof that there was no there there in generative AI. “Enjoy your pizza with extra glue,” someone advised me.

Others took the opportunity to warn about the dangers of AI, granting its significance only to cast it as a threat. “So was the Atom Bomb,” one user commented. “How did that work out?” Another group criticized large language models (LLMs) for being trained on copyrighted materials, a legitimate concern, but one that does not negate what these models can do.

One response that caught my attention came from someone reacting to my mention of an LLM passing the bar exam. “Passing the bar exam is something DeepMind could do back when it performed well at Jeopardy,” the person stated, erroneously. The notable Jeopardy! contestant was in fact IBM’s Watson, not DeepMind, which was barely getting started at the time. Watson, moreover, was finely tuned to excel at that one game show; since the bar exam does not involve framing questions from given answers, it’s absurd to assume Watson could pass it. When I consulted one of today’s most advanced LLMs, it agreed, confirming that Watson would not have succeeded at the bar exam. Score one for the robots.

Despite the often disrespectful tone of the comments (typical behavior on social platforms), I understand where they are coming from but believe they are misplaced. We are in a lag period, with users just beginning to figure out how to harness what AI companies now offer. Set aside the not-so-smart outputs of AI Overviews and other LLMs (though Google is hardly the only company serving up inaccurate responses). Major tech corporations, driven partly by fierce competition, are intentionally shipping incompletely developed products so they can learn quickly and improve them.

Meanwhile, in less visible ways, AI is already changing education, commerce, and the workplace. One friend recently told me about a big IT firm he works with. The company had a lengthy and long-established protocol for launching major initiatives that involved designing solutions, coding up the product, and engineering the rollout. Moving from concept to execution took months. But he recently saw a demo that applied state-of-the-art AI to a typical software project. “All of those things that took months happened in the space of a few hours,” he says. “That made me agree with your column. Tons of the companies that surround us are now animated corpses.” No wonder people are freaked.

Many criticisms of AI stem from a lack of trust towards the companies driving its development. I had a conversation with Ali Farhadi, CEO of the Allen Institute for AI, a nonprofit dedicated to AI research. Farhadi believes the optimism around AI is warranted but recognizes the public’s skepticism. He pointed out that only a few companies have the resources to advance in AI, which adds to the distrust. “AI is regarded as a mysterious and costly technology that’s being hurriedly advanced without full understanding,” he noted. This rush, he argues, could lead to unpredictable outcomes that may provoke public backlash. Farhadi advocates for transparency in how AI models are trained, especially by major corporations.

Another complicating factor is the commitment of many AI developers to achieving AGI, a goal that stirs mixed feelings. Though it is a central vision of institutions like OpenAI, Farhadi criticizes the concept as vague and believes it hampers the practical adoption of AI technologies. In his experience, merely mentioning “AGI” can significantly slow academic progress.

On a personal note, I’m neutral about the imminent emergence of AGI; its future impact remains uncertain. Discussions with AI experts often reveal that even they are unsure what to expect in the long term.

From what I’ve observed, however, it’s apparent that AI will continue to evolve and integrate more deeply into both professional and personal spheres. While AI advancements will streamline many tasks, they will also disrupt numerous jobs and industries. Although some people might find new opportunities in the wake of AI advancements, others will struggle with unemployment. Given these dynamics, it’s important for those of us involved in AI to acknowledge and empathize with the public’s concerns and frustrations.

Reflecting on the 1956 AI conference at Dartmouth brings to mind Marvin Minsky, whose intellect remained unmatched up to his death in 2016. His capabilities prompted the question, vividly posed in an article, of whether any AI could ever replicate the “meat” of his brain, a notion both fascinating and chilling.

Marvin Minsky, who with John McCarthy helped found the field of artificial intelligence in the 1950s, envisioned machines with human-like cognition. Yet the sheer depth and unpredictability of Marvin’s own intelligence suggested that no creation, not even a million Singularities, could match the cognitive prowess of his mind. His imagination knew no bounds.

Being in Marvin Minsky’s presence was a remarkable experience. He taught at MIT beginning in 1958, made contributions spanning AI, neural networks, and robotics, and invented technologies like the head-mounted display. Beyond his inventions, though, it was his vibrant, profound conversation, sprinkled with both wisdom and whimsical humor, that truly shaped his legacy. An exchange with him could completely shift one’s view of the world, underscoring the value of unconventional thinking.

Mark asks a pertinent question about the future: “What does tech have to worry about in another Trump term?”

Thanks for asking, Mark. I’ll avoid making general remarks about what everyone has to worry about in another Trump term and concentrate on the question at hand. The climate for tech after a Trump victory is more complicated now that a number of super-rich Silicon Valley tech figures are supporting the former president—felony conviction notwithstanding. This week, tech billionaires Chamath Palihapitiya and David Sacks hosted a sold-out Trump fundraiser, which charged $300,000 to join the “host committee” and stay for dinner, and $50,000 to attend just the reception. Elon Musk is reportedly angling to be Trump’s tech adviser in a second term.

Clearly some tech people aren’t worried about Trump. Indeed, his return to the White House might actually be a short-term boon for some of the biggest companies. Trump would almost certainly reverse the Biden administration’s hard line toward regulation and antitrust prosecution. (Bye-bye, net neutrality. Hello, giant acquisitions by tech companies.)

But there would be plenty for tech to worry about, too. Trump has a well-documented history of rewarding his supporters and punishing those who don’t bend the knee. Remember how he tried to steer TikTok to Oracle, run by his booster Larry Ellison? Tech works best as a meritocracy—crony capitalism would be counterproductive for the industry.

The first Trump administration never got around to big infrastructure investments; would a second one roll back Biden’s big grants in chip manufacturing? We might also see a drift in tech policy: The Biden White House has issued a detailed order on artificial intelligence that includes close scrutiny of the technology’s potential downsides and security risks. Would Trump unwind it all? (He hasn’t talked much about AI on the campaign trail.) Ultimately the smartest tech executives in big companies would figure out how to appease Trump. But long term, dwindling public investment in research and the rise of a crony-based system might well weaken the US tech industry.

Oh, and expect Trump to mandate that all government communications should be conducted on TruthDigital. Just kidding. I think.

You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.

It’s not even summer yet, and the highs in India are topping 120 degrees Fahrenheit. So maybe it’s not so bad that it’s 110 degrees in Phoenix.

AI Overviews aren’t always wrong. But here’s one case where a correct answer was suspiciously close to language in a WIRED story.

How one California town sent drones to answer 911 calls—at a possible cost of the privacy of those living in poorer neighborhoods.

Inside the biggest sting in FBI history.

If you are going to write a sci-fi novel, who would be the ideal collaborator? Yep, Keanu Reeves.
