Philosopher Nick Bostrom is surprisingly cheerful for someone who has spent so much time worrying about ways that humanity might destroy itself. In photographs he often looks deadly serious, perhaps appropriately haunted by the existential dangers roaming around his brain. When we talk over Zoom, though, he is relaxed and smiling.
Bostrom has made it his life’s work to ponder far-off technological advancement and existential risks to humanity. With the publication of his previous book, Superintelligence: Paths, Dangers, Strategies, in 2014, Bostrom drew public attention to what was then a fringe idea—that AI would advance to a point where it might turn against and delete humanity.
To many in and outside of AI research the idea seemed fanciful, but influential figures including Elon Musk cited Bostrom’s writing. The book set a strand of apocalyptic worry about AI smoldering that recently flared up following the arrival of ChatGPT. Concern about AI risk is not just mainstream but also a theme within government AI policy circles.
Bostrom’s new book takes a very different approach. Instead of highlighting the dangers, Deep Utopia: Life and Meaning in a Solved World contemplates a future in which humanity has successfully created superintelligent machines and avoided catastrophe. All illnesses have been eliminated, and humans can live indefinitely in conditions of infinite abundance. The book explores what life might mean in such a techno-utopia and asks whether it might feel somewhat empty. Bostrom spoke with WIRED over Zoom, in a conversation that has been lightly edited for length and clarity.
Will Knight: Why did you shift from writing about superintelligent AI as a threat to humanity to considering a future in which it does good?
Nick Bostrom: The numerous things that could go haywire with the development of AI are now getting much more attention. There has been a huge shift over the past decade. All of the leading frontier AI labs now have research teams working to devise scalable alignment techniques. And in the past couple of years, we’ve seen political leaders begin to pay attention to AI as well.
Thinking about what happens if we don’t fall into one of these pitfalls hasn’t kept pace in depth or sophistication. It has generally been quite shallow.
When you wrote Superintelligence, few would have anticipated existential AI risk becoming a topic of mainstream discussion so soon. Is it safe to assume that the issues addressed in your new book will be regarded with similar urgency?
As automation advances, these discussions will inevitably arise and progressively intensify.
Social companion applications are becoming increasingly prominent. That may open up a range of new perspectives and could even prove culturally disruptive, but it also raises a significant concern: what happens if part of the population derives pleasure from mistreating these applications, particularly people who struggle to find joy in everyday life?
The use of AI in political campaigns, marketing, and automated propaganda systems could have far-reaching effects on politics and the information sphere. If we approach it wisely, though, it could significantly boost our capacity to act as constructive democratic citizens, providing individualized advice on policy proposals and their implications. Either way, it will introduce a whole new set of dynamics for society.
Would a future in which AI has solved many problems, like climate change, disease, and the need to work, really be so bad?
Ultimately, I’m optimistic about what the outcome could be if things go well. But that’s on the other side of a bunch of fairly deep reconsiderations of what human life could be and what has value. We could have this superintelligence and it could do everything: Then there are a lot of things that we no longer need to do, and that undermines a lot of what we currently think of as the be-all and end-all of human existence. Maybe there will also be digital minds that are part of this future.
Coexisting with digital minds would itself be quite a big shift. Will we need to think carefully about how we treat these entities?
My view is that sentience, or the ability to suffer, would be a sufficient condition, but not a necessary condition, for an AI system to have moral status.
It’s conceivable that even AI systems lacking consciousness could be attributed various degrees of moral status. Consider an advanced reasoner with a conception of self, stable preferences, perhaps aspirational life goals, and the ability to form reciprocal relationships with humans. It’s plausible that there could be inappropriate ways of interacting with such a system.
Wouldn’t it be safer to prevent AI from developing willfulness and self-awareness?
There are compelling factors currently driving AI advancement. The enormous economic advantages will become progressively apparent. There are also scientific breakthroughs to consider, such as new medicines, renewable energy sources, and so forth. And there will be an increasingly significant impact on national security, with military incentives to push the technology forward.
It would be desirable for those developing the next level of AI, especially truly transformative superintelligent systems, to be able to pause at crucial moments. That could prove beneficial for safety.
I would be much more skeptical of proposals that seemed to create a risk of this turning into a permanent ban on AI. A permanent ban seems much less probable than the alternative, but more probable than it would have seemed two years ago. Ultimately it would be an immense tragedy if this was never developed, if we were just kind of confined to being apes in need and poverty and disease. Like, are we going to do this for a million years?
Turning back to existential AI risk for a moment, are you generally happy with efforts to deal with that?
Well, the conversation is kind of all over the place. There are also a bunch of more immediate issues that deserve attention—discrimination and privacy and intellectual property et cetera.
Companies interested in the longer term consequences of what they’re doing have been investing in AI safety and in trying to engage policymakers. I think that the bar will need to sort of be raised incrementally as we move forward.
In contrast to so-called AI doomers there are some who advocate worrying less and accelerating more. What do you make of that movement?
People sort of divide themselves up into different tribes that can then fight pitched battles. To me it seems clear that it’s just very complex and hard to figure out what actually makes things better or worse in particular dimensions.
I’ve spent three decades thinking quite hard about these things, and I have a few views about specific issues, but the overall message is that I still feel very much in the dark. Maybe these other people have found some shortcuts to bright insights.
Perhaps they’re also reacting to what they see as knee-jerk negativity about technology?
That’s also true. If something goes too far in one direction, it naturally creates this kind of reaction. My hope is that although there are a lot of maybe individually irrational people taking strong and confident stances in opposite directions, somehow it balances out into some global sanity.
I think there’s like a big frustration building up. Maybe as a corrective they have a point, but I think ultimately there needs to be a kind of synthesis.
Since 2005 you have worked at Oxford University’s Future of Humanity Institute, which you founded. Last month it announced it was closing down after friction with the university’s bureaucracy. What happened?
It’s been several years in the making, a kind of struggle with the local bureaucracy. A hiring freeze, a fundraising freeze, just a bunch of impositions, and it became impossible to operate as a dynamic, interdisciplinary research institute. We were always a little bit of a misfit in the philosophy faculty, to be honest.
What’s next for you?
I feel an immense sense of emancipation, having had my fill, for a period of time perhaps, of dealing with faculties. I want to spend some time, I think, just kind of looking around and thinking about things without a very well-defined agenda. The idea of being a free man seems quite appealing.