Jason Matheny is a delight to speak with, provided you’re up for a lengthy conversation about potential technological and biomedical catastrophe.
Now CEO and president of Rand Corporation, Matheny has built a career out of thinking about such gloomy scenarios. An economist by training with a focus on public health, he dived into the worlds of pharmaceutical development and cultivated meat before turning his attention to national security.
As director of the Intelligence Advanced Research Projects Activity, the US intelligence community’s research agency, he pushed for more attention to the dangers of biological weapons and badly designed artificial intelligence. In 2021, Matheny was tapped to be President Biden’s senior adviser on technology and national security issues. And then, in July of last year, he became CEO and president of Rand, the oldest nonprofit think tank in the US, which has shaped government policy on nuclear strategy, the Vietnam War, and the development of the internet.
Matheny talks about threats like AI-enabled bioterrorism in convincing but measured tones, Mr. Doomsday in a casual suit. He’s steering Rand to investigate the daunting risks to US democracy, map out new strategies around climate and energy, and explore paths to “competition without catastrophe” with China. But his longtime concerns about biological weapons and AI remain top of mind.
Onstage with WIRED at the recent Verify cybersecurity conference in Sausalito, California, hosted by the Aspen Institute and Hewlett Foundation, he warned that AI is making it easier to learn how to build biological weapons and other potentially devastating tools. (There’s a reason why he joked that he would pick up the tab at the bar afterward.) The conversation has been edited for length and clarity.
Lauren Goode: To start, we should talk about your role at Rand and what you’re envisioning for the future there. Rand has played a critical role in a lot of US history. It has helped inform, you know, the creation of the internet—
Jason Matheny: We’re still working out the bugs.
Right. We’re going to fix it all tonight. Rand has also influenced nuclear strategy, the Vietnam War, the space race. What do you hope that your tenure at Rand will be defined by?
There are three areas that I really want to help grow. First, we need a framework for thinking about what [technological] competition looks like without a race to the bottom on safety and security. For example, how can we ensure competition with China without catastrophe? A second area of focus is mapping out a climate and energy strategy for the country in a way that meets our technology requirements, works with the infrastructure we have and are building, and gets the economics right.
And then a third area is understanding the risks to democracy right now, not just in the United States but globally. We’re seeing an erosion of norms in how facts and evidence are treated in policy debates. We have a set of very anxious researchers at Rand who are seeing this decay of norms. I think that’s something that’s happening not just in the United States but globally, alongside a resurgence of variants of autocracy.
One type of risk you’ve been very interested in for a long time is “biorisk.” What’s the worst thing that could possibly happen? Take us through that.
I started my career in public health, working on the control of infectious diseases like malaria and tuberculosis. In 2002, a virus was synthesized from scratch for the first time, in a Darpa-funded project. That was a wake-up call for the public health and biosciences communities about how biology was becoming an engineering discipline that could be misused. It hit especially hard for veterans of the smallpox eradication campaign, who had worked for decades to wipe out a disease that could now be recreated synthetically.
We have a lot of vulnerability as a society. The Covid pandemic made that very clear.
From there I moved into biosecurity: improving security around biolabs to reduce the potential for misuse, getting better at identifying biological weapons programs, which unfortunately still exist in parts of the world, and building more resilience into society so it can withstand both engineered and natural pandemics.
Unfortunately, society is still full of vulnerabilities, and Covid showed that quite effectively. Covid was mild by historical standards, with an infection fatality rate below 1 percent, and look at the damage it did. There are natural viruses with fatality rates above 50 percent, and synthetic viruses approaching 100 percent lethality that could be as transmissible as SARS-CoV-2. And although vaccine design and manufacturing technology have advanced enormously, approval times have not, so the time it takes to vaccinate a population remains stubbornly similar to what it was in our parents’ and grandparents’ generations.
When I first started getting interested in biosecurity in 2002, it cost many millions of dollars to construct a poliovirus, a very, very small virus. It would’ve cost close to $1 billion to synthesize a pox virus, a very large virus. Today, the cost is less than $100,000, so it’s a 10,000-fold decrease over that period. Meanwhile, vaccines have actually tripled in cost over that period. The defense-offense asymmetry is moving in the wrong direction.
And what do you see as our greatest adversary in biorisks?
First is nature. The evolution of natural viruses continues. We’re going to have future viral pandemics. Some of them are going to be worse than Covid, some of them are going to be not as bad as Covid, but we’ve got to be resilient to both. Covid cost the US economy alone more than $10 trillion, and yet what we invest in preventing the next pandemic is maybe $2 billion to $3 billion in federal funding.
Another category is intentional biological attacks. Aum Shinrikyo was a doomsday cult in Japan that had a biological weapons program. They believed that they would be fulfilling prophecy by killing everybody on the planet. Fortunately, they were working with 1990s biology, which wasn’t that sophisticated. Unfortunately, they then turned to chemical weapons and launched the Tokyo sarin gas attacks.
The barrier to entry for somebody who wants to carry out a biological attack is eroding.
We have individuals and groups today that have mass-casualty intent and increasingly express interest in biology as a weapon. What’s preventing them from being able to use biology effectively are not controls on the tools or the raw materials, because those are all now available in many laboratories and on eBay—you can buy a DNA synthesizer for much less than $100,000 now. You can get all the materials and consumables that you need from most scientific supply stores.
What an apocalyptic group would lack is the know-how to turn those tools into a biological weapon. There’s a concern that AI makes the know-how more widely available. Some of the research done by AI safety and research company Anthropic has looked at risk assessments to see if these tools could be misused by somebody who didn’t have a strong bio background. Could they basically get graduate-level training from a digital tutor in the form of a large language model? Right now, probably not. But if you map the progress over the last couple of years, the barrier to entry for somebody who wants to carry out a biological attack is eroding.
So … we should remind everyone there’s an open bar tonight.
Unhappy hour. We’ll pick up the tab.
Right now everyone is talking about AI and the possibility of a superintelligence overtaking the human race.
That’s going to take a stiffer drink.
You are an effective altruist, correct?
According to the newspapers, I am.
Is that how you would describe yourself?
I don’t think I’ve ever self-identified as an effective altruist. And my wife, when she read that, she was like, “You are neither effective nor altruistic.” But it is certainly the case that we have effective altruists at Rand who have been very concerned about AI safety. And it is a community of people who have been worried about AI safety longer than many others, in part because a lot of them came from computer science.
So you’re not an effective altruist, you’re saying, but are someone who’s been very cautious about AI for a long time, like some effective altruists are. What was it that made you think years ago that we needed to be cautious about unleashing AI into the world?
I think it was when I realized that so much of what we depend on protecting us from the misuse of biology is knowledge. AI that can make highly specialized knowledge easier to acquire without guardrails is not an unequivocal good. Nuclear knowledge will be created. So will biological weapon knowledge. There will be cyber weapon knowledge. So we have to figure out how to balance the risks and benefits of tools that can create highly specialized knowledge, including knowledge about weapons systems.
It was clear even earlier than 2016 that this was going to happen. James Clapper, the former US director of national intelligence, was worried about this, and so was President Obama. In an interview with WIRED in October 2016, Obama warned that AI could power new cyberattacks and said that he spent “a lot of time worrying” about pandemics. I think he was worried about what happens when you can do software engineering much, much faster and focus it on generating malware at scale. You can basically automate a workforce, and now you’ve got effectively a million people coding novel malware constantly, and they don’t sleep.
At the same time, it will improve our cybersecurity, because we can also have improvements in security that are amplified a million-fold. So one of the big questions is whether cyber offense or cyber defense will have a natural advantage as this stuff scales. What does that look like over the long term? I don’t know the answer to that question.
Do you think it’s at all possible that we will enter any kind of AI winter or a slow-down at any point? Or is this just hockey-stick growth, as the tech people like to say?
It’s hard to imagine it really significantly slowing down right now. Instead it seems there’s a positive feedback loop where the more investment you put in, the more investment you’re able to put in because you’ve scaled up.
So I don’t think we’ll see an AI winter, but I don’t know. Rand has had some fanciful forecasting experiments in the past. There was a project that we did in the 1950s to forecast what the year 2000 would be like, and there were lots of predictions of flying cars and jet packs, but we completely missed the personal computer. So forecasting out too far ends up being probably no better than a coin flip.
How concerned are you about AI being used in military attacks, such as in drones?
There are a lot of reasons why countries are going to want to make autonomous weapons. One of them is what we’re seeing in Ukraine, which has become a kind of petri dish for autonomous weapons. The radio jamming there makes it very tempting to have autonomous weapons that no longer need to phone home.
But I think cyber [warfare] is the realm where autonomy has the highest benefit-cost ratio, both because of its speed and because of its penetration depth in places that can’t communicate.
But how are you thinking about the moral and ethical implications of autonomous drones that have high error rates?
I think the empirical work that’s been done on error rates has been mixed. [Some analyses] found that autonomous weapons were probably having lower miss rates and probably resulting in fewer civilian casualties, in part because [human] combatants sometimes make bad decisions under stress and under the risk of harm. In some cases, there could be fewer civilian deaths as a result of using autonomous weapons.
But this is an area where it is so hard to know what the future of autonomous weapons is going to look like. Many countries have banned them entirely. Other countries are sort of saying, “Well, let’s wait and see what they look like and what their accuracy and precision are before making decisions.”
I think that one of the other questions is whether autonomous weapons are more advantageous to countries that have a strong rule of law or to those that don’t. One reason to be very skeptical of autonomous weapons is that they’re very cheap. If you have very weak human capital but you have lots of money to burn and a supply chain you can access, that characterizes wealthier autocracies more than it does democracies that have a strong investment in human capital. It’s possible that autonomous weapons will be advantageous to autocracies more than democracies.
You’ve indicated that Rand is going to increase its investment in analysis on China, particularly in areas where there are gaps in understanding of its economy, industrial policy, and domestic politics. Why this increased investment?
[The US-China relationship] is one of the most important competitions in the world and also an important area of cooperation. We have to get both right in order for this century to go well.
The US hasn’t faced a strategic competitor with more than two-thirds of our GDP since the War of 1812. So [we need] an accurate assessment of net strengths and net weaknesses across the areas of competition, whether economic, industrial, or military, or in human capital, education, and talent.
And then where are the areas of mutual benefit where the US and China can collaborate? Non-proliferation, climate, certain kinds of investments, and pandemic preparedness. I think getting that right really matters for the two largest economies in the world.
I recently had the opportunity to talk with Jensen Huang, the CEO of Nvidia, and we talked about US export controls. Just as one example, Nvidia is restricted from shipping its most powerful GPUs to China because of the measures put in place in 2022. How effective is that strategy in the long term?
One piece of math that’s hard to figure out: Even if the US succeeded in preventing the shipment of advanced chips like [Nvidia] H100s to China, can China get those chips through other means? A second question is, can China produce its own chips that, while not as advanced, might still perform sufficiently for the kinds of capabilities that we might worry about?
If you’re a national security decisionmaker [in China] and you’re told, “Hey, we really need this data center to create the arsenal of offensive tools we need. It’s not going to be as cost-effective as using H100s; we’ll have to pay four times more because of a bigger energy bill, and it’ll be slower,” you’re probably going to pay the bill. So the question then becomes, at what point is a decisionmaker no longer willing to pay the bill? Is it 10X the cost? Is it 20X? We don’t know the answer to that question.
But certain kinds of operations are no longer possible because of those export controls. That gap between what you can get a Huawei chip to do and what you can get an Nvidia chip to do keeps on growing because [the chip technology is] sort of stuck in China, and the rest of the world will keep on getting more advanced. And that does prevent a certain kind of military efficiency in computing that could be useful for a variety of military operations. And I think New York Times reporter Paul Mozur was the first to break the news that Nvidia chips were powering the Xinjiang data center that’s being used to monitor the Uighur prison camps in real time.
That raises a really hard question: Should those chips be going into a data center that is being used for human rights abuses? Regardless of one’s view of the policy, just doing the math is really important, and that’s mostly what we focus on at Rand.