US National Security Experts Raise Alarm: AI Companies Must Heighten Protection of Sensitive Data

By Paresh Dave

Last year, the White House struck a landmark safety deal with AI developers that saw companies including Google and OpenAI promise to consider what could go wrong when they create software like that behind ChatGPT. Now a former domestic policy adviser to President Biden who helped forge that deal says AI developers need to step up on another front: protecting their secret formulas from China.

“Because they are behind, they are going to want to take advantage of what we have,” said Susan Rice, referring to China. She left the White House last year and spoke on Wednesday during a panel about AI and geopolitics at an event hosted by Stanford University’s Institute for Human-Centered AI. “Whether it’s through purchasing and modifying our best open source models, or stealing our best secrets. We really do need to look at this whole spectrum of how do we stay ahead, and I worry that on the security side, we are lagging.”

The concerns raised by Rice, who was formerly President Obama’s national security adviser, are not hypothetical. In March the US Justice Department announced charges against a former Google software engineer for allegedly stealing trade secrets related to the company’s TPU AI chips and planning to use them in China.

Legal experts at the time warned it could be just one of many examples of China trying to unfairly compete in what’s been termed an AI arms race. Government officials and security researchers fear advanced AI systems could be abused to generate deepfakes for convincing disinformation campaigns, or even recipes for potent bioweapons.

There isn’t universal agreement among AI developers and researchers that their code and other components need protecting. Some don’t view today’s models as sophisticated enough to need locking down, and companies like Meta that are developing open source AI models release much of what government officials, such as Rice, would suggest holding tight. Rice acknowledged that stricter security measures could end up setting US companies back by cutting the pool of people working to improve their AI systems.

Interest in—and concern about—securing AI models appears to be picking up. Just last week, the US think tank RAND published a report identifying 38 ways secrets could leak out from AI projects, including bribes, break-ins, and exploitation of technical backdoors.

RAND’s recommendations included that companies should encourage staff to report suspicious behavior by colleagues and allow only a few employees access to the most sensitive material. Its focus was on securing so-called model weights, the values inside an artificial neural network that get tuned during training to imbue it with useful functionality, such as ChatGPT’s ability to respond to questions.

Under a sweeping executive order on AI signed by President Biden last October, the US National Telecommunications and Information Administration is expected to release a similar report this year analyzing the benefits and downsides to keeping weights under wraps. The order already requires companies that are developing advanced AI models to report to the US Commerce Department on the “physical and cybersecurity measures taken to protect those model weights.” And the US is considering export controls to restrict AI sales to China, Reuters reported last month.

In public comments to the NTIA ahead of its report, Google said it anticipates an increase in attempts to disrupt, degrade, deceive, and steal its models. The company said its secrets are guarded by a “security, safety, and reliability organization” of engineers and researchers with world-class expertise, and that it is building “a framework” that would involve an expert committee to govern access to models and their weights.

OpenAI, known for models such as GPT-4 and associated applications and services like ChatGPT, said in its own comments to the NTIA that both open and closed models have a role, depending on the circumstances. The company, which recently established a security committee on its board, described on its blog the security measures it takes when training models, saying it hoped the transparency would encourage other labs to adopt similar protections, though it did not specify the exact threats it faces.

Speaking alongside Rice at Stanford, RAND CEO Jason Matheny underscored the security stakes. Export controls meant to reduce China’s access to advanced computing chips have hampered Chinese developers’ ability to build their own models, he said, making it more likely they will try to steal AI software outright. A cyberattack could swipe model weights that cost an American firm hundreds of millions of dollars to develop, he warned, arguing that investment in security to prevent such thefts falls far short of what is needed.

China’s embassy in Washington, DC, did not respond to a request for comment from WIRED about the theft accusations. It has previously dismissed such allegations as unfounded accusations by Western officials.

Google has said it alerted authorities to the episode that led to the US criminal case over alleged theft of AI chip secrets for China’s benefit. But although the company says it has robust safeguards for its proprietary information, court filings suggest it took considerable time to detect the activities of Linwei Ding, a Chinese citizen who has pleaded not guilty to the charges.

Ding, who also goes by Leon, was hired by Google in 2019 to develop software for its advanced computing data centers. Over roughly a year beginning in 2022, he allegedly transferred more than 500 files containing confidential information to his personal Google account. According to court documents, he pasted information into Apple’s Notes app on his work laptop, converted the notes to PDFs, and uploaded them to other locations, evading Google’s systems designed to catch such unauthorized transfers.

According to US authorities, during the period of the alleged thefts the engineer was communicating with the CEO of an AI startup in China and had taken steps to found his own AI business there. If convicted, he faces a maximum sentence of 10 years in prison.
