The US government issued new rules Thursday requiring more caution and transparency from federal agencies that use artificial intelligence, saying the safeguards are needed to protect the public as AI rapidly advances. The policy also includes provisions to encourage AI innovation in government agencies where the technology can be harnessed for public good.
With its new framework for government AI, the US aims to establish itself as an international leader. Vice President Kamala Harris said during a news briefing ahead of the announcement that the administration intends for the policies to “serve as a model for global action.” She said the US “will persistently call on all nations to follow our lead and prioritize the public interest when it comes to government use of AI.”
The updated policy from the White House Office of Management and Budget will guide AI use across the federal government. It requires more transparency about how the government uses AI and also calls for more development of the technology within federal agencies. The policy reflects the administration's attempt to balance mitigating risks from deeper government use of AI (the full extent of which remains unknown) with using AI tools to tackle pressing problems like climate change and disease.
The announcement is one of several in recent months from the Biden administration aimed at both advancing and regulating AI. The most significant was an executive order on AI signed last October. The sweeping order seeks to encourage the government's own development of AI technology while also requiring those with large AI models to disclose information about their operations for national security reasons.
In November, the US joined the UK, China, and members of the EU in signing a declaration that acknowledged the dangers of rapid AI advances and called for international collaboration. That same week, Harris announced a nonbinding declaration on the military use of AI, signed by 31 nations. It sets out basic safeguards and calls for deactivating systems that engage in “unintended behavior.”
The new policy for US government use of AI directs agencies to take steps to prevent the unintended consequences of AI deployments. Agencies must verify that the AI tools they use do not put Americans at risk. If the Department of Veterans Affairs wants to use AI in its hospitals, for example, it must first verify that the technology does not produce racially biased diagnoses. Research has found that AI systems and other algorithms used to inform diagnoses or decide which patients receive care can entrench historical patterns of discrimination.
If an agency cannot guarantee such safeguards, it must either stop using the AI system or justify its continued use. US agencies face a deadline of December 1 to comply with the new requirements.
The policy also asks for more transparency about government AI systems, requiring agencies to release government-owned AI models, data, and code, as long as doing so does not pose a threat to the public or the government. Agencies must also publicly report each year how they are using AI, the potential risks the systems pose, and how those risks are being mitigated.
The new rules also require federal agencies to beef up their AI expertise, mandating that each designate a chief AI officer to oversee all AI used within the agency. It's a role that focuses on promoting AI innovation as well as watching for its dangers.
Officials say the changes will also remove some barriers to AI use in federal agencies, a move that may facilitate more responsible experimentation with AI. The technology has the potential to help agencies review damage after natural disasters, forecast extreme weather, map the spread of disease, and control air traffic.
Countries worldwide are taking steps to regulate AI. The EU passed its AI Act in December, a law that regulates the development and application of AI technologies, and officially adopted it earlier this month. China is also developing comprehensive AI regulation.