USA (Washington Insider Magazine)—The Biden administration will start enforcing a new requirement for the developers of major artificial intelligence systems to reveal their safety test results to the government.
The White House Artificial Intelligence Council is meeting on Monday, three months after President Joe Biden signed an executive order to reduce AI’s risks.
Given the investments and uncertainties caused by the launch of new AI tools such as ChatGPT that can generate text, images and sounds, AI has emerged as a leading economic and national security consideration for the federal government. The Biden administration is also looking at congressional legislation and working with other countries and the European Union on technology management rules.
White House Deputy Chief of Staff Bruce Reed, who will convene the council meeting on Monday, stated that the federal government had made significant progress on AI in the prior 90 days, saying Biden’s directive to the administration was to “move fast and fix things.”
The White House said nine government agencies – including Defense, Transportation, Treasury, and Health and Human Services – presented risk assessments to the Department of Homeland Security, as required under Biden’s order.
At the same time, steps in Congress to pass legislation addressing AI have been delayed despite numerous high-level forums and legislative proposals.
On Friday, the Biden administration proposed requiring U.S. cloud businesses to determine whether foreign entities are accessing U.S. data centers to train AI models, through “know your customer” rules.
“We can’t have non-state actors or China or folks who we don’t want accessing our cloud to train their models,” U.S. Commerce Secretary Gina Raimondo told Reuters Friday. “We use export controls on chips. Those chips are in American cloud data centres, so we must consider closing down that avenue for potential malicious activity.”
In an interview, Ben Buchanan, the White House special adviser on AI, stated that the government wants “to know AI systems are safe before they’re released to the public — the president has been clear that companies need to meet that bar.”
The software companies have committed to a set of categories for the safety tests, but they have not yet agreed to a common standard for those tests. That is expected to change once the government’s National Institute of Standards and Technology develops a uniform framework for assessing safety, as called for in Biden’s October order.
The government also has stepped up the hiring of AI experts and data scientists at federal agencies. “We know that AI has transformative effects and potential,” Buchanan said. “We’re not trying to upend the apple cart there, but to ensure the regulators are prepared to manage this technology.”
Last month, Raimondo said that Commerce would not allow Nvidia “to ship the most sophisticated, highest-processing-power AI chips, which would enable China to train their frontier models.”
Biden’s executive order invokes the Defense Production Act to require that developers of AI systems posing risks to U.S. national security, the economy, public health, or safety share the results of safety tests with the U.S. government before those systems are publicly released.
The Commerce Department plans to send those survey requests to companies soon. Raimondo told Reuters companies would have 30 days to respond. “Any company that doesn’t want to comply is a red flag for me,” she said. Amazon.com’s AWS, Alphabet’s Google Cloud and Microsoft’s (MSFT.O) Azure unit are the top cloud providers.
