The Biden administration has introduced new rules for AI developers: they must disclose the results of their safety tests to the Commerce Department. The requirement to share this comprehensive information is intended to make the development and use of AI technologies more transparent and safer.
In addition to regulations for AI developers, the Department of Commerce is developing regulations for U.S. cloud companies that provide services to foreign AI developers.
The National Institute of Standards and Technology is supporting the measure with a standardized framework for evaluating AI safety.
The measure is part of an executive order signed by President Joe Biden in October 2023 and falls under the Defense Production Act.
EU and US in search of AI rules
The U.S. government has identified AI as an economic and national security priority. It is exploring domestic legislation and working with international partners on AI regulation.
Nine federal agencies, including the Departments of Defense and Health and Human Services, have completed risk assessments for the use of AI in critical infrastructure, such as the electric grid.
To address the growing challenges of evaluating and developing AI projects, the government is increasingly hiring AI experts and data scientists across federal agencies.
Ben Buchanan, White House Special Advisor for AI, emphasized the transformative potential of AI. The administration is committed to ensuring that regulators are adequately prepared for the technology, Buchanan said, and AI systems must be safe before they are released to the public.
“The president has been very clear that companies have to meet that bar,” Buchanan said.
The EU’s AI legislation is expected to be formally adopted in the coming days.