US (Washington Insider Magazine) — The United States is intensifying efforts to regulate artificial intelligence, with Vice President Kamala Harris asserting that leaders bear a “moral, ethical and societal duty” to mitigate AI’s risks. As she spearheads the Biden administration’s agenda for a global AI safety roadmap, Harris emphasizes the need for international cooperation to protect society.
In her remarks, Harris stressed the necessity for a shared understanding among nations to manage AI’s impacts. “We must be guided by common rules and norms,” she said, committing the U.S. to collaboration with allies to both apply existing regulations and forge new standards for AI.
As AI applications expand, analysts agree that human oversight is essential to prevent misuse. This technology is transforming sectors from military intelligence to healthcare and even artistic creation. President Joe Biden reinforced the administration’s commitment by signing an executive order setting new AI standards. This includes a requirement for major AI developers to report safety test results to the government, supporting transparency.
The administration also launched an AI Safety Institute and released preliminary policy guidelines for federal AI use and military AI applications. Days earlier, the Defense Intelligence Agency announced its AI-powered military intelligence database would soon be operational, demonstrating the rapid integration of AI across defense sectors.
Amid these developments, industry leaders like Elon Musk continue to voice concerns. Musk, who regards AI as a major threat, advocates for a regulatory “third-party referee” to prevent harm. Earlier this year, Musk joined more than 33,000 signatories in calling for a pause in AI model training to assess safety risks.
Critics highlight AI’s potential for bias, as Jessica Brandt from the Brookings Institution explains: “AI is only as good as the data it’s trained on.” This concern is echoed by Amba Kak of AI Now, who points out that current AI systems can be manipulated and often spread misinformation.
While the U.S. works on specific initiatives, experts like Bill Whyman of the Center for Strategic and International Studies predict a “bottom-up” regulatory approach, in contrast with Europe, which is likely to enact broad AI legislation. The U.S. focus, Whyman suggests, will remain on targeted measures such as funding for AI research and safety, rather than a sweeping national law.
