
A U.S.-based advocacy group has urged the Trump administration to introduce mandatory security screening for advanced artificial intelligence models before they are publicly released, citing growing national security concerns. In a letter sent to administration officials on Monday, Americans for Responsible Innovation recommended that powerful frontier AI systems undergo government-led evaluations to assess risks related to cyberattacks and weapons development capabilities.
The call comes amid increasing attention on Anthropic’s latest AI model, Mythos, which officials believe could make sophisticated cyberattacks faster and easier to execute. The White House is currently examining the broader implications of rapidly advancing AI technologies and how malicious actors could exploit them. The advocacy group suggested that AI developers failing to meet security standards be denied access to lucrative U.S. government contracts.
According to the proposal, the U.S. Center for AI Standards and Innovation (CAISI) should lead the development of mandatory review mechanisms, while Congress should establish a permanent enforcement office within the Department of Commerce. The suggested regulations would apply to companies spending at least $100 million annually on computing power to train frontier AI models or generating more than $500 million yearly from AI-related products and services. The recommendations mirror similar AI safety reporting requirements introduced in California last year.