
The U.S. military’s partnership with artificial intelligence firm Anthropic is on the brink of collapse as both sides clash over how the company’s powerful AI model, Claude, could be used. The Pentagon has given Anthropic until Friday at 5:01 p.m. to permit use of its AI for “all lawful purposes” or risk losing a major defense contract. Anthropic has insisted on strict safeguards preventing the model’s use for mass surveillance of Americans or fully autonomous military operations, arguing that current AI systems are not reliable enough for such roles.
Pentagon Chief Technology Officer Emil Michael said the Defense Department had offered significant concessions, including written acknowledgment of existing federal laws limiting domestic surveillance and policies governing autonomous weapons. However, Anthropic rejected the proposal, claiming the revised language made “virtually no progress” and contained loopholes that could allow safeguards to be ignored. The dispute escalated publicly, with Michael accusing Anthropic CEO Dario Amodei of dishonesty, while Amodei maintained the company could not “in good conscience” agree to the military’s demands.
If no agreement is reached, the Pentagon plans to terminate the partnership, label Anthropic a supply-chain risk, and potentially seek alternative AI providers. Officials are also reportedly considering invoking the Defense Production Act to compel compliance. The standoff underscores a broader ideological divide between national security officials seeking flexible AI deployment and technology leaders warning about risks such as autonomous weapons and large-scale surveillance, highlighting the growing tension over how advanced AI should be governed in defense applications.