Microsoft’s decision to file a legal brief supporting Anthropic in its battle against the Pentagon has put a sharp spotlight on the unresolved questions of AI governance in an era of increasingly autonomous military systems. The brief, submitted to a federal court in San Francisco, backed Anthropic’s request for a temporary restraining order against the Defense Department’s supply-chain risk designation. It was accompanied by a separate brief filed jointly by Amazon, Google, Apple, and OpenAI, reflecting the technology industry’s broad concern about the precedent the case could set.
The conflict between Anthropic and the Pentagon began when the company refused to enter into a $200 million contract that would have deployed its AI on classified military systems without safeguards against use for mass surveillance or autonomous lethal weapons. After negotiations collapsed, Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk, and the Pentagon’s technology chief subsequently declared that renegotiation was off the table. Anthropic responded by filing simultaneous lawsuits in California and Washington, D.C., challenging the designation.
Microsoft’s brief is informed by its own use of Anthropic’s AI in federal military systems and its participation in the Pentagon’s $9 billion cloud computing contract. The company also holds additional federal agreements worth several billion dollars more spanning defense, intelligence, and civilian agencies. Microsoft publicly called for a collaborative approach in which government and industry work together to ensure AI serves national security without being misused for surveillance or unauthorized military action.
Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of ideological retaliation against a US company for its publicly stated views on AI safety, applied in a manner that violated the company’s First Amendment rights. Anthropic also disclosed that it does not currently have confidence in Claude’s safety and reliability in lethal autonomous warfare scenarios, which it said was the genuine basis for the restrictions it had sought in the contract.
House Democrats have separately written to the Pentagon demanding information about whether AI was used in a strike in Iran that reportedly killed more than 175 civilians at an elementary school. Their letters ask about AI targeting systems and the degree of human review exercised before and during the strike. These parallel legal and legislative battles are together forcing a long-overdue national conversation about accountability and oversight in AI-assisted warfare.
Microsoft’s Legal Brief for Anthropic Puts a Spotlight on AI Governance in the Age of Autonomous Weapons
