Microsoft has put its enormous corporate weight behind Anthropic’s right to say no to the Pentagon on questions of AI safety, filing a brief in federal court in San Francisco that calls for a temporary restraining order against the supply-chain risk designation. The brief argued that allowing the designation to stand would seriously harm the technology supply chains underpinning both commercial AI and national defense. Amazon, Google, Apple, and OpenAI have also backed Anthropic’s right to set AI safety boundaries through a separate joint filing.
Anthropic’s right to say no was tested when the company refused a $200 million contract that lacked protections against the use of its Claude AI for mass surveillance or autonomous lethal weapons. After negotiations collapsed, Defense Secretary Pete Hegseth labeled the company a supply-chain risk, and the Pentagon’s technology chief later publicly ruled out any renegotiation. Anthropic responded with simultaneous lawsuits in California and Washington, DC.
Microsoft’s standing in the dispute rests on its direct integration of Anthropic’s technology into federal military systems and its role in the Pentagon’s $9 billion cloud computing contract. Additional federal agreements spanning defense, intelligence, and civilian agencies deepen that stake. Microsoft has publicly argued that responsible AI governance and national security are complementary rather than competing goals.
Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of ideological retaliation for the company’s publicly stated AI safety positions. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, which it said was the genuine basis for its contract demands. Anthropic noted that this designation had never before been applied to a US company.
Congressional Democrats have separately demanded answers from the Pentagon about whether AI was used in a strike in Iran that reportedly killed over 175 civilians at a school. Their formal letters ask about AI targeting tools and human oversight processes. Together, Microsoft’s backing, the broader industry coalition, and congressional pressure are building a formidable defense of Anthropic’s right to say no to the Pentagon on AI safety matters.