
Microsoft Backs Anthropic With Court Filing as Pentagon’s AI Ethics War Threatens to Escalate Further

As the Pentagon’s AI ethics war with Anthropic threatens to escalate further, Microsoft has backed the AI company with a filing in San Francisco federal court calling for a temporary restraining order against the supply-chain risk designation. The brief argued that allowing the designation to stand would cause serious harm to the technology networks supporting national defense. Amazon, Google, Apple, and OpenAI have also filed in support of Anthropic, making this a near-unanimous industry response to the confrontation.
The confrontation has been building since Anthropic, during a $200 million contract negotiation, refused to allow its Claude AI to be used for mass surveillance of US citizens or to power autonomous lethal weapons. Defense Secretary Pete Hegseth labeled the company a supply-chain risk, and the Pentagon’s technology chief publicly ruled out renegotiation, signaling the government’s intent to hold firm. Anthropic responded by filing two simultaneous lawsuits, in California and Washington DC, challenging the designation.
Microsoft’s backing is grounded in its direct integration of Anthropic’s technology into federal military systems and its participation in the Pentagon’s $9 billion cloud computing contract. Additional agreements with government agencies, worth several billion dollars more, deepen the company’s stake further. Microsoft publicly argued that responsible AI governance and national security both require collaboration between government and the technology sector.
Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of retaliation for the company’s publicly stated AI safety positions. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, which it said was the genuine basis for its contract demands. Anthropic noted that no US company had ever previously received this designation.
Congressional Democrats have separately pressed the Pentagon for answers about whether AI was involved in a strike in Iran that reportedly killed over 175 civilians at a school, and their formal inquiries are adding legislative pressure to the dispute. Together, Microsoft’s backing, the industry coalition, and congressional scrutiny form a formidable counterweight that may ultimately keep the Pentagon’s AI ethics war from escalating further.
