Judge Blocks Pentagon Ban on Anthropic's AI Model
AI-generated from multiple sources. Verify before acting on this reporting.
WASHINGTON — A federal judge issued a temporary restraining order on Wednesday blocking the Pentagon from designating Anthropic as a supply chain risk and from prohibiting federal agencies from using its Claude artificial intelligence model.
The ruling, handed down in the U.S. District Court for the District of Columbia, halts a Department of Defense directive that would have effectively barred the government from contracting with the company or using its technology. The order comes after Anthropic filed an emergency motion arguing that the ban was retaliatory rather than grounded in legitimate national security concerns.
The Pentagon had moved to classify Anthropic as a foreign adversary risk under existing supply chain security protocols, a designation that would trigger an immediate moratorium on federal procurement. Defense officials stated the move was necessary to mitigate potential vulnerabilities in AI systems used for critical infrastructure and defense operations. They cited the need to ensure the integrity of algorithms deployed in sensitive environments.
Anthropic, however, contends the government's action is a direct response to the company's public opposition to the use of AI in domestic surveillance programs and in the development of autonomous weapons systems. The company argued in court filings that the ban's timing coincided with its recent testimony before Congress and its public statements criticizing the militarization of generative AI.
"The administration's actions appear to be a punitive measure designed to silence dissent rather than a genuine effort to protect national security," Anthropic's legal team stated in the motion. The company emphasized that its code and infrastructure are subject to rigorous third-party audits and that no specific security flaw was cited by the Pentagon.
The judge's temporary order prevents the Pentagon from enforcing the ban while the case proceeds to a full hearing. The court noted that the government failed to provide immediate evidence of an imminent threat posed by the Claude model, while Anthropic demonstrated that the ban would cause irreparable harm to its business operations and its ability to serve government clients.
Federal agencies, including the Department of Homeland Security and members of the intelligence community, had already begun transitioning away from Anthropic's services in anticipation of the directive. The restraining order leaves those agencies uncertain about their ongoing contracts and data processing workflows.
The Pentagon has not yet commented on the ruling, though a spokesperson indicated the department is reviewing the court's decision. Legal experts suggest the case could set a significant precedent for how the government regulates AI vendors and whether national security designations can be used to target companies based on their political or ethical stances.
The next court date has not been scheduled, and both parties are expected to submit further briefs outlining their positions. The outcome of the case could determine whether the federal government may restrict access to specific AI technologies based on non-technical criteria.