A judge blocks the Trump administration from limiting Anthropic’s contracts with the federal government

A federal judge in California blocked the Trump administration from designating Anthropic a supply chain risk to national security and cutting off all work with the AI company.
Anthropic sued the Defense Department and other federal agencies this month after the Pentagon labeled it a “national security risk.” President Donald Trump has said he will also ban the use of Anthropic products across some government agencies.
“Defendants’ designation of Anthropic as a ‘supply chain risk’ is both unconstitutional and unconscionable,” wrote Judge Rita Lin, a US district judge in California, in her order Thursday night. “The Department of War has identified no legal basis for penalizing Anthropic’s insistence on potential use limits.”
Lin temporarily stayed her order for one week to give the administration time to appeal.
The Defense Department and the White House did not immediately respond to a request for comment Thursday evening.
The supply chain risk designation requires the Pentagon and its contractors to stop using Anthropic’s commercial AI services across Defense Department operations.
In a post on X in late February, Defense Secretary Pete Hegseth said he was ordering that the company be designated a “supply chain risk.” Trump also said he is ordering all government agencies, including the Treasury Department and the State Department, to stop using Anthropic’s AI technology.
“The record shows that the Challenged Actions were taken without reasonable notice or a pre-deprivation process (and, in the case of the Presidential Directive and the Hegseth Directive, without any post-deprivation process either),” Lin wrote in her order.
The judge’s order on Thursday also bars other agencies from terminating their work with Anthropic. In it, Lin wrote that the order restores the status quo.
“This order does not require the Department of War to use Anthropic products or services and does not prevent the Department of War from turning to other artificial intelligence providers, as long as those actions are consistent with applicable regulations, statutes, and constitutional provisions,” the order said.
Anthropic filed two lawsuits against the Department of Defense – one in the US District Court for the Northern District of California and the other in the US District Court for the District of Columbia – alleging that the federal government’s actions go beyond a normal contract dispute and instead amount to an “unlawful campaign of retaliation” following months of heated negotiations over how the military should be able to use Anthropic’s systems.
Anthropic had sought firm assurances that the Pentagon would not use its AI systems for autonomous weapons or mass domestic surveillance.
Anthropic is the maker of the Claude chatbot and the only AI company whose services have been cleared for use on the Defense Department’s classified networks.
Hours after Hegseth’s announcement last month, OpenAI CEO Sam Altman said his company had reached an agreement with the Pentagon to use its services in classified areas.
“Although Anthropic was aware that the government objected to its contract terms, it had no notice or opportunity to object before Defendants publicly barred it from all federal government work and from working with any private company that does business with the U.S. military,” Lin wrote. “It also had no notice or opportunity to challenge the factual basis for its designation as a supply chain risk, which it learned of only through this litigation.”
