Anthropic says the Pentagon has declared it a national security risk

Anthropic said Thursday that the Department of Defense has designated it a national security threat, a surprise move that bars it from doing business with the U.S. military and could send shock waves through the U.S. AI industry.
The designation, which the company said it received Wednesday and which specifically labels Anthropic a “national security procurement risk,” requires the Pentagon and its contractors to stop using Anthropic’s AI services for all defense business.
Defense Secretary Pete Hegseth telegraphed the move Friday evening in a post on X.
It comes after months of heated discussion about how the military should be able to use Anthropic’s Claude AI systems. Although generative AI models like Claude are a new technology, they are being quickly adopted by the Trump administration, including for military use.
For the past few months, the Pentagon has been negotiating new contract terms with Anthropic, along with other leading American AI companies, to allow for expanded military use of AI. While the Pentagon wants to use powerful AI systems for “any legitimate use,” Anthropic CEO Dario Amodei wanted firm assurances that the Pentagon would not use its AI technology for lethal autonomous weapons or mass surveillance at home.
Amodei confirmed the supply chain risk label in a statement Thursday night and said the company disagrees with it, writing that “we do not believe this action is legally binding, and we see no other option than to challenge it in court.”
“Anthropic has far more in common with the Department of Defense than we have differences,” he wrote in a statement. “We are both committed to advancing America’s national security and protecting the American people, and we agree on the urgency of implementing AI across government. All of our future decisions will come from that shared foundation.”
Other AI companies step in
Until last week, Anthropic was the only AI company whose services were open for use on Defense Department classified networks. Hours after Hegseth announced that he would seek to name Anthropic as a supply chain risk last week, OpenAI CEO Sam Altman announced that his company had reached a new agreement with the Pentagon to use OpenAI’s services in classified settings, which would allow OpenAI to take over a large portion of Anthropic’s current business with the Pentagon.
Elon Musk’s xAI and its Grok AI systems also struck a deal with the Pentagon last week allowing their use on classified networks.
In a statement on his website Thursday evening, Amodei emphasized that the ban on Anthropic’s business with the military does not apply to contracts with military suppliers for non-defense purposes. Anthropic has extensive business agreements with many of America’s leading technology companies, including Amazon and Microsoft, many of which have major contracts with the Pentagon.
A senior Defense Department official confirmed that the supply chain risk designation took effect immediately. “From the beginning,” the official told NBC News on Thursday, “this has been about one important goal: the military being able to use the technology for all legitimate purposes.”
Hegseth wrote in a post announcing the move last Friday that Anthropic “will continue to provide the Department of War with its services for a period not to exceed six months to allow for a seamless transition to a better and more patriotic service.”
“Anthropic presented a master class in arrogance and betrayal and a textbook case of how not to do business with the United States Government or the Pentagon,” he said in the post. “Our position has never wavered and never will: the Department of War must have full, unrestricted access to Anthropic models for all LEGAL purposes of defending the Republic.”
Industry concerns about the label
In a statement last week, issued during the troubled contract negotiations and before Hegseth announced the move, Amodei noted that the procurement risk label, usually reserved for foreign adversaries and related businesses, “had never applied to an American company.”
Many legal observers said the designation was legally untenable and meant as a warning to other companies to toe the Pentagon’s line.
The mere threat of such a designation has already rattled Washington and the tech industry. Fearing a dangerous precedent, defense experts, Anthropic’s rival OpenAI and members of Congress have tried throughout this week to cool the rift between Anthropic and the Pentagon.
A powerful technology advocacy group whose members include Nvidia and Apple sent a letter to Hegseth on Wednesday urging him not to finalize the procurement risk label.
Many industry investors fear that by targeting one of America’s largest and most successful AI companies, the Department of Defense is setting a dangerous precedent that will scare away investment and chill America’s AI industry.
Last Friday, just over an hour before the 5 p.m. ET deadline Hegseth had set earlier in the week to reach a deal, President Donald Trump said he would also ban Anthropic from other government agencies.
“Leftwing nut jobs at Anthropic have made a HUGE MISTAKE in trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” Trump wrote.
The Pentagon already uses Anthropic’s Claude systems as part of a deal with data analytics firm Palantir. According to recent reports from the Washington Post and the Wall Street Journal, Anthropic’s AI systems have been used to help the military conduct intelligence assessments and identify targets in the ongoing war in Iran. NBC News did not confirm those reports.
Anthropic made its first deal with Palantir in 2024, allowing the Department of Defense to use its services on classified networks, and was awarded another $200 million contract in July to develop “prototype AI capabilities that enhance US national security.”
In earlier rounds of negotiations, Anthropic agreed to allow the Pentagon to use its AI systems for cyber and missile defense purposes.
‘No profit’
Some experts have noted a clear disconnect between calling one of America’s biggest AI companies a national security risk while avoiding applying the same label to DeepSeek, a leading Chinese AI company accused of wrongdoing. DeepSeek did not immediately respond to multiple news organizations’ requests for comment on the matter.
“We treat an American AI company much worse than we treat an AI company controlled by the Chinese Communist Party,” said Michael Sobolik, an expert on AI and China issues and a senior fellow at the Hudson Institute. “We cannot punish the most advanced, most successful American companies for asking legitimate questions about military use and privacy.
“The US government is in danger of cutting the legs out from under our leading AI companies in the early years of this AI race,” Sobolik continued. “If we do that when America’s frontier models are bigger and better than China’s, we’re cutting off our nose to spite our face.”
Tim Fist, director of emerging technologies at the Washington-based Institute for Progress think tank, said the new designation would undercut America’s AI ambitions.
“Designating a supply chain threat, a tool typically used against foreign adversaries, both hurts one of America’s top AI companies and makes other companies more reluctant to cooperate with the federal government,” Fist said. “The result hurts the AI industry, and thus does not benefit US national security.”