
Tensions between the Pentagon and Anthropic are reaching a boiling point

In the past week, tensions between the Pentagon and artificial intelligence giant Anthropic have reached a boiling point.

Anthropic, the creator of the Claude chatbot and a frontier AI company with a defense contract worth up to $200 million, has built its brand on promoting AI safety, drawing red lines the company says it will not cross.

Now, the Pentagon appears to be pushing those boundaries.

Rumors of a possible rift between Anthropic and the Department of Defense, now renamed the Department of War, intensified after The Wall Street Journal and Axios reported that Anthropic products were used in the operation to capture Venezuelan President Nicolás Maduro.

It is not clear how Anthropic’s Claude was used.

Anthropic has not found any violations of its usage policies in connection with the Maduro operation, according to two people familiar with the matter, who asked not to be identified discussing sensitive topics. They said the company has significant visibility into how its AI tool Claude is used, such as for data analysis tasks.

Anthropic was the first AI company permitted to offer its services on classified networks, through a partnership with Palantir announced in 2024. Palantir said in the announcement that Claude could be used to "support government tasks such as processing large amounts of complex data quickly" and "to help US officials make more informed decisions in time-critical situations."

Palantir is one of the military's preferred contractors for data and software, fusing data from satellite sensors, for example, to support more precise strikes. It has also drawn scrutiny for its work with the Trump administration and law enforcement agencies.

While Anthropic has maintained that it does not and will not allow its AI systems to be used for lethal autonomous weapons or domestic surveillance, the reported use of its technology in connection with the Venezuela operation, through its contract with Palantir, reportedly raised concerns for an Anthropic employee.

Anthropic CEO Dario Amodei, right, and Chief Product Officer Mike Krieger talk after unveiling Claude 4 during the Code with Claude conference May 22, 2025, in San Francisco. Don Feria / AP Content Services for Anthropic

Semafor reported on Tuesday that, during a routine meeting between Anthropic and Palantir, a Palantir executive grew concerned that an Anthropic employee seemed to disagree with how its models could be used in practice, leading to "a breakdown in Anthropic's relationship with the Pentagon."

A top Pentagon official told NBC News that Anthropic's top executive contacted Palantir's top executive to ask whether Palantir's software had been used in the Maduro operation.

According to the Pentagon official, a Palantir executive was alarmed that the question was raised in a way suggesting Anthropic might object to the use of its software in the operation.

Citing the classified nature of military operations, an Anthropic spokesperson would not confirm or deny that Claude was used in the operation against Maduro. "We cannot comment on whether Claude, or any other AI model, was used in any specific operation, classified or otherwise," the spokesperson told NBC News in a statement.

The spokesperson pushed back on that characterization of the incident, telling NBC News that the company had not had informal conversations with partners about Claude's use or shared any disagreements related to its work with the military.

"Anthropic has not discussed the use of Claude in specific operations with the Department of Defense," the spokesperson said. "And we have not discussed this, or raised concerns, with any of our industry partners outside of regular discussions on difficult technical issues."

Palantir did not respond to a request for comment.

A key rift between Anthropic and the Department of Defense appears to stem from a broader dispute over the military's future use of Anthropic's systems. The department recently reiterated its desire to use all available AI systems for any purpose permitted by law, while Anthropic says it wants to keep its guardrails in place.

Senior Pentagon spokesman Sean Parnell told NBC News that “The Department of Defense’s relationship with Anthropic is being reviewed.”

“Our nation needs our partners to be willing to help our soldiers win in any war,” he said in a statement.

"Ultimately, this is about our military and the safety of the American people."

On Tuesday, Undersecretary of Defense Emil Michael said the department's talks with Anthropic had hit a snag due to disagreements over the use of its systems, according to CNBC.

In early January, Defense Secretary Pete Hegseth released a new AI strategy document requiring any contracts with AI companies to eliminate company-specific restrictions on how the military can use those companies' AI systems, permitting "any legitimate use" of AI for Department of Defense purposes.

The document gave defense officials 180 days to include this language in any Defense Department AI contract, which would affect Anthropic's work with the military.

Although Anthropic has broadly supported the use of its services for national security purposes, it has insisted that its systems not be used for domestic surveillance or fully autonomous weapons.

The Department of Defense has objected to Anthropic’s emphasis on these two issues and put increasing pressure on the company.

“Claude is used for a variety of intelligence applications across the government, including the Department of Defense, in accordance with our Use Policy,” an Anthropic spokesperson said. “We are having productive discussions, in good faith, with the Department of Defense about how to move this project forward and address these complex issues.”

Relative to other AI companies, Anthropic has prioritized national security work in its business plans. In August 2025, it created a national security and public sector advisory council made up of former senior defense and intelligence officials, and last week it added Chris Liddell, former deputy chief of staff to President Donald Trump, to its board of directors.

Anthropic has partnered with Palantir since late 2024 to give US defense agencies access to various Claude models. At the time, Anthropic's head of sales and partnerships, Kate Earle Jensen, said the company was "proud to be at the forefront of bringing responsible AI solutions to US classified environments, enhancing analytical capabilities and operational efficiencies in vital government operations."

Anthropic, along with other leading American AI companies such as OpenAI and Google, signed two-year contracts with the Department of Defense in July 2025, each worth up to $200 million, to help "prototype frontier AI capabilities that improve US national security."

"Anthropic is committed to using frontier AI to support America's national security," an Anthropic spokesperson told NBC News in a statement. "We were the first AI company to put our models on classified networks and the first to offer custom models to national security customers."

Anthropic CEO Dario Amodei has repeatedly emphasized his company's commitment to providing its AI services for national security purposes. In an essay published in late January, Amodei wrote that "democracy has a legitimate interest in some powerful AI military and political tools" and that "we must equip democracy with AI, but we must do it carefully and within limits."

Michael Horowitz, who led AI and emerging technologies policy at the Pentagon and is now a political science professor at the University of Pennsylvania, said any concerns about Anthropic's models being used in lethal autonomous weapons are likely irrelevant to the current discussions, given the nature of Anthropic's products.

"I would be surprised if Anthropic's models were the right ones to use for autonomous weapons systems right now, because the algorithms for that need to be more predictable than Claude's," Horowitz told NBC News.

“My sense is that Anthropic wants to increase the depth and breadth of their work with the Pentagon. Based on what we know, this sounds like more of an argument about theoretical possibilities than real-world use cases on the table.”
