OpenAI changes deal with Pentagon as critics raise alarm over surveillance

OpenAI CEO Sam Altman unveiled a reworked agreement with the Pentagon on Monday night that governs the Defense Department’s use of its AI services, which he said provides strong assurances that the military will not use OpenAI systems to monitor the home.
The new agreement states that “the AI system will not be used for domestic surveillance of US citizens and nations,” according to a post on the OpenAI website. OpenAI has faced a media backlash as news of a preliminary agreement between a leading AI company and the Pentagon emerged on Friday. Many observers say that the original language shared on the OpenAI website provided more loopholes for the government to spy on Americans.
The move comes after weeks of heated talks between rival AI firm Anthropic and the Pentagon over how the military could use advanced AI systems. Although the Department of Defense wanted Anthropic to agree to use its systems for “any lawful purpose,” Anthropic maintained that its systems could not be used for home surveillance or to control lethal autonomous weapons. Until last week, Anthropic was the only major AI company whose services were actively used on decentralized networks.
Researchers argue that without guardrails, AI could allow authorities to monitor people with unprecedented speed and precision, combing through mountains of digital data to track people’s movements and behavior.
“It is important to protect the liberties of the American people,” Altman wrote in a post on X Monday night announcing new contract language that he said better limits domestic surveillance. “The Department has also confirmed that our facilities will not be used by War Department spies (for example, the NSA).”
Katrina Mulligan, head of national security partnerships at OpenAI, added in another post on X Tuesday morning that “the intelligence components are not included in this contract,” stating that she would be open to future work with the NSA “if the appropriate safeguards were in place.”
OpenAI did not respond to a request for comment.
Many observers remained unmoved on Tuesday, concerned that a summary of OpenAI’s contract with the Pentagon published by the company remained deliberately vague and provided blueprints for domestic surveillance by various intelligence agencies within the Defense Department. The full text of the contract has not been made public.
“OpenAI said the Department of Defense has contractually agreed not to use ChatGPT in American intelligence agencies,” said Brad Carson, a former congressman and Army general counsel who now heads the Washington DC policy group America for Responsible Innovation. “They are happy to show the language of the contract if it benefits them, but they refuse to release this contract agreement to the public.”
“I’ve come to the conclusion unequivocally that this arrangement doesn’t really exist, and they’re just trying to fake it,” Carson told NBC News. Carson recently founded an AI-focused super PAC that received $20 million from Anthropic competitor OpenAI.
Several legal experts agreed that more transparency about the entire contract and any other important clauses is needed to properly evaluate the company’s claims.
“We still need to see the entire contract to say anything with a reasonable degree of confidence,” said Brian McGrail, senior adviser at the AI Safety Center, a nonprofit research and advocacy group. “It’s definitely a step in the right direction, and I want to give OpenAI credit.”
The OpenAI deal with the Pentagon was announced shortly after Defense Secretary Pete Hegseth said he would call rival AI firm Anthropic, which has long been in contract talks with the Pentagon, a major threat to national security. Anthropic said the term, which would force the Pentagon and its contractors to stop using Anthropic’s services for defense purposes, had never been made public by the American company.
In an event held in Sausalito, California, on Monday, Gen. Retired Paul Nakasone, former director of the National Security Agency and the US Cyber Command, said that the Pentagon should work to integrate all the leading companies of American AI technology in the defense of the country.
“We need Anthropic, we need OpenAI, we need all our big model language companies to work with our government,” said Nakasone, a member of OpenAI’s board of directors, at a conference sponsored by the Aspen Institute. “I think the supply chain piece is wrong. The conversations over the weekend and the context of those conversations were difficult for me to listen to. As an American citizen, someone who works in government, I think it’s wrong, okay? This is not a supply chain risk.”
Anthropic has long insisted that the Defense Department cannot use its AI systems for mass surveillance at home or for direct use in autonomous weapons, although it added approvals for the military to use its systems for cyber and missile defense purposes in December. After a meeting between Anthropic CEO Dario Amodei and Hegseth last Tuesday, the Department of Defense announced that Anthropic had reached an agreement at 5 pm ET on Friday.
However, on Thursday, a spokesperson for Anthropic told NBC News that “the latest language from the Department of Defense is designed as a compromise and is contradicted by legislation that would allow those protections to be ignored at will.”
But as Anthropic’s relationship with the Department of Defense crumbled, OpenAI went deeper, with Friday’s announcement of a contract adding new intrigue to a story that had already captivated much of the tech and defense community. In his letter Monday night, Altman said the rush to put money into the deal made the negotiations look “opportunistic and sloppy” even though OpenAI was “genuinely trying to slow things down and avoid a worst-case outcome.”
Throughout the weekend and earlier this week, an army of legal experts scrutinized the latest public contract language from OpenAI, trying to determine whether the company’s terms actually added any powerful protections beyond any official Defense Department use.
“I’m confused as to why the Pentagon would accept this language when they’re just trying to out-Anthropic by asking for something like this,” Charlie Bullock, a senior researcher at the Institute for Law and AI think tank, wrote in X after the new language appeared.
Many legal experts argue that every word in a contract carries a lot of weight, as they say the government will study the terms of the contract in detail.
“The pattern that we’ve seen play out over and over again in these surveillance debates is that the intelligence and national security community end up interpreting exceptions in a much, much broader way than any reasonable person would,” McGrail said. “And because so much is private, there’s limited visibility of public pushback.”
“Now there could be a new hole being used here that we’re not predicting? It’s absolutely possible,” McGrail added.
Experts have also focused on whether the contract is permanently based on today’s notions of legality, as they worry that the government may change the boundaries of “any lawful use” by issuing new executive orders or legal opinions.
The recent debate over the military’s use of AI in domestic surveillance has focused largely on the government’s ability to use commercially available data in its operations, as other methods of spying on Americans can be more difficult to obtain legal approval.
For years, companies that serve or display ads on phones or laptops have been able to collect targeted data about users, including precise location data, and sell that information to various government agencies to identify travel patterns and people’s behavior.
Mulligan, OpenAI’s national security lead, said in a Monday night X post that “new contract language reinforces that domestic surveillance is prohibited under this agreement, including including information obtained commercially.”
Sen. Ron Wyden, D-Ore., who has repeatedly warned in recent years that the federal government is buying and selling data on Americans for surveillance purposes, criticized the Pentagon for not agreeing with Anthropic’s privacy concerns.
“The Department of Defense is challenging Anthropic and asking for regulations on how DOD uses its product,” Wyden said in an emailed statement. “That’s a big cause for fear, given AI’s ability to turn disparate pieces of public or commercial data into highly revealing profiles of Americans. Location data, web browsing records, and information about mental health, political activity and religious affiliations are all available for pennies on the open market and can make Americans understand what’s perfectly legal.”
“Creating AI profiles of Americans based on that data represents a dramatic increase in mass surveillance that should not be allowed, regardless of what the current, outdated laws on the books say.”
Amodei, the CEO of Anthropic, has repeatedly noted that strong commitments from the Department of Defense not to use AI to spy on Americans are necessary because the law has not yet caught up to AI’s growing power to analyze or analyze large amounts of data. Recent research has also shown that people can be recognized by today’s AI systems, even if the underlying data is said to be anonymized.
Protesters of OpenAI’s first deal with the Pentagon surrounded OpenAI’s headquarters in San Francisco this weekend with chalk messages encouraging employees to always question the company’s policies, while releases of OpenAI’s ChatGPT app increased following news of the deal.
Michael Horowitz, former deputy secretary of defense for emerging capabilities and current professor of political science at the University of Pennsylvania, told NBC News that the dispute between the Pentagon and Anthropic went beyond simple contract terms.
“This dispute shows a breakdown in trust between Anthropic and the Pentagon, where Anthropic does not trust that the Pentagon will use its technology responsibly, and the Pentagon does not trust that Anthropic will allow its technology to be used for what the Pentagon views as important national security use cases,” Horowitz said. “Part of that is cultural differences, part of that is politics, part of that is humanity.”



