
US military is using AI to help plan airstrikes on Iran, sources say, as lawmakers seek oversight

As the US military expands its use of AI tools to identify targets for airstrikes in Iran, members of Congress are calling for greater monitoring and oversight of the technology’s use in the military.

Two people with knowledge of the matter, who asked not to be identified because they were discussing sensitive matters, confirmed that the military is using AI systems from the data analysis company Palantir to identify potential targets in ongoing attacks. The use of Palantir’s software, which relies in part on Anthropic’s Claude AI models, comes as Defense Secretary Pete Hegseth aims to put artificial intelligence at the heart of America’s military, even as he clashes with Anthropic’s leadership over limits on AI use.

However, as AI takes on a broader role on the battlefield, lawmakers are calling for greater focus on the safeguards that should govern its use and increased transparency about how much control is given to the technology.

“We need a full, unbiased review to determine whether AI is already harming or endangering lives in the war with Iran,” Rep. Jill Tokuda, D-Hawaii, a member of the House Armed Services Committee, told NBC News in response to questions about the use and reliability of AI in military operations. “Human judgment should always be part of life-and-death decisions.”

The Department of Defense and leading AI companies such as OpenAI and Anthropic have publicly stated that current AI systems should not be able to kill without human sign-off. But concerns remain that relying on AI for some aspects of analysis or decision-making could introduce errors into military operations.

The Pentagon’s chief spokesman, Sean Parnell, said in a post on X on February 26 that the military “does not want to use AI to create autonomous weapons that operate without human involvement.”

The Defense Department did not respond to questions about how the military is balancing its use of AI to reduce human workload while ensuring the analysis and targeting suggestions are accurate.

Lawmakers and independent experts who spoke to NBC News expressed concern over the military’s use of such tools, calling for clear safeguards to ensure that people remain involved in life-or-death decisions on the battlefield.

“AI tools are not 100% reliable – they can fail in subtle ways and yet operators continue to over-trust them,” said Rep. Sara Jacobs, D-Calif., a member of the House Armed Services Committee.

“We have a responsibility to enforce strict rules on the military use of AI and to ensure that there is human discretion in all decisions to use lethal force, because the cost of getting it wrong can be dangerous to the people and service members who operate these systems,” she said.

Anthropic’s Claude has become an integral part of Palantir’s Maven intelligence analysis program, which was also used in the US mission to capture Venezuelan President Nicolás Maduro. News of Claude’s role in recent military operations was first reported by The Wall Street Journal and The Washington Post.

But that role was complicated by Anthropic’s conflict with Hegseth after the company sought to prevent the military from using its AI for domestic surveillance and autonomous lethal weapons. Last week, the Department of Defense labeled Anthropic a national security threat, a move that could bar its software from military use in the coming months. Anthropic has filed a lawsuit challenging the designation.

Anthropic declined to comment. Palantir did not respond to a request for comment.

In a video posted to X on Wednesday, Adm. Brad Cooper, the head of US Central Command, acknowledged that AI has been an important tool in helping the US select targets in Iran.

“Our soldiers use a variety of advanced AI tools. These systems help us sift through large amounts of data in seconds so our leaders can cut through the noise and make smart decisions faster than the enemy can react,” he said.

“Humans will always make the final decisions about what to shoot and what not to shoot and when to shoot, but advanced AI tools can turn processes that used to take hours and sometimes even days into seconds.”

The Trump administration has publicly embraced the use of technology both in the military and across government.

Rep. Pat Harrigan, R-N.C., said AI is already critical to quickly processing military intelligence, including on Iran.

“AI is a tool that helps our warfighters process large amounts of data faster than any human alone, and what we saw in Operation Epic Fury, more than 2,000 targets attacked with incredible precision, is proof of how these capabilities can be used responsibly and effectively,” Harrigan, who also serves on the House Armed Services Committee, told NBC News in a statement.

“But no AI system can replace the judgment, training, and experience of the American warfighter. A human in the loop is not just a legal requirement, it is a necessity, and nothing in the way our military operates suggests otherwise,” he said.

While no lawmakers contacted by NBC News said AI should be removed from military use entirely, several said more oversight is needed.

Sen. Elissa Slotkin, D-Mich., a member of the Senate Armed Services Committee, said the Defense Department has not done enough to determine how well humans can evaluate AI-assisted analysis and targeting recommendations.

“It’s really up to the leadership, and in this case the secretary of defense, to ensure that there is a human in the loop for the foreseeable future, and that’s exactly what we don’t know,” she said.

Sen. Mark Warner, D-Va., the top Democrat on the Senate Intelligence Committee, said he is concerned about the military’s use of AI to help identify targets and that there are unanswered questions about how the new technology is being used. “This has to be resolved,” he told NBC News.

OpenAI and Anthropic, both of which have worked with the US military, have said that even their most advanced systems are flawed, and the world’s top AI researchers admit they don’t fully understand how leading AI systems work.

In an interview with NBC last month, Anthropic CEO Dario Amodei said: “I can’t tell you that even the systems we build are completely reliable.”

A major OpenAI study published in September found that all major AI chatbots, which rely on programs called large language models, “hallucinate,” or generate false information, at some rate.

Sen. Kirsten Gillibrand, D-N.Y., called for clear rules on how the military can use AI.

“The Trump administration has already proven that it is willing to subvert American law to prosecute an unpopular war,” she told NBC News. “There is little reason to believe that the DOD will be held accountable for its use of AI without clear safeguards.”

Mark Beall, head of government affairs at the AI Policy Network, a Washington, D.C., think tank, and director of AI strategy and policy at the Pentagon from 2018 to 2020, said that while AI can speed up the process of deciding where to strike, it is clear that humans still need to fully evaluate targets.

“There are many steps before the trigger is pulled. AI systems are being used very effectively to speed up existing workflows and allow commanders, analysts and planners to make better and faster decisions,” he added. “But when it comes to the deployment of weapons systems, this technology is not ready yet.”

“These systems are going to get really good, and as adversaries start to use them, there will be more pressure to compress the decision loop so it operates at machine speed,” Beall said. “We have to figure out how to solve this reliability problem before we get there. Regardless of what you think about lethal autonomous weapons, making them safe and effective is in the global interest.”

Heidy Khlaaf, a senior scientist at the AI Now Institute, a nonprofit that advocates for the appropriate use of technology, said she worries that relying on AI to quickly process data for life-or-death decisions could be a way for the military to avoid accountability for mistakes.

“It’s very dangerous that ‘speed’ is somehow being sold to us as a strategic advantage here, when it’s just a cover for indiscriminate targeting once you look at how error-prone these models are,” Khlaaf said.
