
On February 28, 2026, the United States and Israel launched Operation Epic Fury, striking 1,000 targets inside Iran in the first 24 hours alone - nearly double the scale of the 2003 "shock and awe" campaign in Iraq.[1] By mid-March, the total had crossed 6,000.[2][3] The difference was not a larger arsenal or a more willing executive. It was a software system.
That system is the Maven Smart System, built by Palantir Technologies and integrated with Anthropic's Claude large language model. Maven consolidates what were previously eight or nine separate intelligence and targeting platforms into a single interface - ingesting satellite imagery, drone video feeds, intercepted communications, radar data, and human intelligence reports, then processing them through machine learning algorithms that identify targets, recommend weapons, and assess strike options.[4] Cameron Stanley, the Pentagon's chief digital and artificial intelligence officer, described the result at Palantir's AIPCon conference in March: "We've gone from identifying the target to now coming up with a course of action, to now actioning that target, all from one system. This is revolutionary."[5] What once required hours, he said, now takes minutes. In some cases, seconds.
Before Maven, that analytical work was spread across those eight or nine systems, each requiring human operators to manually move detections between platforms - a process Stanley described at AIPCon as people "literally moving detections left and right in order to get to our desired end state."[5] Maven eliminates that handoff friction. The military's ambition for it has a name: the "third offset" - the idea that after nuclear weapons and precision-guided munitions, the decisive American advantage in this era would be the speed and quality of command decisions.
Project Maven launched in April 2017, originally with Google as the primary technology partner. When Google withdrew in 2018 following internal employee protests, Palantir absorbed the project. The scale of what followed is striking: a Georgetown University investigation found that the 18th Airborne Corps had by 2024 used AI to reduce a 2,000-person intelligence analyst team to just 20, processing the same volume of information faster.[6] Chad Wahlquist, a Palantir architect who worked on the system, confirmed as much at AIPCon: "I saw stats where normally we would have 2,000 intelligence officers actually trying to do targeting and look at stuff. Now that's 20."[5] That is not a marginal efficiency gain. It is a categorical change in who - and how many - make decisions to use lethal force.
The system's pipeline maps closely onto the military's F3EAD targeting model: Find, Fix, Finish, Exploit, Analyze, Disseminate. Where that cycle previously required distinct teams and handoffs at each stage - imagery analysts, signals intelligence officers, targeteers, legal advisors, weapons officers - Maven compresses it into a single workflow. During a live demonstration at AIPCon, Stanley walked through how the system ingests a satellite image alongside flight-tracking and other data feeds, narrows a scene to a specific vehicle in a parking lot, and then surfaces a recommended strike asset - in the demonstration's case, a .50-caliber M2 Browning mounted on a Stryker - all through a sequence of three mouse clicks.[12] The Missile Defense Advocacy Alliance, which has reviewed the system's development, notes that the targeting cycle was compressed from roughly 12 hours in Maven's 2020 field exercises to under one minute in current deployment.[13]
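To make that compression concrete, here is a minimal sketch of an F3EAD-style pipeline in Python. It is purely illustrative: the stage names come from the doctrine described above, but every function, field, and threshold below is a hypothetical stand-in, not anything drawn from Maven itself.

```python
from dataclasses import dataclass

# Illustrative stand-ins only: none of these names, fields, or thresholds
# come from Maven. The point is the shape of the workflow, in which stages
# that once belonged to separate teams run as a single function chain.

@dataclass
class Detection:
    object_id: str
    object_class: str          # e.g. "vehicle"
    location: tuple            # (lat, lon)
    confidence: float          # model confidence, 0.0-1.0

def find(sensor_feeds):
    """FIND: fuse imagery, signals, and flight-tracking feeds into detections."""
    return [d for feed in sensor_feeds for d in feed]

def fix(detections, min_confidence=0.9):
    """FIX: narrow the scene to high-confidence objects of interest."""
    return [d for d in detections if d.confidence >= min_confidence]

def finish(target):
    """FINISH: surface a recommended strike asset for human review."""
    # A real system would weigh range, collateral estimates, and available
    # weapons; this placeholder just returns a canned recommendation.
    return {"target": target.object_id, "asset": "recommended-asset"}

# The whole cycle, which once took hours of handoffs between distinct teams,
# becomes three calls - the software analogue of the three mouse clicks in
# Stanley's demonstration.
def kill_chain(sensor_feeds):
    candidates = fix(find(sensor_feeds))
    return [finish(t) for t in candidates]
```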
Anthropic's Claude is embedded within Maven as the large language model handling intelligence synthesis - helping analysts sort through incoming data, summarize assessments, and surface recommendations. According to sources with knowledge of the integration, Claude does not directly issue targeting commands; it functions as a reasoning layer between raw intelligence and human decision-makers.[7] Palantir architect Patrick Dods, a former submariner who works on Maven, described the system's original mandate as "reducing the hay in a haystack" - using pre-trained computer vision models to perform automatic detection, classification, and identification of objects of military interest, so that analysts can rapidly build a plan of action "not only around tactical action, but around operational and theater level missions."[5] The human operator, in theory, retains the final call.
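That division of labor - model as synthesis layer, human as final authority - can be sketched in a few lines. Everything below is an assumption made for illustration; the function names, the placeholder summary, and the approval step are invented, not drawn from the actual Maven-Claude integration.

```python
# Hypothetical sketch of a "reasoning layer" arrangement: a language model
# condenses raw reports into an assessment, and a human makes the final call.
# Nothing here reflects the real integration.

def synthesize(reports: list[str]) -> str:
    """Stand-in for the LLM step: condense raw intelligence into a summary."""
    # In a real deployment this would be a model call; here it is a placeholder.
    return f"Assessment drawn from {len(reports)} reports."

def human_review(assessment: str) -> bool:
    """The human-in-the-loop step: an operator approves or rejects."""
    return input(f"{assessment}\nApprove? [y/N] ").strip().lower() == "y"

reports = ["SIGINT intercept ...", "drone feed annotation ...", "HUMINT note ..."]
if human_review(synthesize(reports)):
    print("Recommendation forwarded.")  # the model never issues the command itself
```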
The logic of speed in warfare is not new. During World War II, assembling a strike package from imagery collection to completed target dossier could take weeks or months. During the first Gulf War, Iraq's mobile Scud launchers exploited that lag - firing and relocating before US forces could respond. Each subsequent decade brought a shorter targeting cycle: armed Predator drones in the early 2000s collapsed certain missions from days to hours. Maven compresses them further still.
Adm. Brad Cooper, head of US Central Command, put the military's position plainly in a video statement in March: "These systems help us sift through vast amounts of data in seconds, so our leaders can cut through the noise and make smarter decisions faster than the enemy can react."[7] The Pentagon's own AI strategy document states: "Military AI is going to be a race for the foreseeable future, and therefore speed wins… We must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment."[6]
What that framing does not address is what is lost when targeting cycles compress. When a system can process and recommend against thousands of targets in a single day, the time available for human verification of each one does not scale accordingly - it contracts. Mark Beall, who served as the inaugural director of strategy and policy at the Pentagon's Joint Artificial Intelligence Center from 2018 to 2020 and now serves as president of the AI Policy Network, framed the structural tension directly: "There are a lot of steps before the trigger gets pulled. AI systems are being deployed very effectively to accelerate existing workflows. But when it comes to actually deploying weapon systems, this technology is not ready yet."[7]
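A back-of-envelope calculation, using only figures already cited in this piece and the deliberately generous assumptions noted in the comments, shows how little time the arithmetic leaves:

```python
# Back-of-envelope review-time arithmetic at the campaign's reported pace.
# Generous assumptions: review is the only task, it runs around the clock,
# and targets are distributed evenly. None of this reflects actual Maven
# workflows; the figures come from the reporting cited above.
targets_per_day = 1_000
seconds_per_day = 24 * 60 * 60                   # 86,400

per_target_serial = seconds_per_day / targets_per_day
print(per_target_serial)                         # 86.4 seconds per target, one chain

analysts = 20                                    # the 18th Airborne Corps figure
per_target_team = seconds_per_day / (targets_per_day / analysts)
print(per_target_team / 60)                      # ~28.8 minutes per target, and only
                                                 # if all 20 review nonstop for 24 hours
```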
On the first day of Operation Epic Fury, a US Tomahawk cruise missile struck the Shajareh Tayyebeh elementary school in Minab, Iran. According to Iranian authorities, 168 people were killed, including at least 110 schoolchildren - a figure confirmed by Amnesty International's in-depth investigation.[8] A preliminary Pentagon investigation found that US Central Command had created the target coordinates using outdated data from the Defense Intelligence Agency. Satellite imagery analyzed by multiple news organizations showed that the school had been physically separated from an adjacent Islamic Revolutionary Guard Corps compound by a fence line constructed between 2013 and 2016 - a fact that was either absent from or never updated in the targeting database. Semafor reported that publicly available Iranian business listings accurately showed the school's location, and that a simple commercial internet search could have surfaced the discrepancy. More than 120 members of Congress demanded answers as to whether Maven was used to identify the school as a target.[2] The Pentagon confirmed it was investigating Maven's possible role; no definitive public finding has been issued.
What exactly failed in Minab remains officially unresolved: a Pentagon investigation confirmed reliance on outdated data, but whether Maven's classification algorithms compounded that failure or were simply fed a corrupted input has not been established. Daniel Rothenberg, co-director of the Future Security Initiative at Arizona State University, has noted that the intelligence failure dynamic itself is not new - research on the anti-ISIS campaign documented a civilian casualty rate approximately 35 times higher than official US estimates, driven in part by stale or inaccurate targeting records.[1] What AI changes is not the origin of those failures but their velocity: a system processing thousands of target recommendations per day carries forward data errors at a pace no manual review chain could detect in time.
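The failure mode is easy to state in code. The sketch below is hypothetical in every particular - the record, the threshold, the dates - but it shows why a freshness gate matters: absent one, a stale record is just another valid-looking input.

```python
from datetime import date

# Hypothetical illustration of the stale-data failure mode: a targeting
# record created before a site changed character passes every automated
# check unless the pipeline explicitly gates on data age.
record = {
    "site_id": "X-1042",
    "classification": "military compound",   # true when recorded...
    "last_verified": date(2012, 6, 1),       # ...but the fence went up 2013-2016
}

MAX_AGE_DAYS = 365  # hypothetical freshness threshold

def is_stale(rec, today=date(2026, 2, 28)):
    return (today - rec["last_verified"]).days > MAX_AGE_DAYS

# Without a gate like this, a system emitting thousands of recommendations a
# day carries the error forward faster than any manual review chain can catch.
if is_stale(record):
    print("Flag for re-verification before recommendation.")
```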
The phrase "human in the loop" has become the standard assurance offered by military officials, AI companies, and lawmakers when asked about autonomous weapons. The Pentagon's chief spokesperson stated in late February that the military did not "want to use AI to develop autonomous weapons that operate without human involvement." Anthropic's usage policy, which became the source of its legal confrontation with the Pentagon, prohibits Claude's use for lethal autonomous systems operating without human oversight.[9] OpenAI has said the same of its own models.
The problem is that "human in the loop" describes an architectural arrangement, not the quality of judgment that arrangement produces. The relevant question is not whether a human clicks to authorize a strike - it is whether that human has the information, the time, and the cognitive bandwidth to form a genuine independent assessment. David Leslie, professor of ethics, technology and society at Queen Mary University of London, has called the risk "cognitive off-loading": when a system presents a synthesized recommendation with supporting intelligence already processed and packaged, the reviewer is left to evaluate a conclusion rather than examine the underlying evidence, within what Leslie described as "a much narrower time band."[2] Stanley's AIPCon demonstration made the point more concretely than any critic could: from satellite image to recommended strike asset in three mouse clicks.[12] At a pace of 1,000 targets in 24 hours, that band approaches zero. Authorization becomes the whole of the task.
Sen. Mark Kelly, questioning a general at a Senate Armed Services Committee hearing about the Low-Cost Uncrewed Combat Attack System - a drone platform operating in Iran - asked directly whether humans remained the final decision-makers on drone strikes. The general declined to answer in a public forum. Kelly's conclusion: "I am not sure that the law of armed conflict has dealt with this issue."[1] Sen. Elissa Slotkin, a member of the same committee, put it more plainly: "It's really up to the humans, and in this case the Secretary of Defense, to ensure that there's human redundancy for the foreseeable future, and that is what we just don't have confidence in."[7]
The civilian harm numbers from Operation Epic Fury have to be read against a specific institutional context: the US military spent more than a decade after Iraq and Afghanistan building legal and procedural infrastructure to reduce civilian casualties - targeting lawyers, civilian harm mitigation cells, post-strike assessment requirements. That infrastructure has been systematically reduced under the Trump administration. Military lawyers who advise on compliance with international humanitarian law and rules of engagement have been sidelined and, in some cases, dismissed.[6] The operating philosophy, in Secretary of Defense Pete Hegseth's own words: "Maximum lethality, not tepid legality. Violent effect, not politically correct."[2]
The aggregate civilian toll, per available figures as of late March: the Iranian Red Crescent reports 67,414 civilian sites struck, including 498 schools and 236 health facilities.[10] The UN's human rights chief, Volker Türk, stated that civilians were bearing "the brunt of a reckless war," with the Iranian Health Ministry reporting over 1,200 deaths as of mid-March and preliminary Al Jazeera figures placing total deaths - civilian and military - at nearly 2,000.[10][11]
The reduction of institutional safeguards is not incidental to the AI story - it is integral to it. AI-assisted targeting operates within a chain of human decisions about thresholds, oversight, and accountability. When those decisions are made by an administration that has explicitly deprioritized legal constraints, the technology does not compensate; it amplifies.
Maven's deployment in Iran is not simply a government project. It is a commercial one. The Pentagon awarded Palantir an initial Maven contract worth $480 million in 2024, expanded to $1.3 billion by 2025; the US Army separately awarded Palantir a contract worth up to $10 billion.[2] The Pentagon has since designated Maven an official program of record - meaning it is permanent infrastructure, not a pilot. Palantir's market capitalization has approached $360 billion on the strength of these military partnerships.[2]
Palantir CEO Alex Karp, speaking at AIPCon, did not address the Iran campaign directly but was unequivocal about the company's position: "Once the war starts, we're not interested in debating how we're supporting them. We are very, very proud to have our role in making sure that American men and women come home safe and happy and proud of what they're doing. And that sometimes means that people on the other side don't go home."[5]
Anthropic's position has been considerably more fraught. Claude is embedded in Maven and was reportedly used in Iran strikes - yet the company has simultaneously sued the Pentagon over its refusal to strip safety guardrails from the model, including restrictions on use for lethal autonomous systems and domestic surveillance. A federal judge sided with Anthropic in an early ruling, blocking the Pentagon's "supply chain risk" designation.[9] The underlying litigation continues. (Omniscient Media covered the legal confrontation in full at the time of the filing: "Anthropic Sues the Pentagon, and the Paradox at the Heart of the Case.")
The infrastructure built around this campaign is not temporary. At AIPCon, Stanley disclosed that Maven now has over 20,000 active users across more than 35 tools spanning three security domains - a figure that reflects years of quiet institutionalization before the Iran campaign made it visible.[5] The platform is a program of record, meaning it is funded and governed as permanent military infrastructure rather than a procurement experiment. What is being established in Iran is not simply a set of tactics. It is an operational model.
The accountability questions raised by this campaign - who is responsible when AI-assisted targeting kills 168 schoolchildren, what meaningful human judgment looks like at 1,000 targets per day, whether "human in the loop" is a substantive safeguard or a liability disclaimer - are not questions the current administration has shown interest in answering. They are, however, questions that will outlast this campaign, and that every future military power with access to comparable technology will be forced to confront.
Heidy Khlaaf, chief scientist at the AI Now Institute, offered a summary that is worth sitting with: "It's very dangerous that 'speed' is somehow being sold to us as strategic here, when it's really a cover for indiscriminate targeting when you consider how inaccurate these models are."[7]
The US military would contest that framing. The evidence from the first month of Operation Epic Fury does not make the contest easy.
KOLD/AZFamily: "AI targeting system doubles pace of US strikes in Iran" (Mar. 25, 2026)
Peoples Dispatch: "Kill chain: Silicon Valley, AI, and the war on Iran" (Mar. 27, 2026)
Financial Times: "The AI-driven 'kill chain' transforming how the US wages war" (2026)
Reuters: "US uses Anthropic AI, B-2 bombers and suicide drones in Iran strikes" (Mar. 1, 2026)
The Register: "Pentagon AI chief praises Palantir tech for speeding battlefield strikes" (Mar. 13, 2026)
The Conversation: "Iran war shows how AI speeds up military kill chains" (Mar. 17, 2026)
NBC News: "U.S. military is using AI to help plan Iran air attacks, sources say, as lawmakers call for oversight" (Mar. 11, 2026)
Amnesty International: "USA/Iran: Those responsible for deadly and unlawful US strike on school that killed over 100 children must be held accountable" (Mar. 2026)
UN Human Rights Office (OHCHR): "Civilians bear brunt of reckless war in the Middle East, says Türk" (Mar. 2026)