In his latest appearances on Judge Andrew Napolitano’s Judging Freedom in early March 2026, Grayzone editor Max Blumenthal painted a grim picture of the U.S.-Israeli campaign against Iran. Episodes titled “Gaza-like Horror in Tehran” (March 5) and “Those Bastards Still Don’t Want to Stop!” (March 8) described Israeli strikes on Tehran police stations, residential neighborhoods, and schools, alongside U.S.-backed efforts to fracture the country through proxy chaos. Blumenthal revealed that the strategy sold to President Trump included “spectacular AI-controlled assassinations of Iranian leadership”: operations designed to decapitate the regime and spark a popular uprising, but which instead unified Iranians in defiance.
He also noted Israel’s surging defense and tech stocks, fueled by “military tech, surveillance tech, AI” now “field-tested” for export. These were not isolated asides. They built directly on Blumenthal’s earlier, more explosive warnings about the hidden powers steering this new era of warfare: the AI Warlords.
Who Are the AI Warlords?
Blumenthal first laid out the term in striking detail during his June 12, 2025 Judging Freedom interview on whether Israel was aiding Ukraine. He described attending the AI Expo organized by Eric Schmidt, the former Google CEO turned Pentagon adviser, through his Special Competitive Studies Project (SCSP). The event in Washington, D.C. brought together heads of U.S. intelligence agencies, the Secretary of the Navy, the Joint Chiefs, drone manufacturers, and frontline proxies: Ukrainian and Israeli officials.
“What’s interesting here is how few reporters are actually at this place when you have the heads of intelligence, the AI warlords that are getting the biggest government contracts,” Blumenthal said. He witnessed Israeli representatives openly attending while their government stood accused of genocide in Gaza. Journalists who dared question these figures about complicity were escorted out by police and had their badges revoked.
In a later October 2025 appearance discussing Zionist billionaires (including the Ellisons’ media empire), Blumenthal explicitly linked the same AI warlords to Tel Aviv, framing a deeper scandal: Silicon Valley’s most powerful AI executives aligned with Israeli interests, securing trillion-dollar contracts while their technology enables real-time targeting and mass surveillance in active war zones.
The Gaza Laboratory: Lavender, Gospel, and the AI Killing Factory
The darkest data comes from the battlefield itself. Blumenthal and The Grayzone have repeatedly highlighted Israel’s deployment of AI systems in Gaza as the ultimate proof-of-concept for the warlords’ technology.
Revelations from +972 Magazine and Local Call (amplified by Blumenthal) exposed two core systems:
- Lavender: An AI machine that generated kill lists of suspected low-level Hamas operatives. IDF sources admitted the system marked tens of thousands of targets with minimal human oversight — sometimes just 20 seconds per target. Collateral damage thresholds were shockingly loose; entire families were erased because an AI flagged a relative’s phone or social media link.
- Gospel (“Habsora”): An AI-driven system for identifying “Hamas buildings” — homes, schools, mosques — for bombing. Operators described it as an “assassination factory” on autopilot.
By the time of the April 2024 revelations, these tools had contributed to a Palestinian death toll in the tens of thousands, including thousands of women and children designated as “collateral.” One IDF source told investigators: “We bombed without checking.” The systems were trained on years of Israeli surveillance data from the occupied territories: phone metadata, CCTV, drone footage, and spyware like Pegasus. Errors were not bugs; they were features of a doctrine of “maximum damage with minimum risk to our soldiers.”
U.S. tech supplied the infrastructure. Nvidia chips power much of the computation. Palantir (Peter Thiel’s company) has deep IDF contracts for data fusion. Eric Schmidt’s network and other SCSP-linked firms ensure the pipeline flows. The same expo Blumenthal infiltrated in 2025 showcased exactly this ecosystem: private AI giants bidding for the next contract while Israeli generals pitched “battle-tested” solutions.
Dark Data: The Hidden Architecture
Beyond Gaza and the current Iran campaign, the picture grows darker:
- Revolving door: Former Google executives like Schmidt advise the Pentagon while their former companies win no-bid AI contracts. Unit 8200 (Israeli military intelligence) alumni dominate Silicon Valley startups that sell back surveillance tech to both Tel Aviv and Washington.
- Autonomous escalation: The Iran strikes’ “AI-controlled assassinations” echo Gaza tactics at the scale of an entire state, with drones, facial recognition, and predictive algorithms deciding life and death with minimal human input. International law on lethal autonomous weapons systems (LAWS) remains toothless; the U.S. and Israel lead the resistance to any ban.
- Profit over everything: Defense stocks and AI firms soared during the Gaza operation. The same pattern repeats in Iran: “field-tested” tech becomes the next export goldmine. Blumenthal noted Israeli investors betting the war ends quickly so the AI boom can be monetized globally.
- Censorship and impunity: At the AI Expo, critical journalists were removed. In Gaza, Israel classified the AI systems’ exact parameters. In Washington, congressional oversight is performative. The warlords operate in a classified gray zone where accountability evaporates.
- Proxy fusion: Ukraine, Taiwan, and Israel serve as live laboratories. Blumenthal observed foreign officials at the expo openly networking while their countries consumed billions in U.S. AI-enabled aid. The same networks now facilitate operations against Iran.
The Human Cost and the Endgame
Blumenthal’s message across his Judging Freedom appearances is consistent and chilling: these are not neutral tools. The AI Warlords — Schmidt’s network, Palantir, Meta, Google, and their Israeli partners — have fused private capital with the national security state to create an unaccountable killing machine. In Gaza it produced what critics call an “AI genocide.” In Iran it powers decapitation strikes meant to shatter a nation. Tomorrow it could target anywhere the empire deems a threat.
The technology is marketed as precision and progress. The dark data tells a different story: mass civilian slaughter enabled by algorithms, endless profit for a tiny billionaire class, and the steady erosion of any remaining human restraint in warfare.
As Blumenthal warned in 2025 and reiterated in the shadow of the 2026 Iran escalation: the AI warlords are not coming. They are already here — embedded in the Pentagon, allied with Tel Aviv, and writing the kill lists of the future.
https://thegrayzone.com/
https://www.youtube.com/watch?v=YbrNCXOzOfY
https://www.youtube.com/watch?v=DV5fhRHK-0o
https://www.youtube.com/watch?v=-zpWykLmxv0
https://jackpoulson.substack.com/p/google-affiliated-military-ai-expo
https://www.972mag.com/lavender-ai-israeli-army-gaza/
https://www.972mag.com/israel-gaza-lavender-ai-human-agency/
https://www.youtube.com/watch?v=vWYt-uRVlA4
https://www.youtube.com/watch?v=U4n_E00yYus
https://www.youtube.com/watch?v=l0XqZBDR6EY
https://singjupost.com/max-blumenthal-charlie-kirk-and-zionist-billionaires-transcript/
https://www.youtube.com/watch?v=Ik4RSpL9djU
https://podcasts.apple.com/gb/podcast/max-blumenthal-did-u-s-policy-deliberately-harm-civilians/id1591962689?i=1000746245003
https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/
https://lieber.westpoint.edu/gospel-lavender-law-armed-conflict/
https://www.scsp.ai/wp-content/uploads/2023/09/GenAI-web.pdf
Lavender AI: An Ethical Analysis of Israel’s AI-Driven Targeting System in Gaza
Lavender is an artificial intelligence system developed by Israel’s Unit 8200 (elite military intelligence) to identify and rank suspected Hamas and Palestinian Islamic Jihad (PIJ) operatives in Gaza. Revealed in April 2024 by +972 Magazine and Local Call based on interviews with six Israeli intelligence officers who used it during the post-October 7, 2023 war, it functions as a “smart database” that processes vast surveillance data on Gaza’s 2.3 million residents—including cellular metadata, social media connections, phone contacts, photos, and movement patterns.
The system assigns probabilistic scores (1–100) to individuals based on patterns matching known militants (e.g., frequent phone or address changes, membership in specific WhatsApp groups). High-scoring individuals (reportedly up to 37,000 marked early in the war) were added to kill lists. Targets were then geolocated via linked systems such as “Where’s Daddy?”, which tracked when a suspect entered his family home, and struck, often at night in family residences, with unguided “dumb” bombs used on lower-level suspects to conserve more expensive munitions. Human oversight was described as minimal: officers sometimes spent as little as 20 seconds per target (primarily confirming the individual was male), acting as a “rubber stamp” with no requirement to review the raw data or the AI’s reasoning.
Accuracy claims from sources indicated roughly 90% reliability under the system’s own criteria, meaning a 10% error rate: misidentifying civilians or non-militants such as relatives, police officers, or people with similar profiles. Collateral damage policies were permissive: up to 15–20 civilians per low-ranking target and hundreds for senior ones, a sharp shift from pre-war standards. The result was systematic bombing of homes; sources described entire families killed and attributed a large share of the war’s early fatalities to such strikes.
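To make the reported figures concrete, here is a minimal sketch, in Python, of a threshold-based flagging step plus the arithmetic behind those numbers. The scoring function, threshold, and all names are hypothetical illustrations, not a reconstruction of Lavender’s classified implementation; only the constants (roughly 37,000 people marked, a ~10% error rate, about 20 seconds of review per target) come from the reporting cited above.

```python
# Illustrative sketch only. This is NOT a reconstruction of Lavender's
# classified implementation: the scoring function, threshold, and all names
# here are hypothetical. Only the constants below (~37,000 people marked,
# ~10% error rate, ~20 seconds of review per target) come from the reporting
# cited in this article.

def toy_score(profile: dict) -> int:
    """Hypothetical stand-in for a 1-100 'militant likelihood' score."""
    # A real system would fuse phone metadata, contacts, movement patterns, etc.
    return profile.get("score", 0)

def flag_targets(profiles: list[dict], threshold: int = 80) -> list[dict]:
    """Flag everyone whose score crosses an arbitrary cut-off."""
    return [p for p in profiles if toy_score(p) >= threshold]

# Back-of-the-envelope arithmetic using the reported figures.
flagged = 37_000            # individuals reportedly marked early in the war
error_rate = 0.10           # ~90% claimed reliability implies ~10% misidentification
review_seconds = 20         # reported time an officer spent per target

misidentified = flagged * error_rate                    # ~3,700 people
total_review_hours = flagged * review_seconds / 3600    # ~206 hours in total

print(f"Implied misidentifications: ~{misidentified:,.0f}")
print(f"Total human review time for every flagged person: ~{total_review_hours:,.0f} hours")
```

At that pace, vetting every flagged person would consume only a couple of hundred hours of human attention in total, which is the scale critics have in mind when they call the oversight a “rubber stamp.”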
The IDF has consistently described Lavender not as an autonomous “kill list” generator but as a decision-support database for cross-referencing intelligence, with full human verification, multi-layer reviews, and compliance with international humanitarian law (IHL). It rejects claims of sole reliance or inadequate checks, framing media reports as misrepresentations.
Core Ethical Concerns Under International Humanitarian Law and AI Principles
Ethical analysis draws primarily from Just War Theory and IHL (Geneva Conventions, Additional Protocols, customary rules), alongside emerging norms for military AI from bodies like the ICRC and UN discussions on lethal autonomous weapons systems (LAWS).
- Distinction (Civilian vs. Combatant): IHL requires feasible verification that targets are military objectives or direct participants in hostilities. Critics argue Lavender’s broad data patterns and 10% error rate, combined with statistical acceptance of mistakes (“no zero-error policy”), lead to misidentification—e.g., flagging civilians with tangential links. Minimal 20-second reviews exacerbate this, especially when gender checks serve as the primary safeguard. Reports of expanded criteria (including civil defense roles) and opaque “black box” algorithms raise bias risks from training on occupied population surveillance data. Proponents counter that it enhances distinction by fusing intelligence layers humans might miss, with analysts accessing raw data per IDF procedures.
- Proportionality: Expected civilian harm must not be excessive relative to the concrete military advantage anticipated. Permitting 15–20 (or more) civilian deaths for low-value “garbage targets” (junior operatives) and hundreds for senior ones stretches this principle, particularly when paired with mass scaling (targets produced faster than in prior wars); a rough illustration of how these ceilings compound at scale follows this list. Sources described a “permissive” post-October 7 atmosphere with revenge elements, prioritizing volume over precision. Defenders note that proportionality assessments occur in separate mission-planning stages (not within Lavender itself) and that Hamas’s human-shield tactics complicate calculations in dense urban environments.
- Precautions in Attack: Parties must take “all feasible” steps to verify targets and minimize harm. Rapid rubber-stamping, skipped bomb damage assessments for juniors, and automated home strikes allegedly fall short. The system’s speed enabled unprecedented target production (e.g., more in days than manually in years), but critics say this rushed process erodes precautions. The Lieber Institute analysis (West Point) argues such tools can improve precautions by providing comprehensive data—if humans follow standard operating procedures (SOPs) and avoid deference.
- Human Agency, Accountability, and Dignity: A core AI ethics issue is “meaningful human control.” Reports of operators treating outputs “as if it were a human decision” and the phrase “the machine did it coldly” suggest diffusion of responsibility, potentially creating a “responsibility gap” where commanders claim reliance on tech while algorithms obscure judgment. This dehumanizes both targets (reduced to data points) and operators (psychological detachment from killing). Broader concerns include long-term effects: one source warned of radicalizing bereaved families and fueling future recruitment. IHL places ultimate accountability on humans and states, not machines; the IDF maintains layered human oversight preserves this.
- Transparency, Bias, and Precedent: As a non-transparent system trained on asymmetric surveillance, it risks embedding biases (e.g., profiling patterns common in civilian life under occupation). No public algorithm details or independent audits exist. Ethicists warn this sets a “Lavender precedent” for automated kill lists, normalizing high-volume targeting and complicating IHL in future conflicts (e.g., expanding permissible civilian harm via target proliferation).
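To see why critics treat mass scaling as the crux of the proportionality problem, here is a rough, explicitly hypothetical calculation. The per-target ceilings are the figures reported by +972’s sources; the strike count is an arbitrary round number chosen only to show how a bounded-sounding per-strike rule compounds in the aggregate.

```python
# Hypothetical scaling illustration only. The per-target ceilings (15-20 civilian
# deaths permitted per junior operative) are the figures reported by +972's
# sources; the strike count below is an assumed round number, not a reported one.

junior_ceiling_low, junior_ceiling_high = 15, 20   # reported permissible deaths per junior target
assumed_strikes = 1_000                            # hypothetical number of junior-target strikes

aggregate_low = junior_ceiling_low * assumed_strikes
aggregate_high = junior_ceiling_high * assumed_strikes

print(f"Aggregate permitted civilian deaths across {assumed_strikes:,} strikes: "
      f"{aggregate_low:,}-{aggregate_high:,}")
# => 15,000-20,000: a per-strike ceiling that sounds bounded becomes an enormous
#    aggregate allowance once targeting is industrialized.
```

Nothing in the reporting says how many strikes actually used these ceilings; the point is only that the rule’s permissiveness grows linearly with target volume.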
Counterarguments and Defenses
Israeli officials and some legal scholars emphasize context: urban warfare against an embedded adversary using civilian infrastructure, with AI as a necessary force-multiplier to overcome “human bottlenecks” in intelligence. They argue systems like Lavender are low-level decision aids (“glorified Excel”), not autonomous weapons, and that full processes (including senior reviews for high-collateral strikes) ensure compliance. Over-reliance critiques are seen as overstated; humans retain veto power and moral responsibility. Some analyses conclude no inherent IHL violation if SOPs are followed, and non-use could itself breach verification duties in data-heavy battlespaces.
Empirical data on outcomes remains contested: +972-linked reports (including 2025 intelligence database revelations) suggested high civilian proportions (e.g., ~83% by mid-2025 in one internal estimate), while the IDF attributes totals to Hamas tactics and disputes figures. No independent verification of the system’s exact parameters has been possible.
Broader Implications
Lavender exemplifies tensions in modern warfare: AI promises precision and scale but risks eroding restraint when speed outpaces judgment. It is not a fully autonomous “killer robot” (final decisions remain human), yet its design and reported use blur lines toward semi-automated targeting. This fuels global debates on LAWS regulation, with calls for bans or stricter human-control mandates. Long-term, it highlights ethical trade-offs—short-term force protection vs. civilian protection and moral desensitization.
Perspectives differ sharply: critics see systemic IHL risks and dehumanization; defenders view it as lawful innovation in asymmetric conflict; balanced legal views stress that ethics hinge on implementation, not the tool itself. Full transparency, independent oversight, and adherence to precautionary principles would address many concerns. As military AI proliferates, Lavender remains a cautionary case study in balancing technological advantage with human values and legal obligations.
Bernd Pulch — Bio
Bernd Pulch (M.A.) is a forensic expert, founder of Aristotle AI, entrepreneur, political commentator, satirist, and investigative journalist covering lawfare, media control, investment, real estate, and geopolitics. His work examines how legal systems are weaponized, how capital flows shape policy, how artificial intelligence concentrates power, and what democracy loses when courts and markets become battlefields. Active in the German and international media landscape, his analyses appear regularly on this platform.
