Before we talk about how AI-enabled weapons could cause genocides for which no one can be held accountable, let's break down the spending gap: governments and defense contractors have invested over $18 billion in military AI in the past five years, while global investment in AI for peace, development, and humanitarian aid remains under $1 billion, despite its proven potential to save lives and stabilize regions.
The global military AI market was valued at over $9 billion in 2023 and is projected to exceed $20 billion by 2030, driven by autonomous weapons, surveillance, and combat systems.
In contrast, global spending on AI for humanitarian or development purposes remains in the hundreds of millions—a fraction of military investment. Redirecting even 10% of that military AI spending could fund hunger relief programs capable of reaching hundreds of millions of people annually.
In the documentary Unknown: Killer Robots, we see the terrifying implications of allowing artificial intelligence to control the “kill chain”—the sequence of decisions that leads from identifying a target to taking a human life. If a drone kills the wrong person, who is to blame? Palmer Luckey? The general? The algorithm? This mirrors the aftermath of the 2008 housing crash, when millions lost homes and jobs, yet no one of significance went to jail. When complex systems replace individual accountability, atrocities can occur without consequences. AI-controlled warfare risks creating a future where machines are blamed instead of the tech companies, institutions, and governments that developed and purchased them.
Now, with the introduction of what’s been dubbed the “Big Beautiful Bill,” a sweeping deregulation package aimed at “unleashing AI innovation,” we may remove a crucial safeguard currently enshrined in international humanitarian law and echoed in Pentagon policy: the requirement that a human make the final decision to use lethal force. Deregulating AI without clear red lines around military use is not progress; it is automated, unaccountable violence.
The “fog of war” refers to the uncertainty, confusion, and lack of situational awareness that soldiers and commanders experience in combat—the way chaos and limited information obscure clear decision-making. Today we live in a “fog of AI”: a new kind of uncertainty, in which systems make decisions too fast, too complex, or too opaque for meaningful human oversight.