Why are we using chatbots in warfare?
Hey everyone,
Last week, we watched the start of another war. Thanks to a very public spat between the US military and Anthropic, we can confidently say that AI — and specifically LLMs — was used to assess and accelerate the damage.
International law is failing us in this moment. Israel’s use of AI to indiscriminately target people and innovation-wash its genocide in Palestine has become a blueprint. And rather than face justice in The Hague, Netanyahu has managed to start the war he’s long wanted, this time with the US using its full military might and artificial “intelligence” to attack Iran and Iranians.
I’m normally a very structured, rational thinker, but I’ve found myself having an emotional response this time: a mix of despair and dread about where we’re headed. We’re at a turning point: the US is on the cusp of deeply integrating a technology we know to be unreliable into increasingly lethal uses of force, as part of illegal campaigns.
How did we get here?
- After years of calling out corporate tech power with little traction, the human rights community, and with it the digital rights community, has been hollowed out by funding cuts and burnout. The people and organizations that would be vociferously fighting the integration of LLMs into war zones are beleaguered, trying to operate in a world that insists AI is the future of everything.
- The US government has become what many human rights groups long warned it could become: a country led by a lawless executive branch whose power has been strengthened through decades of war-on-terror lawfare and unchecked Islamophobia. Democrats and Republicans alike have steadily tuned American law to allow unaccountable violence, force, and domination at home and abroad.
- And then, there’s the AI story. The mythology of innovation, of heroic disruptors remaking the world. These CEOs were always going to look for “real” problems to solve — ways to prove they could serve serious customers and justify the unprecedented capital being poured into their companies. Now they’re trying to wedge immature technology into the most consequential arenas of war and violence.
Tech executives like Sam Altman and Dario Amodei are racing to embed shaky technology into military decision-making, lowering procurement standards in the process — all to win a massive bet that unprecedented capital expenditure and consumer harms can somehow convert into a sustainable business model.
Yet instead of grappling with that reality, we’ve turned the telenovela between OpenAI and Anthropic into the story. But let’s be clear: Altman and Amodei are one and the same, bit players in a much larger global movement.
So what can we do?
We can see these companies and this technology for what they really are and control the conversations we’re having about them. We can move past the debate over which chatbot is more ethical and address the core ethical questions: are we really okay with chatbots like ChatGPT and Claude being used in war? And why are they being used in the first place?
If more people speak plainly that we cannot accept the use of these technologies to scale and innovation-wash historical violence, it will matter. It shifts the conversation, raises the political cost, and reminds decision-makers that the public is paying attention.
There is a time and place to debate the technical intricacies of how a technology can be used — to argue about procurement safeguards, which company’s policies are marginally better, or what the most ethical consumer choice might be. But if you find yourself wondering under what conditions these systems might add value, remember who holds power.
Moments like these show exactly how technology in the wrong hands can be used: irresponsibly, fatally, and as an instrument of autocracy, unconcerned with the ethics or accuracy of the outputs. In those hands, these systems become little more than technified permission for the powerful to do whatever they want.
This is the time to say something simpler:
We do not want to live in a world where LLMs direct lethal force.
We do not want war zones as testing grounds for chatbots.
And we cannot allow war crimes to be innovation-washed in the name of “progress”.
I made a short video this week to talk more directly about what’s happening and why it matters right now:
So speak out. Talk to the people around you and articulate a vision for a different world — one where LLMs aren’t directing lethal force, and where our power isn’t limited to choosing between slightly different tech products.
These conversations turn isolated frustration into shared understanding. When people connect around these concerns, we start building the political power needed to actually make change. The more people do this, the harder it becomes for companies and governments to present this future as inevitable.
The more we push back against the normalization of AI in warfare, the harder it becomes to hide what’s happening in plain sight.
Alix