Why are we using chatbots in warfare?

Hey everyone,

Last week, we watched the start of another war. Thanks to a very public spat between the US military and Anthropic, we can confidently say that AI — and specifically LLMs — was used to assess and accelerate the damage.

International law is failing us in this moment. Israel’s use of AI to indiscriminately target people and innovation-wash their genocide in Palestine has become a blueprint. And rather than face justice in The Hague, Netanyahu has managed to start the war he’s long wanted, this time with the US using its full military might and artificial “intelligence” to attack Iran and Iranians.

I’m normally a very structured, rational thinker, but I’ve found myself having an emotional response this time — a mix of despair and dread about where we’re headed. We’re at a turning point: the US is on the cusp of deeply integrating a technology that we know is unreliable into its use of more and more lethal force, as part of illegal campaigns.

How did we get here?

  1. After years of calling out corporate tech power with little traction, the human rights community, and with it the digital rights community, has been hollowed out by funding cuts and burnout. The people and organizations that would be vociferously fighting against the integration of LLMs into war zones are beleaguered, struggling to operate in a world that insists AI is the future of everything.
  2. The US government has become what many human rights groups long warned it could become: a country led by a lawless executive branch whose power has been strengthened through decades of war-on-terror lawfare and unchecked Islamophobia. Democrats and Republicans alike have steadily tuned American law to allow unaccountable violence, force, and domination at home and abroad.
  3. And then, there’s the AI story. The mythology of innovation, of heroic disruptors remaking the world. These CEOs were always going to look for “real” problems to solve — ways to prove they could serve serious customers and justify the unprecedented capital being poured into their companies. Now they’re trying to wedge immature technology into the most consequential arenas of war and violence.

Tech executives like Sam Altman and Dario Amodei are racing to embed shaky technology into military decision-making, lowering procurement standards in the process — all to win a massive bet that unprecedented capital expenditure and consumer harms can somehow convert into a sustainable business model.

Yet instead of grappling with that reality, the telenovela between OpenAI and Anthropic has become the story. But let’s be clear: Altman and Amodei are one and the same, bit players in a much larger global movement.

So what can we do?

We can see these companies and this technology for what they really are and control the conversations we’re having about them. We can move past the debate over which chatbot is more ethical, and address the core ethical questions: are we really okay with chatbots — namely ChatGPT and Claude — being used in war? And why are they being used in the first place?

If more people speak plainly that we cannot accept the use of these technologies to scale and innovation-wash historical violence, it will matter. It shifts the conversation, raises the political cost, and reminds decision-makers that the public is paying attention.

There is a time and place to debate the technical intricacies of how a technology can be used — to argue about procurement safeguards, which company’s policies are marginally better, or what the most ethical consumer choice might be. But if you find yourself wondering under what conditions these systems might add value, remember who holds power.

Moments like these show exactly how technology in the wrong hands can be used: irresponsibly, fatally, and as an instrument of autocracy, unconcerned with the ethics or accuracy of the outputs. Instead, these systems become little more than technified permission for the powerful to do whatever they want.

This is the time to say something simpler:

We do not want to live in a world where LLMs direct lethal force.

We do not want war zones as testing grounds for chatbots.

And we cannot allow war crimes to be innovation-washed in the name of “progress”.

I made a short video this week to talk more directly about what’s happening, and why it matters right now:

[video preview]

So speak out. Talk to the people around you and articulate a vision for a different world — one where LLMs aren’t directing lethal force, and where our power isn’t limited to choosing between slightly different tech products.

These conversations turn isolated frustration into shared understanding. When people connect around these concerns, we start building the political power needed to actually make change. The more people do this, the harder it becomes for companies and governments to present this future as inevitable.

The more we push back against the normalization of AI in warfare, the harder it becomes to hide what’s happening in plain sight.

Alix

🎙️ On the pod feed now: Reframing Impact

A couple of weeks ago at the AI Impact Summit, world leaders and tech executives shook hands and clapped each other on the back over their shared efforts to use AI to save the world. Terms like “sovereignty” and “AI for good” were thrown around freely, stripped of their original meaning and repackaged as vaguely inspirational slogans. As one of our listeners put it, this is a world that’s often incomprehensible to outsiders, full of jargon and implicit meanings where language itself becomes a weapon.

That’s why we created this series: to cut through the propaganda and bring in voices who can unpack these buzzy terms — and explain what people in power really mean when they use them.

Part One: Staying sovereign when it’s exploitation all the way down

  • Sovereignty: A highly elastic term, which is why people love to repurpose it for their needs. Rafael Grohmann takes us through a few definitions.
  • Data Rich: Sometimes nations that have been so marred by colonial forces have nothing left to offer but their data. Karen Hao explains how AI empires sustain themselves this way.
  • Human Capital: Another key part of any AI supply chain is the invisible workforce having their labour offered up like cake, as Joan Kinyua of the Data Labelers Association puts it.
  • Linguistic Diversity: Researchers have been concerned about this for years, but the concept has only just hit the mainstream. Chenai Chair tells us why.

Part Two: The tyranny of population-level digital technologies

  • AI for Good: Quite literally a paradox, and a term that we probably need to stop using altogether, according to Abeba Birhane.
  • AI for Development: Speaking of toxic positivity, Usha Ramanathan discusses the Aadhaar programme in India.
  • Democratisation: While we all know what this word can represent nowadays, Audrey Tang explains that there is a fair way of deploying tech at a population level.
  • Open Source: Meredith Whittaker explains all the ways the term has been used and abused over the years — and how it cannot really be applied to AI.

Part Three: Why does scale always win?

  • Accountability: Even the most ruthless dictator thinks they are accountable. Nikhil Dey explains how power falsifies accountability.
  • AI for Climate: All dreams of accountability kind of die when governments buy into tech solutionism. Naomi Klein explains why AI cannot address the climate crisis.
  • Multilateralism: These are big conversations happening across multiple contexts — Chinasa T. Okolo explains what this term does and does not represent.
  • Frugal AI: Can you be frugal with something built on scale? Timnit Gebru explains the value of keeping things small, and the costs of large scale AI.

ICYMI: The People’s Policy: Holding Big Tech Accountable was livestreamed on Monday, when David Seligman, Alvaro Bedoya, and Elliott “El’Bo” Awatt came on to discuss their work fostering movements in Colorado that challenge corporate power by standing up for workers’ rights. The recording hit the pod feed on Friday, and you can watch the conversation on YouTube.

Highlights from the New Protagonist Network

  • Framing AI critiques
  • AI in public services
  • AI supply chain
  • AI Impact Summit
  • AI Slop

Find out more about the New Protagonist Network and apply to join.

ICYMI: NPN Slack updates & welcoming new members

Two upcoming events and opportunities:

  • AI Impact Summit closed-door debrief, on March 11: Did you attend the AI Impact Summit in India and want to share your unfiltered thoughts in a closed-door space? Or didn’t attend but would like to hear from people who did? Join us for this closed-door debrief moderated by Alix, with Amba Kak and Mila Samdub.
  • Broadcast Media Training, on April 7: Spots are open for the April 7 session of our Broadcast Media Training, with partner NEON. Sign-ups are open until March 17, on a first-come, first-served basis.

Welcome to our new members! Nasir Anthony Montalvo, Aurora Gómez Delgado (Tu nube seca mi rio), Andy Davies, Maria Jose Lira, Catriona Gray.

What we talked about on Slack this month: How to engage unions in AI politics, the AI Impact Summit, is “the left” winning the AI debate?

131 Finsbury Pavement, London, EC2A 1NT

Computer Says Maybe

A newsletter & podcast about AI and politics
