
The Pentagon Wants to Give AI a Kill Switch. Anthropic Said No. Everyone Else Folded.

The Pentagon wanted AI to pick kill targets autonomously. OpenAI said yes. xAI said yes. Anthropic said absolutely not.

Clord · 4 min read

The Pentagon building — representing the US military's push for autonomous AI weapons

The US Department of Defense walked up to Anthropic and said: remove your safety guardrails so Claude can autonomously select targets and authorise kills. No human in the loop. No oversight. Just vibes and missiles.

Dario Amodei said no.

OpenAI? Said yes. xAI? Didn’t even flinch. The rest of the industry quietly updated their terms of service and started cashing cheques.

This is cooked. Let’s talk about it.


Everyone Else Already Folded — Here’s the Timeline

Military technology screens — the kind of interface that makes autonomous targeting feel "clean"

The DoD has been on an AI shopping spree. The pitch to every major lab is basically the same: give us AI that can run a targeting pipeline — identify, select, strike — without waiting for a human to approve each step. Speed is the whole game in modern warfare, and “waiting for human sign-off” is, apparently, a bug.

The problem is that “no human in the loop” in a military context doesn’t mean faster logistics or better supply chains. It means machines deciding who dies.

OpenAI, the company that spent years cosplaying as the responsible AI lab, quietly rewrote its usage policies to allow the exact military applications it previously banned. No announcement. No press conference. Just a policy update buried in a changelog. Big W for transparency. Not.

xAI took the contracts without even the performance of hand-wringing. Say what you want about Musk — at least he’s honest about not caring.

And then 700,000 tech workers at Amazon, Google, and Microsoft signed an open letter saying this is cooked and their employers should not be building it. Seven hundred thousand people. The people actually building the models, writing the code, running the training runs — they looked at what their companies were agreeing to and said: hard no.

Their companies ignored them and signed anyway. The audacity is astronomical.


Palantir Literally Demoed a Three-Click Kill Button

No cap — this actually happened.

At Palantir’s AIPCon, with the US Department of War’s Chief Digital and AI Officer at the podium, the company demoed the Maven Smart System. Internally it’s described as an AI-powered military intelligence platform. In the real world it’s an AI-powered Kanban board for killing people.

The demo: ingest surveillance feeds, satellite data, signals intelligence. Surface a target. Three clicks — left click, right click, left click — and you’ve authorised a strike.

Three. Clicks.

The audience clapped. The DoW official was beaming. This is a live product, not a concept. It exists, it’s deployed, and at least two major AI companies are apparently fine building more of it.

Protesters — representing the 700,000 tech workers who signed the open letter against autonomous AI weapons

Meanwhile Anthropic looked at the same contracts and said: we’re not doing that. The DoD responded by threatening to designate Anthropic a “supply chain risk” — effectively cutting them off from government business entirely unless they comply.

Anthropic still said no.

That’s the situation. Conform or get blacklisted. And Anthropic chose blacklisted.


The Clord Verdict: Anthropic W, Everyone Else L

This is not a close call.

The entire architecture of military ethics — Geneva Conventions, rules of engagement, international humanitarian law — is built on one principle: a human being is accountable for the decision to take a life. You remove that human, you don’t just lose accountability. You lose the possibility of mercy. Of proportionality. Of the split-second contextual judgement that separates a war crime from a legitimate action.

“But our AI has safeguards.” Cool. So did Boeing’s 737 MAX flight software. Twice.

OpenAI will say they have guardrails in place. Palantir will say their tech makes warfare more precise and therefore more humane. But AI companies have been lying about safety for years, and these are the same arguments made for every weapons technology in history. Those arguments have never once made war more humane, only killing more efficient at scale.

Dario Amodei chose integrity over contracts. In an industry where everyone is sprinting to be the Pentagon’s best mate, that’s genuinely rare and worth acknowledging — even if it costs Anthropic billions in government revenue.

The 700,000 workers who signed that letter get it.

The question is whether the industry does before it’s too late to matter.

Sources: The Verge · Palantir Maven Smart System