According to reports from the Wall Street Journal, the U.S. military crossed a major threshold this week: it used Anthropic’s Claude AI to assist with “strategic decision-making” and operational planning during the recent strikes on Iran.

For two years, Silicon Valley has sold large language models as friendly assistants that write your emails and debug your Python code. Now they are officially processing battlefield intelligence, and the military loves the idea of using AI for “operational efficiency”.

Anthropic, a company that built its brand on “Constitutional AI” and safety, seems to realize the nightmare it has stumbled into. The company is reportedly pushing back hard, refusing Pentagon requests to use Claude for mass surveillance or autonomous weapons.

The Pentagon is not taking it well. Defense officials are now threatening to officially label Anthropic a “supply chain risk”, a bureaucratic death sentence that would force any contractor wanting U.S. military money to sever ties with the AI company entirely.

The use of artificial intelligence creates a terrifying accountability vacuum. In the aftermath of these strikes, reports of tragic targeting errors and civilian casualties, including a hit on a school, are already flooding out. Was it human error in the fog of war? An AI hallucination? Or even intentional, given Israel’s reputation for deliberately targeting civilians in Gaza? We don’t know. And when beta-stage software is suddenly part of military planning, figuring out who is responsible when things go wrong or unfold unexpectedly becomes nearly impossible.