Anthropic Sues the Pentagon: When Saying No Becomes a Crime


Anthropic just filed two federal lawsuits against the U.S. government. The same legal weapon used against Huawei and Kaspersky has been turned on an American company for refusing mass surveillance.


Ten days ago, we covered how AI ethics became a loser's luxury. The Pentagon had blacklisted Anthropic for daring to set two boundaries: no domestic mass surveillance, no autonomous weapons. End of story? Turns out that was just act one.

On March 9, Anthropic filed two federal lawsuits. The company is suing the United States government. The detail that stands out: the Department of Defense used the same legal tool against Anthropic that it reserves for Huawei and Kaspersky. The "supply chain risk" designation. Except Anthropic isn't a Chinese or Russian company. It's a California outfit, founded by Americans, that simply said "no."

Ten days from ban to courthouse

The speed of escalation is dizzying. Quick rewind.

On February 24, Defense Secretary Pete Hegseth delivered an ultimatum to Dario Amodei, Anthropic's CEO. On the 25th, Amodei refused. Two days later, Trump signed an order halting all federal use of Anthropic products. On the 28th, OpenAI and xAI got their classified clearances. As if the replacements had been warming up on the bench.

On March 5, the DoD dropped the heavy weapon: a formal "supply chain risk" designation. It's a legal missile designed after the Huawei case to protect the U.S. from foreign espionage. This is the first time it's been aimed at an American company. The message is crystal clear: refuse unconditional cooperation, and we'll treat you like an enemy.

Four days later, Anthropic fired back.

Two lawsuits, two angles of attack

Anthropic didn't file a token lawsuit. The company opened two simultaneous legal fronts, and the arguments aren't trivial.

The first lawsuit, filed in California, goes for the jugular: First Amendment violation. Anthropic's thesis fits in one sentence: "The Constitution doesn't let the government use its power to punish a company for taking a position." Under U.S. law, refusing mass surveillance is protected speech. Punishing a company for that speech is unconstitutional.

The second lawsuit, filed in Washington, is more technical but equally devastating. Anthropic attacks on procedure. The DoD didn't follow its own rules: no documented risk assessment, no prior notice to the company, no chance to respond, no written determination, no notification to Congress. In short, the Pentagon skipped every legal step to move faster.

These two angles reinforce each other. Even if a judge hesitates on the First Amendment claim, the procedural violations are a matter of record. The DoD cut corners, and it's documented.

The price of rebellion

Krishna Rao, Anthropic's CFO, put a number on the table: the government's actions could cut the company's 2026 revenues by "several billion dollars." The GSA terminated the OneGov contract, cutting Anthropic off from every federal agency, not just the Department of Defense.

One contradiction deserves attention. On March 6, Dario Amodei went on CBS News and said "the impact is pretty small" and that Anthropic will "be fine." Three days later, the lawsuit claims "immediate and irreparable harm." Both statements can't be true at once. The most likely explanation: the public messaging was meant to reassure investors and employees while the legal team prepared the real damage assessment.

Hard to play both sides. But Anthropic doesn't really have a choice. Show weakness publicly, and you're inviting the government to squeeze harder.

Cracks in the wall

The power balance isn't as lopsided as it looks. Three signals show the government's position has its own weaknesses.

First signal: Caitlin Kalinowski, OpenAI's head of robotics, resigned on March 7. She cited the same concerns as Anthropic about military AI use. When the government's own allies start losing talent because of this strategy, that's a problem.

Second signal: Emil Michael, the Deputy Defense Secretary, let slip a telling admission. In a Fortune interview, he acknowledged Anthropic is "deeply integrated" into DoD systems. His real fear: "What if the software goes down during combat? Are we leaving our people in danger?" He contacted OpenAI and xAI urgently to find alternatives. That sounds more like panic than a planned transition.

Third signal: the Streisand effect. The Guardian reports that Claude, Anthropic's model, has surged in popularity since the conflict began. Every article, every tweet about this works as free advertising. The company that says no to the Pentagon becomes, in the collective imagination, the one with principles. Justified or not, that's brand equity money can't buy.

The real stakes: who's next on the list?

Beyond Anthropic, this case sets a precedent that should worry people well outside Silicon Valley.

The mechanism is simple. The U.S. government uses a national security label to punish a company that exercised a constitutional right. If this precedent holds, any tech company that sets ethical limits on a government contract faces the same treatment. The "supply chain risk" label becomes a compliance weapon, not a security tool.

The question extends to Europe too. Our companies and governments use American AI models. If the U.S. government can force its own companies to drop all ethical limits under threat of retaliation, how much trust should we put in those models? This isn't an anti-American argument. It's a tech governance question nobody's asking loudly enough.

What happens next

Anthropic has requested immediate suspension and permanent invalidation of the measures. The court process will take months, probably longer. But the real battle is happening right now, in public opinion and in the halls of Congress.

This case forces a conversation everyone was dodging: does an AI company have the right to refuse certain uses of its technology? Or does national security automatically erase all ethical considerations?

Ten days ago, we wrote here that AI ethics had become a loser's luxury. Anthropic just decided that luxury was worth a lawsuit against the world's leading superpower. What comes next will tell us if that was courage or naïveté. But at least someone asked the question in front of a judge.


Frequently asked questions

Why is Anthropic suing the U.S. government?
Anthropic filed two federal lawsuits after being blacklisted by the Pentagon for refusing domestic mass surveillance and autonomous weapons. The company claims First Amendment violations and procedural failures.
What's the 'supply chain risk' label used against Anthropic?
It's a legal tool created after the Huawei case to protect the U.S. from foreign espionage. This is the first time it's been used against an American company, treating Anthropic like a foreign adversary.
What's the financial impact on Anthropic?
The CFO estimates the government's actions could cut 2026 revenues by several billion dollars. The terminated OneGov contract blocks Anthropic from all federal government branches.
What are Anthropic's two legal arguments?
The first lawsuit claims First Amendment violation: refusing mass surveillance is protected speech. The second targets procedure: the DoD didn't follow its own legal rules.
What precedent could this case set?
If the government wins, any tech company that sets ethical limits on government contracts faces the same treatment. The national security label becomes a compliance weapon instead of a security tool.
What are the weaknesses in the government's position?
Three signals: an OpenAI robotics lead resigned for the same reasons, the DoD admitted Anthropic is 'deeply integrated' into its systems, and the Streisand effect is boosting Claude's popularity.