Anthropic blacklisted by the Pentagon: AI ethics meets military realpolitik


An appeals court upholds the Pentagon's blacklisting of Anthropic. For the first time, an anti-China law is being turned against an American company.


The April 8th paradox

On April 8, 2026, Anthropic lost its appeal before a federal appeals court in Washington. The ruling confirms that the Pentagon can maintain its "supply chain risk" designation against the California-based company.

What makes this verdict striking: less than a year ago, Anthropic was the first AI company to deploy its models on the Pentagon's classified networks. In July 2025, Claude became the first large language model authorized to operate in highly secured military environments. Today, the same company appears on the same list as Huawei.

You'd be right to wonder what went wrong between then and now.

$200 million and the clause that changed everything

It started with a $200 million contract signed in July 2025. Anthropic integrated Claude into classified operational workflows. In return, the Pentagon agreed to an acceptable use policy: Claude would not be used for fully autonomous weapons systems or mass surveillance of American citizens.

That's not a trivial clause. It's the difference between supplying a tool and supplying a tool on your own terms. Anthropic imposed conditions, and the DOD accepted them.

By early 2026, the situation had changed. The Trump administration pushed to renegotiate. The Pentagon now wanted to use Claude "for all lawful purposes," without restriction. Anthropic refused. No guarantees on autonomous weapons, no deal.

That's when Pete Hegseth brought out the heavy artillery.

A law designed for Huawei

10 U.S.C. § 3252 is a provision of US law created to protect military supply chains from hostile foreign actors. In practice, it had mostly been used to exclude Chinese telecom equipment makers from US government contracts.

Anthropic is the first American company to fall under it. According to legal expert Tess Bridgeman of Just Security, designating an American company under this authority is a precedent without equal. And the actual scope of the law may be narrower than Hegseth claims: it prohibits use in national security systems, not across all commercial activity.

The appeals court, for its part, didn't rule on the merits. It simply refused to suspend the designation while the trial continues. Its reasoning: "on one side, a risk of financial harm limited to one company. On the other, judicial management of AI procurement critical to the military during an active conflict."

In plain terms: when in doubt, the Pentagon gets the benefit of the doubt, not Anthropic.

The irony of the split outcome

The current situation is contradictory. The Washington appeals court lets the blacklist stand, while a federal judge in San Francisco, in a parallel proceeding, granted Anthropic a preliminary injunction blocking the ban on Claude across other federal agencies.

The practical result: Anthropic cannot work for the Pentagon, but the CIA, the State Department, or any other federal agency can still use Claude. Contractors working for the DOD must certify that they do not use Anthropic's models in their work for the department.

It's like being banned from the meat aisle of a supermarket but free to shop everywhere else in the store. Uncomfortable, but not fatal.

Acting Attorney General Todd Blanche hailed "a resounding victory for military readiness," adding that "military authority belongs to the commander in chief, not a tech company." That's not wrong. It just carefully sidesteps the real question.

The real question being avoided

Anthropic isn't trying to dictate American military policy. The company set two conditions: no fully autonomous weapons, no mass surveillance of American citizens. Two lines that appear in the debates around the European AI Act, in the ethical principles of a dozen governments, in UN recommendations.

The Pentagon agreed to those same conditions in July 2025. Anthropic is not the party that changed its position.

The concrete question this case poses to the AI industry: can you set contractual usage limits on a government client and survive politically if that client changes administrations?

The answer, for now, appears to be no. Or at least: not without paying a price.

What comes next

This ruling is not final. The appeals court only refused the provisional suspension. The merits of the case — including Anthropic's constitutional arguments under the First and Fifth Amendments — remain to be decided.

Anthropic has stated it remains "confident that the courts will recognize these designations as unlawful." That's the position you have to hold publicly in a situation like this. What that confidence is actually worth, we'll know when the ruling on the merits comes down.

Until then, every AI company considering government work is watching this case closely. Because the real precedent being set here isn't just legal — it's political. And it will define what "AI ethics" can still mean when the client is the Pentagon.

Topics covered:

Geopolitics · Anthropic · News

Frequently asked questions

Why did the Pentagon blacklist Anthropic?
The Pentagon designated Anthropic as a supply chain risk after the company refused to remove its ethical use restrictions (particularly around autonomous weapons and mass surveillance). The Trump administration wanted to use Claude without conditions.
What is 10 U.S.C. § 3252?
It's a US law designed to exclude hostile foreign suppliers (like Huawei) from military procurement. Anthropic is the first American company to face this designation, making the case a legal first.
Can Anthropic still work with the US government?
Yes, partially. The blacklist only applies to the Pentagon. A federal judge in San Francisco granted an injunction protecting Anthropic's access to other federal agencies (CIA, State Department, etc.).
Is this ruling final?
No. The appeals court only refused to temporarily suspend the designation. The core case — including Anthropic's First and Fifth Amendment constitutional arguments — has yet to be decided.
What does this mean for other AI companies?
This case creates a political precedent: an AI company that imposes contractual usage limits on a government client risks being sanctioned if the administration changes. Every company working with governments is watching this case closely.