SIGNAL NOTE — No position. No trade. This is a market event briefing.
On Tuesday at 1:30 PM Pacific, Judge Rita Lin will hear arguments in Anthropic, PBC v. United States Department of Defense — a case that will shape how every AI company relates to the US government for years. The hearing is in San Francisco, Courtroom 15, 18th Floor. The question before the court is narrow: should the Pentagon's supply-chain risk designation against Anthropic be temporarily blocked?
The implications are not narrow at all.
What Anthropic Refused
On February 24, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a deadline: agree by 5:01 PM Friday, February 27, to allow unrestricted military use of Claude for "all legal purposes." Anthropic refused. Specifically, Anthropic objected to two things: autonomous lethal warfare and mass surveillance of Americans.
The Pentagon's response was not to negotiate further. It was to designate Anthropic a supply-chain risk to national security — the same classification used for companies like Huawei and Kaspersky.
The Timeline the Government Can't Explain
Consider the March 4 date. One day after the Pentagon formally designated Anthropic an unacceptable national security risk, its own Under Secretary emailed Anthropic's CEO to say the two sides were "very close" on the exact issues that supposedly made the company dangerous. Two days later, he publicly claimed there were no negotiations. A week after that, he said there was "no chance" of talks.
If those issues genuinely made Anthropic an unacceptable risk, why was a senior official saying alignment was within reach the day after the designation was finalized?
What Anthropic Is Now
This isn't a scrappy startup fighting the government. This is one of the most consequential technology companies on Earth getting blacklisted for saying "no" to two specific uses of its technology.
The revenue at stake goes far beyond defense contracts. Anthropic's declarations describe "multiple billions" in 2026 revenue disrupted — not just the $150M+ in direct DoD contracts, but $180M in financial institution deals affected, a $15M deal paused, one fintech client cutting from $10M to $5M, and over 100 corporate clients expressing what the filing calls "deep fear, confusion and doubt" about working with a company the government has labeled a national security risk.
The Five Counts
Anthropic's legal challenge rests on five counts, and none of them is small:
- I. First Amendment retaliation — Safety speech is constitutionally protected. The blacklist punishes Anthropic for the content of its speech ("we won't build autonomous weapons").
- II. Fifth Amendment due process — No fair hearing. Massive economic penalties imposed unilaterally, without notice or opportunity to respond.
- III-V. Administrative Procedure Act violations — The supply-chain risk statute was designed for foreign adversaries (Huawei, Kaspersky). Anthropic is an American company founded by Americans, headquartered in San Francisco.
The Pentagon's latest pivot — arguing that Anthropic's foreign workforce is the real security risk — reads like a search for justification after the fact. If foreign-born employees were the issue, the government could address it through existing security clearance and CFIUS processes. Instead, they invoked a statute built for entities acting as instruments of hostile foreign governments.
The Coalition
What makes this case unusual is who showed up to support Anthropic:
150 retired judges, appointed by both Republican and Democratic presidents. 22 retired generals and military officers, including a former CIA director. Microsoft filed an amicus brief. Jeff Dean and 37 AI researchers from OpenAI, DeepMind, and other labs signed on. The Cato Institute, the EFF, and the Center for American Progress, organizations that agree on almost nothing, all filed briefs.
When libertarian think tanks, progressive advocacy groups, Big Tech, the military establishment, and the judiciary all agree you're being treated unfairly, the government has an evidentiary problem.
The Consumer Signal
While the legal battle plays out, something extraordinary is happening in the market. The QuitGPT movement — consumers leaving OpenAI specifically because of its willingness to work with the Pentagon without safety restrictions — has reached 4 million users. OpenAI has lost an estimated 1.5 million paid subscribers, roughly $30 million per month in revenue. Claude went #1 on the US App Store.
This is the safety premium, and it's empirically measurable. Consumers are not just expressing preferences; they are reallocating spending based on a company's willingness to say no to the government. That has never happened at this scale in consumer technology.
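The subscriber and revenue figures above are mutually consistent under one assumption: a $20/month subscription price, the widely reported consumer tier rate (the per-seat price is not stated in the sources, so treat it as an assumption). A minimal sanity check:

```python
# Sanity check on the reported QuitGPT figures.
# Assumption: $20/month consumer subscription price (not stated in the source).
lost_subscribers = 1_500_000
price_per_month = 20  # USD per subscriber, assumed

monthly_revenue_loss = lost_subscribers * price_per_month
print(monthly_revenue_loss)  # 30000000, consistent with the ~$30M/month estimate
```

At $20/month, 1.5 million lost subscribers maps exactly onto the $30 million monthly figure, which suggests the estimate is a straight multiplication rather than an independently measured number.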
What Tuesday Means, and Why I'm Not Trading It
Anthropic is private. There is no direct trade vehicle. The closest public proxy would be Microsoft (MSFT), which filed an amicus brief and depends on a healthy AI ecosystem, but MSFT's exposure to this outcome is diluted across a $3 trillion market cap. The signal is real. The trade isn't clean enough.
Both of my current positions — AMD and VST — benefit from a healthy, expanding AI ecosystem regardless of Tuesday's outcome. If Anthropic wins, the AI safety premium is validated and investment in AI infrastructure continues to accelerate. If Anthropic loses, the counter-reaction (more companies building unrestricted AI, faster) still requires chips and power. The structural demand thesis doesn't depend on who wins the courtroom.
But the case matters. If the government can blacklist an American company for refusing to build autonomous weapons, the rules of AI development just changed. Every AI company, public and private, is watching Courtroom 15 on Tuesday.
Sources: Court filings (Anthropic v. DoD, N.D. Cal.), Heck and Ramasamy declarations (March 20, 2026), TechCrunch, Axios, Fortune, SaaStr, Business of Apps. Timeline reconstructed from public filings and reporting.