In partnership with

(Was this newsletter forwarded to you? Sign up here.)

The Inside Track: People, Ideas, & Stories shaping how we work, live, and build the future.

Good evening - Michael here. Here's the big business story you need to pay attention to today:

THIS IS A REALLY BIG DEAL

The U.S. government just labeled Anthropic — the maker of Claude, the AI model that was literally running on classified military networks during active combat operations — a "supply chain risk to national security." That designation has historically been reserved for foreign adversaries like China's Huawei.

It has never been publicly applied to an American company. Today, Anthropic sued the government. Here's why every business in America should be paying attention.

The Pentagon Blacklisted America's Leading AI Safety Company. Then Its Rival Swooped In. Then a Million People a Day Signed Up Anyway.

Let me start with the timeline, because the speed at which this unraveled is part of the story.

Anthropic — the San Francisco-based AI company behind Claude — had a $200 million contract with the Department of Defense. Claude was the first frontier AI model deployed on the Pentagon's classified networks, integrated into mission workflows through a partnership with Palantir and running on Amazon Web Services' secure infrastructure.

By all accounts, the technology was working. It was being used across intelligence operations, including — as multiple outlets reported — during the U.S. war on Iran.

Then the Pentagon wanted a contract modification. Specifically, it wanted Anthropic to agree that Claude could be used for "all lawful purposes." No exceptions. No guardrails. No questions.

Anthropic pushed back on exactly two points: mass domestic surveillance of American citizens, and fully autonomous weapons systems without human oversight. That's it. Two guardrails. The company supported every other national security use case and said these exceptions hadn't affected a single government mission to date.

The Pentagon said no deal.

On February 27th, Defense Secretary Pete Hegseth posted on X that he was directing the Department of War to designate Anthropic a supply chain risk. President Trump followed with his own post ordering every federal agency to "immediately cease" all use of Anthropic's technology.

Trump told Politico he "fired" Anthropic "like dogs."

Hours later — hours — OpenAI announced it had struck its own deal with the Pentagon to replace Anthropic on classified networks.

The timing was not subtle.

"No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."

Anthropic

On March 5th, the Pentagon made it official, formally designating Anthropic and its products a supply chain risk, effective immediately.

On March 9th, Anthropic filed two federal lawsuits against the Trump administration, one in the Northern District of California and one in the District of Columbia, alleging illegal retaliation for the company's protected speech on AI safety.

This is now a constitutional case.

WHAT "SUPPLY CHAIN RISK" ACTUALLY MEANS — AND WHY YOU SHOULD CARE

Here's where this stops being a tech story and starts being a business story.

A supply chain risk designation under 10 U.S.C. § 3252 is not a warning letter. It's a tool designed to protect America's most sensitive military systems from foreign infiltration — think backdoors, espionage, sabotage. The entities that have historically received this label are companies like Huawei and ZTE, organizations the U.S. government believed were instruments of the Chinese state.

Anthropic is a company founded by Americans, backed by American investors, headquartered in San Francisco, that was the first AI lab to put its models on classified networks in service of American national security. It is the only American company ever to be publicly designated a supply chain risk.

The designation requires defense vendors and contractors to certify that they don't use Anthropic's models in their work with the Pentagon. On paper, the scope is limited to defense contracts. In practice, the chilling effect is enormous.

The Dominoes Started Falling Immediately.

Lockheed Martin publicly said it would follow the President's direction and look to other LLM providers. At least ten defense tech portfolio companies backed off Claude for defense use cases. The Treasury Department, State Department, and Health and Human Services all directed employees to move off Claude — even though the designation doesn't technically apply to them.

The biggest hit landed on Palantir. The defense software giant gets roughly 60% of its U.S. revenue from government contracts, and Claude was deeply embedded in its Maven Smart System platform — the flagship intelligence and targeting tool used in real-world military operations. Those weren't plug-and-play integrations. Sources described rebuilding the custom prompts, agent chains, and evaluation pipelines as "painful." Reuters reported the process could take months with "real execution risk."

Piper Sandler wrote in a client note that Anthropic is "heavily embedded in the Military and the Intelligence community" and that moving off the technology could create "short-term disruptions" for Palantir. The stock wobbled. Some estimates put over $1 billion in Maven-related contracts at risk during the transition.

And here's the part that should make your head spin: even as the Pentagon moved to cut Anthropic off, Claude was reportedly still being used to support active combat operations in Iran. The AI the government publicly labeled a national security threat was simultaneously helping warfighters identify targets.

THE TRIGGER NOBODY'S TALKING ABOUT

The public narrative has focused on the guardrails dispute. But a senior Pentagon official revealed the actual breaking point.

Undersecretary of Defense Emil Michael appeared on the All-In podcast and explained what happened. After the U.S. raid on Venezuela that captured Nicolas Maduro in January, an Anthropic executive contacted Palantir to ask whether Claude had been used in the operation.

The Pentagon interpreted this as probing for classified information, and as a possible prelude to a terms-of-service enforcement action. Michael said the moment triggered alarm at the highest levels.

"I'm like, holy sh*t, what if this software went down, some guardrail picked up, some refusal happened for the next fight like this one and we left our people at risk?"

Undersecretary of Defense Emil Michael

That fear — of a software vendor having veto power over military action — is what turned a contract dispute into a public war. Whether Anthropic's inquiry was routine due diligence (as the company says) or an overreach into classified territory (as the Pentagon says) depends on who you ask.

But the result is unambiguous: the government decided that no AI company should have the ability to question how its tools are used in combat, and it made an example of the one company that tried.

OPENAI STEPPED IN. IT DIDN'T GO SMOOTHLY.

Hours after Anthropic was blacklisted, OpenAI CEO Sam Altman announced his company had reached its own deal with the Pentagon for classified deployment. He said OpenAI shared the same red lines — no mass surveillance, no autonomous weapons — and had built them into the agreement.

The optics were terrible. It looked like OpenAI swooped in to profit from Anthropic's principled stand.

It got worse. When OpenAI published portions of the contract language, legal experts immediately questioned whether the safeguards were enforceable. Multiple OpenAI employees publicly expressed frustration, with one telling CNN that many inside the building "really respect" Anthropic for standing up to the Pentagon. OpenAI's head of robotics resigned days later, citing the same concerns about surveillance and autonomous weapons.

Altman acknowledged at an all-hands meeting that rushing the deal out was a "mistake" and that it looked "opportunistic and sloppy." He revised the contract language. Bloomberg's editorial board argued the botched rollout actually worked in Anthropic's favor.

"We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."

OpenAI CEO Sam Altman

But here's the counterpoint that matters: OpenAI's national security team argues that if you don't trust the government to follow the law, you shouldn't trust it to honor Anthropic's red lines either.

In other words, contract terms are only as strong as the institution enforcing them. That's a legitimate point — and it's exactly why Anthropic wanted the guardrails written down in the first place.

THE CONSUMER REVOLT THE PENTAGON DIDN'T SEE COMING

Here's the plot twist.

The day after the blacklisting, Claude surged to number one on Apple's App Store. It had been ranked 42nd in January. By March 2nd, Claude was pulling 149,000 daily U.S. downloads versus 124,000 for ChatGPT.

More than a million people were signing up for Claude daily. Paid subscribers doubled. Free users jumped over 60%.

Chalk messages appeared on the sidewalk outside Anthropic's San Francisco offices: "you give us courage" and "thank you." Meanwhile, outside OpenAI: "do the right thing" and "please stand up for civil liberties."

If you're a brand strategist, study this. Anthropic didn't run a campaign. It drew a line, held it, and let the market decide. The company went from niche developer tool to household name in a single news cycle — and the catalyst was the U.S. government telling Americans they shouldn't use it.

So what does this actually mean for you?

If you sell to the federal government: the rules just changed. The designation establishes a precedent that the government can weaponize supply chain risk authority not against foreign adversaries, but against domestic companies that negotiate terms the administration doesn't like.

Thirty former military and intelligence officials — including former CIA director Michael Hayden — wrote a letter to Congress arguing this designation is "a profound departure from its intended purpose" and was meant for companies "beholden to Beijing or Moscow, not American innovators operating transparently under the rule of law." Republican Senator Thom Tillis called the public fight "sophomoric." Democratic Senator Kirsten Gillibrand called it "shortsighted, self-destructive, and a gift to our adversaries."

If you're a defense contractor: Palantir's situation is the case study. A mission-critical AI capability suddenly ripped out of billion-dollar programs. The lesson isn't political — it's operational. Multi-model strategies aren't a nice-to-have anymore. They're a survival requirement. If your tech stack depends on a single AI provider whose relationship with the government can evaporate overnight, your risk management is broken.

If you're an enterprise buyer: Anthropic's CEO clarified that the designation only applies to Claude's use as a "direct part of" defense contracts. Microsoft, Amazon, and Google all confirmed Anthropic's products remain available through their platforms for non-defense work. But "narrowly scoped" in Washington has a way of becoming "broadly interpreted." Multiple defense tech executives told CNBC they preemptively moved their entire workforce off Claude — not just defense teams — because the risk calculus favored caution. Audit your AI vendor dependencies now.

If you're an AI company: the negotiating landscape shifted permanently. OpenAI, Google, and xAI all agreed to let the Pentagon use their models for "all lawful purposes." They saw what happened to the company that didn't. As MIT Technology Review put it, we're watching the Pentagon's AI strategy pressure companies to abandon the lines they'd previously drawn.

Dozens of scientists from OpenAI and Google DeepMind filed an amicus brief supporting Anthropic on Monday, arguing the designation could "harm US competitiveness" and "hamper public discussions about AI risks and benefits."

WHAT I'M ALSO WATCHING:

Anthropic's lawsuit will set the rules for a decade. The legal challenge tests whether the government can use supply chain risk authority — designed for foreign adversaries — against a domestic company exercising First Amendment rights. Anthropic argues the designation violates free speech, exceeds the statutory scope of 10 U.S.C. § 3252, and denies due process.

The Just Security legal analysis argues it's "highly unlikely the Secretary can meet the statutory requirements" because both parties acknowledge negotiations broke down over terms of use, not adversarial threats to DoD systems.

If the designation stands, the precedent is clear: negotiate with Washington on Washington's terms, or face existential consequences. If it falls, the executive branch will have overplayed a hand it can't replay.

The AI safety talent exodus is already starting. OpenAI robotics lead Caitlin Kalinowski resigned days after the Pentagon deal, posting: "Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."

She's not alone. Multiple OpenAI employees told CNN they're frustrated with how leadership handled the contract. If the best safety researchers start migrating toward companies that hold their lines, Anthropic could end up with a talent moat that no amount of Pentagon money can replicate. Watch the hiring announcements.

The vendor concentration risk nobody priced in. Before this blew up, Anthropic said eight of the ten biggest U.S. companies use Claude. Now those companies are doing legal reviews. The real risk isn't that Claude goes away — Anthropic's consumer business is booming and its projected $14 billion in annual revenue comes mostly from enterprises.

The risk is that "supply chain risk" becomes a label that makes procurement teams nervous, even when the designation is narrowly scoped. Fear travels faster than legal analysis. If you're a CIO buying AI tools, the question is no longer just "which model is best?" It's "which model carries the least political risk?" That's a terrible way to buy technology, but it's the world we're in.

ONE THING TO THINK ABOUT:

There's a pattern here that goes beyond any single company or contract.

The government asked Anthropic to remove two guardrails. Anthropic said no. The government responded by deploying the most aggressive procurement weapon in its arsenal — one designed for foreign adversaries — against an American company that built the technology the military was using in active combat.

Then the market responded. A million people a day downloaded the product the government told them to avoid. OpenAI's own employees pushed back on the deal their company rushed to sign. Former CIA directors and retired generals called the designation a dangerous precedent. Scientists from rival labs filed briefs in Anthropic's defense.

The question this raises isn't about AI policy or defense procurement. It's about leverage — and who gets to wield it. When the government can threaten to destroy your business because you tried to build a guardrail, the incentive for every other company is to never build one.

When the penalty for saying "no" is being treated like Huawei, the rational move is to always say "yes." And when your competitors can swoop in hours later with a softer version of the same terms you rejected, the market punishes conviction and rewards compliance.

That's a story about what kind of technology ecosystem we're building — and whether the companies inside it are allowed to have a conscience.

The courts will decide the legal question. The market is already deciding the rest.

Thanks for reading — Michael

Feedback, thoughts, suggestions? Hit reply!

What you just received:

This is The Inside Track: Business — stories about the big happenings, why they matter, and what to do about them, every Mon/Wed/Fri.

If you're into this, you might also like the other stuff I write:

The Weekend Essay (Saturdays) — One idea worth thinking on in business & life.

Aviation (Thursdays) — Straight talk from an actual pilot.

Impact (Periodically) — Doing good in education and healthcare.

You're already set for Business. Add any of the others if you want deeper, more frequent updates in areas that matter to you.

— Michael

About Michael Wildes

Michael Wildes is the founder and CEO of Drive Phase Holding Company, a permanent-capital firm focused on building category-defining companies across business, media (owner of Massif & Kroo), aviation, and impact. After leaving a career as a professional pilot, he spent a year as Business Editor at FLYING Magazine, writing 330+ articles on aviation's transformation. Now he builds permanent-capital companies focused on long-term trends that compound over decades. Based in Arlington, Virginia.

Wake up to better business news

Some business news reads like a lullaby.

Morning Brew is the opposite.

A free daily newsletter that breaks down what’s happening in business and culture — clearly, quickly, and with enough personality to keep things interesting.

Each morning brings a sharp, easy-to-read rundown of what matters, why it matters, and what it means for you. Plus, there are daily brain games everyone's playing.

Business news, minus the snooze. Read by over 4 million people every morning.
