
Judge temporarily blocks Trump administration's Anthropic ban

Anthropic is an American artificial intelligence (AI) company founded in 2021.
RICCARDO MILANI/Hans Lucas / AFP via Getty Images

Updated March 26, 2026 at 9:14 PM EDT

A federal judge in San Francisco ordered a preliminary injunction against the Pentagon on Thursday that temporarily puts on ice its potentially crippling decision to label Anthropic a "supply chain risk." The tech company and the Pentagon are in the middle of a dispute over how the military might use the company's artificial intelligence model, Claude.

Judge Rita F. Lin of the District Court for the Northern District of California also temporarily halted a directive from President Trump ordering all federal agencies to stop using Anthropic's technology.

These actions pause the government's ban until the court can decide on the merits of the underlying case.

In the order, Lin wrote that the supply chain risk designation is usually reserved for foreign intelligence agencies and terrorists, not for American companies. "These broad measures do not appear to be directed at the government's stated national security interests," Lin wrote. "If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude."

"Instead," she continued, "these measures appear designed to punish Anthropic."

The injunction stems from a contract spat between Anthropic and the Pentagon that went public in February and has escalated in the weeks since.

Anthropic CEO Dario Amodei said he would not allow Claude to be used for autonomous weapons or to surveil American citizens. The Pentagon says that the military, not the companies, decides how to use the tools it buys from contractors.

President Trump upped the ante by ordering all federal agencies to stop using Claude.

The Pentagon designated Anthropic as a "supply chain risk" earlier this month, citing national security. In a statement at the time, it said that the military "will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk."

Anthropic filed two cases in federal court alleging that this designation amounts to illegal retaliation for its stance on AI safety, and that the label will cost it both customers and revenue, since it will also bar Pentagon contractors from doing business with the company. The suits also allege that the Trump administration violated the company's First Amendment right to speech.

A wide range of organizations, including Microsoft, the ACLU and retired military leaders, have filed amicus briefs with the court in support of Anthropic.

At a hearing on Tuesday, Lin appeared to lean heavily toward granting the preliminary injunction, saying her initial impression was that the ban on Anthropic looked like punishment for openly disagreeing with the government's position.

In court, lawyers for the Pentagon argued that Anthropic's actions rendered it untrustworthy, and that the supply chain risk designation stemmed from the company's decision to try to hem in the military's use of its AI models, rather than from openly opposing the Pentagon's position on the matter.

They also argued that, theoretically, the company could update Claude in a way that endangers national security.

In her order, Lin called the supply chain risk designation "likely both contrary to law and arbitrary and capricious."

Nothing in the statute for applying the supply chain risk designation supports "the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for exposing a disagreement with the government," she wrote.

She also wrote that the Pentagon had previously praised Anthropic as a partner and put it through rigorous national security vetting. But it was not until the company publicly raised concerns about how its technology could be used, she wrote, that the Pentagon "announced a plan to cripple Anthropic: to blacklist it from doing business with any company that services the U.S. military, to permanently cut off its ability to work with the federal government, and to brand it an adversary that could sabotage [the Department of War] and that posed a supply chain risk."

"This appears to be classic First Amendment retaliation," she continued.

Anthropic welcomed the judge's decision. "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," a spokesperson wrote in an email to NPR.

The Pentagon did not immediately respond to a request for comment, but has previously told NPR that the agency's policy is not to comment on ongoing litigation.

Jennifer Huddleston, a senior fellow in technology policy at the Cato Institute, a libertarian think tank, said the preliminary injunction reads as though the judge believes Anthropic is likely to succeed on the merits.

Huddleston said the decision is significant and has broader implications than just this case.

"This preliminary injunction is really diving into some of those classic questions of ensuring that there's not retaliation against a company or an individual for exercising their First Amendment rights, and also ensuring that when such significant decisions are made, things that could be potentially crippling to a business that the adequate due process is followed," she said.

Copyright 2026 NPR

John Ruwitch
John Ruwitch is a correspondent with NPR's international desk. He covers Chinese affairs.