Pentagon says it is labeling AI company Anthropic a supply chain risk ‘effective immediately’

The Trump administration is following through with its threat to designate artificial intelligence company Anthropic as a supply chain risk in an unprecedented move that could force other government contractors to stop using the AI chatbot Claude.

The Pentagon said in a statement Thursday that it has “officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately.”

The decision appeared to shut down the opportunity for further negotiation with Anthropic, nearly a week after President Donald Trump and Defense Secretary Pete Hegseth accused the company of endangering national security.

Trump and Hegseth announced a series of threatened punishments last Friday, on the eve of the Iran war, after Anthropic CEO Dario Amodei refused to back down over concerns the company’s products could be used for mass surveillance of Americans or autonomous weapons.

The San Francisco-based company didn’t immediately respond to a request for comment Thursday. It has previously vowed to sue if the Pentagon pursued what the company described as a “legally unsound” action “never before publicly applied to an American company.”

The Pentagon didn’t reply to questions in time for publication.

Some military contractors were already cutting ties with Anthropic, a rising star in the tech industry that sells Claude to a variety of businesses and government agencies. Lockheed Martin said it will “follow the President’s and the Department of War’s direction” and look to other providers of large language models.

“We expect minimal impacts as Lockheed Martin is not dependent on any single LLM vendor for any portion of our work,” the company said. It’s not yet clear if the designation aims to block Anthropic’s use by all federal government contractors or just those that partner with the military.

The Pentagon’s decision to apply a rule designed to address supply threats posed by foreign adversaries was quickly met with criticism from both opponents and some supporters of Trump’s Republican administration. Federal codes have defined supply chain risk as a “risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert” a system in order to disrupt, degrade or spy on it.

U.S. Sen. Kirsten Gillibrand, a New York Democrat and member of the Senate Armed Services Committee and Senate Intelligence Committee, called it “a dangerous misuse of a tool meant to address adversary-controlled technology.”

“This reckless action is shortsighted, self-destructive, and a gift to our adversaries,” she said in a written statement Thursday.

Neil Chilson, a Republican former chief technologist for the Federal Trade Commission who now leads AI policy at the Abundance Institute, said the decision looks like “massive overreach that would hurt both the U.S. AI sector and the military’s ability to acquire the best technology for the U.S. warfighter.”

Earlier in the day, a group of former defense and national security officials sent a letter to U.S. lawmakers expressing “serious concern” about the designation.

“The use of this authority against a domestic American company is a profound departure from its intended purpose and sets a dangerous precedent,” said the letter from former officials and policy experts, including former CIA director Michael Hayden and retired Air Force, Army and Navy leaders.

They added that such a designation is meant to “protect the United States from infiltration by foreign adversaries — from companies beholden to Beijing or Moscow, not from American innovators operating transparently under the rule of law. Applying this tool to penalize a U.S. firm for declining to remove safeguards against mass domestic surveillance and fully autonomous weapons is a category error with consequences that extend far beyond this dispute.”

Even as it loses major partnerships with defense contractors, Anthropic has experienced a surge of consumer downloads over the past week as people side with its moral stance. Anthropic has boasted of more than a million people signing up for Claude each day this week, lifting it past OpenAI’s ChatGPT and Google’s Gemini as the top AI app in more than 20 countries in Apple’s app store.

The dispute with the Pentagon has also deepened Anthropic’s bitter rivalry with OpenAI, which announced a deal with the Pentagon on Friday to effectively replace Anthropic’s Claude with ChatGPT in classified environments.

OpenAI CEO Sam Altman later said he shouldn’t have rushed a deal that “looked opportunistic and sloppy.”

Copyright © 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.