
President Donald Trump has accused Anthropic of endangering troops and jeopardizing national security, but CEO Dario Amodei said his company is patriotic.
In an interview with CBS News soon after Trump ordered the federal government to stop working with Anthropic, Amodei pointed out that the AI startup was the first to serve the defense community in a classified setting.
“I believe we have to defend our country from autocratic adversaries like China and like Russia,” he said. “And so we’ve been very lean forward. We have a substantial public sector team.”
While Anthropic has provided its AI to the government, the Pentagon demanded unfettered use in all legal scenarios. But the company maintained it has “red lines”: it will not allow its AI to be used for domestic mass surveillance or in fully autonomous weapons.
Talks failed to produce an agreement, leading Trump to ban Anthropic from government agencies, while giving the Pentagon a six-month phaseout period.
Defense Secretary Pete Hegseth also called the company a “supply-chain risk,” meaning other contractors working for the Pentagon would not be allowed to use Anthropic’s AI for military work.
Amodei told CBS that Anthropic is on board with 98% to 99% of the military’s use cases. But his concern with mass surveillance is that the latest AI is a game-changer, even within current legal bounds.
“That actually isn’t illegal. It was just never useful before the era of AI. So there’s this way in which domestic mass surveillance is getting ahead of the law,” he explained. “The technology’s advancing so fast that it’s out of step with the law.”
As for autonomous weapons, Amodei said AI isn’t reliable enough to take humans completely out of the loop, pointing to the technical problem of “basic unpredictability” in today’s models.
So far, he is not aware of any real-world examples of a user running up against Anthropic’s red lines but acknowledged that it’s not tenable over the long term for a private company to decide these issues.
Ultimately, Amodei said, Congress must set guardrails on AI’s use, though lawmakers have been slow to act. Anthropic is also “not categorically against fully autonomous weapons” but believes AI’s reliability isn’t there yet.
In the meantime, Amodei said Anthropic is still open to working with the government and suggested both sides remain in contact.
“We are willing to provide our models to all branches of the government, including the Department of War, the intelligence community, the more civilian branches of the government under the terms that we’ve provided under our red lines,” he said.
Trump’s and Hegseth’s blacklisting of Anthropic came hours before the U.S. and Israel launched widespread airstrikes on Iran, in what is shaping up to be a prolonged conflict aimed at regime change.
AI has emerged as a critical tool for the military, especially in identifying targets and predicting an adversary’s behavior by quickly analyzing intelligence.
When asked by CBS what he would tell Trump now, Amodei replied, “I would say, we are patriotic Americans. Everything we have done has been for the sake of this country, for the sake of supporting U.S. national security. Our leaning forward in deploying our models with the military was done because we believe in this country.”
But he added, “The red lines we have drawn we drew because we believe that crossing those red lines is contrary to American values. And we wanted to stand up for American values.”
Hanging over Anthropic is the supply-chain risk designation from the Pentagon chief, an unprecedented move against an American company that could dent its growth.
Amodei called it punitive but downplayed the eventual damage, saying it won’t affect non-defense work that Anthropic’s customers perform.
“We’re gonna be fine,” he said. “The impact of this designation is fairly small. Now, the nature of the tweet that the secretary put out was designed to create uncertainty, was designed to create a situation where people believed the impact would be much larger, was designed to create fear, uncertainty, and doubt. But we won’t let that succeed. We will be fine.”
