The controversy over the U.S. military’s contract with Anthropic for the use of its AI has escalated.
After a couple of days of intense negotiations between Anthropic and the Department of War, President Trump ordered all federal agencies to stop using Anthropic’s AI, Claude, with a phase-out over the next six months.

How Did We Get Here?
Anthropic has taken a stand against use of its AI for (1) autonomous warfare and (2) mass surveillance of people.
On Feb. 24, Defense Secretary Hegseth reportedly threatened to invoke the Defense Production Act (DPA), which would authorize the federal government to deem the technology “critical and strategic” to national security and therefore subject to federal control for national defense purposes. (The DPA was enacted during the Korean War.)
On Feb. 26, CEO Dario Amodei explained Anthropic’s position in a statement published on the company’s website, which stated in part:
“I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
“Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
“Anthropic has also acted to defend America’s lead in AI, even when it is against the company’s short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of whom have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and have advocated for strong export controls on chips to ensure a democratic advantage.
“Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
“However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:
- Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.
- Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.
“To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date.
“The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
“Regardless, these threats do not change our position: we cannot in good conscience accede to their request.
* * *
End of Statement
Amodei also appeared on CBS to explain Anthropic’s position.
On Feb. 27, President Trump ordered federal agencies to stop using Claude, Anthropic’s AI (see above).
Defense Secretary Hegseth also posted a scathing rebuke of Anthropic.

Anthropic then published a response to Hegseth on its website.
Shortly afterwards, OpenAI CEO Sam Altman posted on X that OpenAI had struck a deal with the Department of War.

In a memo to employees, Altman explained: “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.”
“We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”
CNBC reported that Altman’s memo said these conditions were accepted by the Department of War in their agreement with OpenAI.