Power used for personal grudges. That’s exactly what’s happening in Trump’s latest clash with Anthropic, the AI company that refused to give him “dictator-style praise.”
Now, Anthropic, the team behind the AI chatbot Claude, is suing the Trump administration over what it calls an “unlawful campaign of retaliation.”
The dispute began after Anthropic refused to allow unrestricted military use of its technology. The company says it wanted safeguards in place for how the AI system could be deployed.
But the disagreement quickly escalated.
Last week, the Pentagon designated Anthropic a “supply chain risk,” a rare move that could cut the company off from defense partnerships and contractors. Now the fight may be about to intensify even further.
According to Axios, the White House is considering an executive order that would remove Anthropic’s AI model from all federal government operations. If issued, the order would block agencies from using the company’s AI chatbot Claude.
Anthropic argues the move is retaliation.
“These actions are unprecedented and unlawful,” the company’s lawsuit states. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.”
The tension worsened after comments from Anthropic CEO Dario Amodei.
In a staff memo that later leaked, Amodei wrote that the conflict with the administration stemmed from the fact that “we haven’t given dictator-style praise to Trump.” The remark soon became a major talking point in Washington and the tech world.
Amodei later apologized for the wording.
“It does not reflect my careful or considered views,” he said in a statement.
Meanwhile, the White House defended the administration’s position. A spokesperson said the president cannot allow a “radical left, woke company to jeopardize our national security by dictating how the greatest and most powerful military in the world operates.”
The dispute has also pulled other tech companies into the spotlight.
OpenAI, the maker of ChatGPT, struck a deal with the Pentagon shortly after the government moved against Anthropic.
That development shines a light on how artificial intelligence companies are balancing national security demands with concerns about the technology’s potential misuse.
For now, Anthropic is asking the courts to block the government’s actions and reverse the Pentagon’s “supply chain risk” designation. The company argues the administration is abusing its power to punish a firm that refused to bend. Judges will now decide whether the government crossed the line.