AI and national security: US government accuses Anthropic of posing risks to the Pentagon
10 March 16:23
The American company Anthropic, developer of the chatbot Claude, has filed a lawsuit against the administration of US President Donald Trump.
The suit was prompted by the government's decision to designate the company a risk to the Defense Department's supply chains and to order federal agencies to stop working with it.
This was reported by The Wall Street Journal, according to "Komersant Ukrainian".
What the company accuses the government of
In its lawsuit, Anthropic claims that the government exceeded its authority and effectively punished the company for disagreeing with the military’s policy on the use of artificial intelligence.
The company is asking the court to:
- declare the government’s decision illegal;
- allow it to continue working with federal agencies.
The statement emphasizes that the case matters not only for Anthropic but also for other technology companies whose positions on the use of AI may differ from the government's.
Who is involved in the case
The defendants in the case are:
- The US Department of Defense (Pentagon);
- Defense Secretary Pete Hegseth;
- Several federal agencies and administration officials.
The White House has stated that the president “will not allow the company to jeopardize national security.”
Support from scientists
After the lawsuit was filed, 37 artificial intelligence researchers from OpenAI and Google submitted a statement to the court in support of Anthropic.
Among the signatories is Jeff Dean, chief scientist at Google DeepMind.
He works with the lab’s CEO, Demis Hassabis, on Google’s artificial intelligence development strategy.
The researchers warned that punishing one of America’s leading AI companies could harm the scientific and technological competitiveness of the United States.
Why the conflict arose
The Pentagon and Anthropic had previously planned to sign a cooperation agreement.
However, negotiations broke down over disputes about how the company's technologies could be used.
Anthropic insisted on two key restrictions:
- the Claude model must not be used in autonomous weapons;
- it must not be used for mass surveillance of US citizens.
These conditions did not suit the military.
Further escalation
On February 27, Defense Secretary Pete Hegseth said he considered Anthropic a risk to the defense sector's supply chains — an accusation usually leveled at companies from hostile countries.
After the negotiations broke down, the Pentagon signed an agreement with another company, OpenAI.
That agreement was signed on the same day President Donald Trump ordered federal agencies to stop using Anthropic's technologies.
OpenAI subsequently announced that it would amend its contract with the Pentagon to ensure that its technology would not be used for mass surveillance of Americans.