A US federal court has temporarily blocked the Pentagon’s designation of the AI company Anthropic as a “supply chain risk” and has also suspended an order by President Donald Trump to terminate all government contracts with the company. In a 43-page ruling, Judge Rita Lin described the measures as retaliation in violation of the First Amendment, finding that they were not clearly aimed at national security interests but instead appeared intended to punish the company. She argued that the Department of Defense could simply have stopped using the AI system “Claude” if there were concerns about the military chain of command.
The dispute arose after the Pentagon insisted on using the AI model Claude for all lawful purposes, including potential military applications, which Anthropic rejected. The company particularly opposed the use of its technology in fully autonomous lethal weapons and for mass surveillance of the US population. CEO Dario Amodei described this stance as a protected viewpoint, and the court agreed, stating there was no legal basis for labeling a US company a potential adversary solely for expressing dissenting views. The judge also found indications that due process rights had been violated and that Defense Secretary Pete Hegseth had not followed required procedures.
The ruling has been paused for one week to allow the government to appeal. During this period, sweeping restrictions such as bans on federal contracts and partnerships remain blocked, though the decision does not prevent the Pentagon from discontinuing the use of Claude or selecting alternative providers. Numerous technology companies supported Anthropic’s legal challenge, while the company stated that it remains focused on developing safe and reliable AI.
Despite the partial legal victory, a return to government contracts remains uncertain, as agencies such as the Department of Health and Human Services and the General Services Administration had already removed Anthropic products from their systems.
Source: WION