The artificial intelligence lab Anthropic has filed a lawsuit challenging the U.S. Department of Defense’s designation of the company as a “supply chain risk,” a move that came after Anthropic refused to allow unrestricted military use of its AI system, Claude. The Pentagon had demanded the removal of guardrails that block the AI’s use in autonomous weapons or domestic surveillance, setting the stage for a legal confrontation with the Trump administration.

Prior to the February 27 deadline for a deal, Anthropic CEO Dario Amodei warned U.S. Defense Secretary Pete Hegseth about the dangers of deploying untested AI in autonomous warfare and declined to lift usage restrictions. The Pentagon, however, argued that technology companies should not dictate warfare matters, with President Donald Trump reportedly labeling the company as run by “left wing nutjobs” and issuing a government-wide ban on Anthropic technology.

In complaints filed in California and Washington D.C., Anthropic contends that the Trump administration’s actions are “unprecedented and unlawful,” alleging that the company is being penalized for “expressing the principle” that AI should be used “in the safest and the most responsible manner” to “maximize positive outcomes for humanity.” The company further stated that even the most advanced AI models remain unreliable for automated weapons systems and that using its AI in surveillance would violate fundamental rights.

The Pentagon has insisted on full access to AI-powered functionality for “any lawful” use, claiming that Anthropic’s refusal amounts to a private entity imposing policy restrictions on defense matters. The designation is seen as an extreme step, as Anthropic is the first U.S.-based tech company to receive the “supply chain risk” label, previously applied only to foreign firms deemed security threats, such as China’s Huawei. Under U.S. law, the designation applies to systems that could “sabotage” or “maliciously introduce” unwanted functions.

The designation effectively bars Anthropic from federal agency business and may impact dealings with contractors, though CEO Amodei noted it has a “narrow scope” allowing non-Defense Department projects. Despite the legal dispute and risk status, Claude remains embedded in the Pentagon’s operational intelligence systems, reportedly used in planning the recent U.S.-Israel attack on Iran, highlighting the ongoing tensions between ethical AI governance and military demands.

Source: www.dw.com