
The US Department of Defense (Pentagon) has for the first time designated an artificial intelligence company, Anthropic, as a "supply chain risk." This label indicates that the government deems the company insufficiently secure for its use, effectively restricting its contracts with defense agencies and setting the stage for a significant legal confrontation.

Anthropic's leadership is preparing to challenge this decision in court. CEO Dario Amodei stated that the company refused to grant defense agencies unfettered access to its AI tools due to concerns over mass surveillance and autonomous weapons. He wrote, "We do not believe this action is legally sound, and we see no choice but to challenge it in court," highlighting the firm's commitment to ethical safeguards.

The situation escalated after talks broke down in recent days. According to sources, the collapse was partly driven by public criticism from the Trump administration, with President Donald Trump reportedly ordering all federal agencies to stop using Anthropic in a post on his Truth Social platform. A senior Pentagon official then moved immediately to designate the company as a risk, a step Anthropic says neither the White House nor the Pentagon warned it about.

Internal sources suggest that Anthropic is disliked by some in the Trump administration because its CEO has not donated large sums to Trump or publicly praised him. Meanwhile, tech giant Microsoft announced it would continue to embed Anthropic's technology in products for clients, excluding the US Department of Defense, while rival OpenAI secured a new contract with the defense department, purportedly with more guardrails than previous agreements.

Senator Kirsten Gillibrand criticized the designation, calling it "shortsighted, self-destructive, and a gift to our adversaries." She added, "The government openly attacking an American company for refusing to compromise its own safety measures is something we expect from China, not the United States," underscoring the geopolitical tensions and internal friction within the US government. Despite the fallout, Anthropic's AI app, Claude, remains highly popular: it is the most downloaded AI app in several countries and is attracting over a million new users daily.

The Pentagon official defended the action, stating it was about ensuring the military can use technology for all lawful purposes without vendor restrictions. The episode nevertheless highlights broader challenges in the US government's approach to AI governance, with potential implications for innovation and national security as companies navigate increasing regulatory pressure and political interference.

Source: www.bbc.com