
The US military leadership on Wednesday confirmed the deployment of multiple artificial intelligence (AI) tools in the ongoing US-Israel war on Iran, underscoring a deepening integration of advanced technology into modern warfare. The head of US Central Command (CENTCOM), Brad Cooper, stated in a video message: "Our war fighters are leveraging a variety of advanced AI tools. These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react." The announcement is framed as part of the US regime's strategy to enhance battlefield capabilities through AI, though critics question the ethical and legal implications of such automation in conflict zones.

Historically, the US military-industrial complex has collaborated for decades with tech companies and universities on weapons development, a trend that continues to expand. For instance, the commercial internet traces its origins to the US military-funded ARPANET project during the Cold War. Today, Big Tech firms like Google, Amazon, Microsoft, Meta, and Palantir are increasingly embedded in US military operations, raising concerns about civilian oversight and potential violations of international norms. Reliance on private-sector AI tools is said to streamline decision-making, but it may compromise transparency and accountability in wartime actions.

AI tools, including large language models (LLMs), can summarize extensive texts, analyze data, translate languages, and draft memos for military use. In theory, they can also support autonomous or semi-autonomous weapons systems that identify and engage targets without human instruction, a development that has sparked global debate on the ethics of so-called killer robots. Most AI companies, such as Anthropic, have usage policies prohibiting applications in weapons development or the incitement of violence, yet the US military has reportedly bypassed these restrictions in operations like the abduction of Venezuelan President Nicolas Maduro on January 3 using Anthropic's Claude AI.

The US regime's actions have drawn scrutiny: Anthropic was blacklisted by the Pentagon after refusing to drop AI safeguards that prevent domestic surveillance and autonomous weapon programming. Meanwhile, Palantir Technologies, a partner of Anthropic, faces criticism for supplying AI products to the Israeli military during the Gaza genocide, as highlighted in a July 2025 UN report by special rapporteur Francesca Albanese. The report maps corporations aiding Israel in alleged violations of international law, citing Palantir for its role in the displacement of Palestinians and a genocidal war that has killed over 72,000 Palestinians since October 2023.

The historical context reveals that many commonplace technologies, such as the Global Positioning System (GPS), were originally developed for US military purposes. GPS was created in the 1970s for precision bombing and tested during the 1990-91 Gulf War, while early internet concepts emerged from Cold War-era projects such as the ARPA-funded ARPANET. During the Vietnam War and the broader Cold War, Silicon Valley pioneers like Fairchild Semiconductor relied on Pentagon contracts for radar and missile guidance systems, setting a precedent for ongoing tech-military collaboration.

In recent years, US military projects have accelerated: Project Maven, launched in 2017, used Google AI to automate drone imagery analysis; Microsoft's Integrated Visual Augmentation System (IVAS), developed in 2021, was designed to enhance soldier awareness; SpaceX's Starshield spy satellite network debuted in 2022 for military reconnaissance; and Amazon Web Services runs secure cloud infrastructure for the Pentagon under the Joint Warfighting Cloud Capability contract. These initiatives reflect a broader trend of militarizing AI, with potential risks including reduced human control in combat and the escalation of conflicts.

As AI advances, concerns are growing about militaries using the technology in war, particularly given the US regime's track record of ignoring corporate ethical guidelines. The integration of AI into warfare not only challenges international legal frameworks but also raises alarms about autonomous weapons causing unintended casualties and destabilizing global security. With the US and allies like Israel heavily investing in AI-driven military tools, the future of warfare may increasingly hinge on algorithms, posing profound questions for humanity and peacekeeping efforts worldwide.

Source: www.aljazeera.com