5 March, 2026 | 3:52 AM

AI reshaping warfare and what it means for India


In the fast-evolving landscape of modern conflict, artificial intelligence has transcended its role as mere support infrastructure to become a decisive force multiplier. As conflicts rage in West Asia, AI is compressing decision cycles, accelerating targeting, and fundamentally altering how wars are fought—shifting the battlefield from physical trenches to high-speed data and silicon. 

The ongoing coverage of the joint US-Israel operation against Iran, codenamed Operation Epic Fury, underscores this shift. Nearly 900 precision strikes were executed within the first 12 hours alone, a pace enabled by AI-driven compression of the "kill chain" (the cycle from target identification to destruction) from hours to mere seconds.

Algorithms integrated with real-time surveillance data, fed into Pentagon systems since 2024, have guided autonomous drones and loitering munitions to strike missile sites and high-value leadership targets with unprecedented speed. This “war at machine speed” marks a paradigm change, where data and algorithms increasingly dictate outcomes over traditional human-led maneuvers.

Globally, the race is intensifying: the US has inked massive AI contracts worth up to $200 million each with firms like Palantir and major cloud providers for targeting and analytics. Europe is advancing AI for defense while advocating ethical guardrails. Israel refines AI-powered drones and battlefield analytics, China pours resources into military AI and exports related technologies, and even Iran leverages foreign-linked AI for surveillance, cyber operations, and influence campaigns.

The integration of private AI into military operations has not been without controversy. In 2024, Anthropic’s models were deployed for war planning via Palantir under strict red lines: no mass surveillance and no fully autonomous lethal weapons. By 2025, the Department of Defense awarded substantial contracts to Anthropic, OpenAI, Google, and xAI, even clearing Anthropic for classified work.

Yet tensions escalated. Talks collapsed over the "all lawful purposes" clause when Anthropic refused to relax its limits on surveillance and autonomy. On February 27, President Donald Trump directed federal agencies to pause Anthropic's use, citing supply-chain risks. Reports, however, suggest Claude models were still utilized in intelligence support for the Iran strikes. OpenAI later secured a deal incorporating "human-in-the-loop" safeguards, with terms amended to prohibit international surveillance and intentional domestic surveillance, a change prompted by fears of mass-surveillance capabilities.

Turning to India, the country is pursuing a distinctive path in AI for defense, prioritizing small language models optimized for edge and field deployment. Efficiency and localized applicability are emphasized, with over 100 AI projects spanning the three services and battlefield networks. Milestones include a 75-UAV AI drone swarm demonstration in 2021, induction of an offensive swarm in 2023 with a 50-km strike range, and testing of AI-enabled concepts during Operation Sindoor. Yet India trails significantly in raw compute power, possessing just 38,000 GPUs compared to the United States’ 14 million and China’s 9.5 million.

A defense and strategic expert argued that warfare is rapidly transitioning from person-driven to technology-driven, with the West leveraging private partnerships (including with Anthropic and OpenAI) for target identification and precision strikes. He acknowledged India's technological talent but stressed that its integration into defense remains at an early stage. "We have the knowhow, but we need dedicated budgets, R&D facilities, sustained effort, and political willpower," he said, warning against treating AI integration like routine training.

A foreign affairs and geopolitical expert cautioned against overhyping AI. He pointed out that network-centric warfare and algorithms are not new, citing the US experience in Afghanistan, where overwhelming technological superiority failed to deliver lasting victory despite trillions spent. Human intelligence, he noted, still proved decisive in locating targets like Iranian leaders.

Drawing parallels to Blitzkrieg tactics and Operation Sindoor’s initial overwhelming strikes, he described AI as a powerful amplifier but warned of over-dependence: “Technological superiority can become a disadvantage without boots on the ground.” He emphasized that adversaries evolve too, and the “man behind the machine” remains irreplaceable. He advocated a specific Production-Linked Incentive scheme for GPUs, urging conglomerates like Reliance, Adani, and L&T to establish domestic manufacturing, while praising India’s indigenous network architecture (such as the Sudarshan Chakra) that retains sovereign control even when hardware is imported.

Another AI and cybersecurity expert focused on the acceleration AI brings to processes once bogged down by manual analysis of satellite and drone imagery. He recounted past delays caused by format conversions in imported anti-missile systems that required human intervention. While agreeing the human element is vital, he noted AI reduces the number of personnel needed. He highlighted the ethical risks of relying on foreign large language models for defense, especially amid geopolitical tensions, and cited Iran's struggles under sanctions as a cautionary tale. "We cannot rely on importing these technologies—it creates unacceptable risk," he urged, adding that now is the right time to invest heavily in indigenous researchers and defense-specific AI models.

The consensus: AI is a powerful enabler but not a panacea. For India, the path forward lies in sovereign capability development, intelligent system design over raw scale, robust public-private collaboration, and never forgetting that in the fog of war, the human mind behind the algorithm may still prove the ultimate edge. The race is on—and the stakes could not be higher.