This revelation signals a major shift in India's military strategy, integrating artificial intelligence across the full spectrum of warfare. Beyond lethal strike capabilities, the initiative encompasses command-and-control systems, cybersecurity, and even human-behaviour analysis to bolster national security.
Technological developments highlighted in the report include underwater autonomous vehicles engineered for sophisticated target classification and AI-enabled missile systems. One of the more controversial developments is the "Face Recognition under Disguise" (FRSD) system, designed to identify individuals at sensitive locations even when they attempt to obscure their features.
This broad AI push aims to achieve "force multiplication," extending the reach of the battlefield while theoretically reducing the physical exposure of Indian troops to direct combat.
However, the Ministry of Defence has candidly acknowledged the inherent volatility of these technologies. The report admits that AI and machine-learning techniques are "not amenable for verified decision making," a technical limitation that could lead to catastrophic "unintended outcomes." To manage this risk, the ministry suggested a semi-automatic framework in which AI provides recommendations rather than final execution orders, though it noted a global lack of consensus on what truly defines "levels of autonomy."
On the international stage, India's stance remains complex and somewhat resistant to restrictive regulation. While participating in UN discussions under the Convention on Certain Conventional Weapons, New Delhi has consistently argued against a legally binding ban on autonomous weapons, viewing such proposals as premature.
India's voting record at the UN reflects this caution; it supports progress in discussions but has abstained from or voted against resolutions that mandate strict human-control norms or independent reporting, fearing the stigmatisation of emerging technology.
Military leaders have expressed sharp concerns regarding the "cognitive offloading" of lethal decisions to algorithms. At the recent India AI Impact Summit, high-ranking officers stressed that command accountability is absolute and cannot be transferred to a machine.
Examples were cited in which commanders intervened to stop machine-recommended strikes that failed to account for civilian evacuations, highlighting the critical necessity of human judgment in the face of algorithmic speed.
The stakes of this technological race are underscored by recent global conflicts in which AI-assisted targeting, such as the Lavender system, resulted in significant civilian casualties due to limited oversight.
As India advances its autonomous capabilities, the current absence of a dedicated oversight body, specific domestic legislation, or a formalised military doctrine remains a critical gap. Without these safeguards, the risk remains that algorithmic errors could translate into irreversible lethal force without a clear line of accountability.
Standing Committee on Communications and Information Technology