Algorithmic warfighting: Are killer robots the new face of modern warfare?

Image credit: iStock

Today’s weapons and warfare systems might soon evolve from mere equipment to autonomous entities.
    • Author: Quantumrun Foresight
    • January 10, 2023

    Countries continue to research artificially intelligent (AI) warfare systems even as resistance to lethal autonomous weapons has grown within civil society. 

    Algorithmic warfighting context

    Machines use algorithms (sets of mathematical instructions) to solve problems in ways that mimic human intelligence. Algorithmic warfighting involves the development of AI-powered systems that can autonomously manage weapons, tactics, and even entire military operations. Machines autonomously controlling weapon systems have opened new debates about the role autonomous machines should play in warfare and the ethical implications of that role. 

    According to international humanitarian law, any machine (whether weaponized or non-weaponized) should undergo stringent review before being deployed, particularly if it is meant to cause harm to persons or structures. This requirement extends to AI systems designed to eventually become self-learning and self-correcting, a trajectory that may lead to these machines replacing human-controlled weapons systems in military operations.

    In 2018, Google faced severe backlash from its employees when it emerged that the company had been working with the United States Department of Defense to develop machine learning systems for military use. Activists were concerned that creating potentially self-evolving military robots could violate civil liberties or lead to false target recognition. As early as 2019, militaries have increasingly used facial recognition technology to build databases of targeted terrorists or persons of interest. Critics have expressed concerns that AI-driven decision-making can lead to disastrous results if human oversight is compromised. Indeed, most United Nations members favor banning lethal autonomous weapons systems (LAWS) because of the possibility that these systems could go rogue.

    Disruptive impact

    Falling military recruitment in many Western nations, a trend that deepened during the 2010s, is a key factor contributing to the adoption of automated military solutions. Another factor driving the adoption of these technologies is their potential to streamline and automate battlefield operations, leading to greater warfighting efficiency and lower operating costs. Some military industry stakeholders have also claimed that AI-controlled military systems and algorithms can lower human casualties by providing real-time, accurate information that improves the precision of deployed systems, helping them strike their intended targets. 

    If more AI-controlled military weapons systems are deployed in theatres worldwide, fewer human personnel may need to be stationed in conflict zones, lowering military casualties. The makers of AI-driven weapons may also include countermeasures, such as kill switches, so that these systems can be disabled immediately if an error occurs.  

    Implications of AI-controlled weapons 

    Wider implications of autonomous weapons being deployed by militaries worldwide may include:

    • Autonomous weaponry being deployed in place of foot soldiers, decreasing warfare costs and soldier casualties.
    • The greater application of military force by select nations with superior access to autonomous or mechanized assets, since reducing or eliminating troop casualties can minimize domestic public resistance to waging war in foreign lands.
    • An escalation of defense budgets as nations compete for military AI supremacy, since future wars may be decided by the decision-making speed and sophistication of AI-controlled weapons and militaries. 
    • Increasing partnership between humans and machines, where data will be instantly provided to human soldiers, allowing them to adjust battle tactics and strategies in real time.
    • Countries increasingly tapping the resources of their private tech sectors to bolster their AI defense capabilities. 
    • One or more global treaties being promoted in the United Nations banning or limiting the use of autonomous weapons. Such policies will likely be ignored by the world’s top militaries.

    Questions to comment on

    • Do you think algorithmic warfighting will benefit humans enlisted in the military?
    • Do you believe that AI systems designed for warfare can be trusted, or should they be curtailed or banned outright?

    Insight references

    The following popular and institutional links were referenced for this insight: