Risk control key in AI-guided weapon systems
Editor's note: The booming AI industry has not only created opportunities for economic and social development but also brought some challenges. Further developing AI standards can help promote technological progress, enterprise development and industrial upgrading. Three experts share their views on the issue with China Daily.
The development of artificial intelligence (AI) and autonomous weapon systems will boost the defense capabilities of some countries, changing the traditional strategic and tactical landscape, giving rise to new challenges and making risk management more difficult.
Generally, the application of AI technology in the defense sector can increase the precision of attacks, but it could also lead to more misunderstandings and misjudgments, escalating disputes and confrontations. For instance, precision-guided weapon systems can strike targets more efficiently and effectively, and AI's fast decision-making can improve combat efficiency but also increase the chances of contextual misjudgment.
In particular, AI's application in information warfare, including "deepfakes", can spread false information and thus increase the chances of the "enemies" making wrong decisions, rendering conflicts more unpredictable and uncontrollable.
First, AI-guided weapon systems may misjudge and misidentify targets, mistaking civilians, friendly forces or non-military facilities for legitimate targets, causing unnecessary deaths and destruction. Such misjudgments can provoke strong reactions or retaliation from the "enemies".
If AI-guided weapon systems cannot understand the complex battlefield environment, they could make wrong decisions. For example, AI might fail to grasp the enemy's tactical intentions or the relevant background information, leading to failed operations or excessive use of force.
Second, AI-guided weapon systems may lack flexibility in decision-making or fail to handle the complex dynamics of the battlefield, because of their dependence on preset rules and algorithms.
Third, AI-guided weapon systems can be vulnerable to hacking and carry cybersecurity risks. For example, hackers could disrupt defense systems or, worse, manipulate them into making wrong decisions or launching unauthorized attacks. In fact, AI-guided weapon systems, once out of control, could pose a serious threat to their own side. Also, they could fall into the hands of terrorists.
And fourth, although AI applications can hasten the decision-making process, it is doubtful whether AI can assess the consequences of its decisions while making them at speed. This calls for establishing strict international rules and oversight mechanisms to minimize the risks posed by AI-guided weapon systems, and to ensure AI is used for the betterment of the people.
To make sure AI-guided weapon systems remain controllable, they should be designed for human-machine collaboration, with humans retaining control of the process. There should be interfaces and feedback mechanisms that give operators regular updates on developments, so timely intervention can be made if and when needed, as the sketch below illustrates.
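To illustrate what such a human-in-the-loop interface could look like in software, here is a minimal sketch in Python. It is not drawn from any real weapon system; the names (ProposedAction, console_operator) and the 0.95 confidence floor are hypothetical, chosen only to show the idea of a decision gate that never acts without a human verdict.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"
    ESCALATE = "escalate"


@dataclass
class ProposedAction:
    """A hypothetical action the automated system wants to take."""
    target_id: str
    confidence: float  # the model's confidence that the target is valid
    rationale: str     # human-readable explanation shown to the operator


def human_in_the_loop_gate(action: ProposedAction,
                           operator_review,
                           confidence_floor: float = 0.95) -> Decision:
    """Never act autonomously: every proposal goes past a human operator.

    Low-confidence proposals are escalated rather than executed, and the
    operator's verdict is final in all cases.
    """
    if action.confidence < confidence_floor:
        return Decision.ESCALATE  # too uncertain even to ask for approval
    # Feedback mechanism: present the rationale and await explicit approval.
    return operator_review(action)


# Example usage: a console prompt standing in for a real operator interface.
def console_operator(action: ProposedAction) -> Decision:
    print(f"Proposed action on {action.target_id}: {action.rationale}")
    answer = input("Approve? [y/N] ").strip().lower()
    return Decision.APPROVE if answer == "y" else Decision.REJECT
```

The design choice worth noting is that the default path is refusal: the system can only escalate or hand the decision to a person, never execute on its own.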
Countries collaborating to formulate international rules and regulations should highlight the importance of global treaties and agreements, similar to the Biological Weapons Convention and the Chemical Weapons Convention, in regulating the use of AI in military applications. Setting technological standards and promoting best practices can help ensure AI weapon systems meet safety and ethical requirements.
Besides, AI systems need to be more transparent, and countries need to follow an open and transparent development process that can be reviewed and verified if and when the need arises. This will help identify potential problems and make the systems more reliable. Independent auditors should also be engaged to audit the manufacturers annually and conduct regular inspections to make sure they are complying with international regulations.
There is also a need to strengthen the ethical and legal frameworks to ensure the military application of AI systems aligns with international humanitarian law, and to hold the manufacturers of AI weapon systems accountable if they violate established norms.
More important, control and monitoring mechanisms should be established to ensure human supervision in AI decision-making processes, because only humans can effectively prevent automated systems from going rogue.
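One way such a monitoring mechanism could be realized in software, purely as an illustrative sketch with hypothetical names rather than a description of any deployed system, is a tamper-evident audit log in which every automated decision and every human sign-off is chained together by hashes, so overseers can detect any after-the-fact alteration of the record.

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only, hash-chained log of automated decisions and human sign-offs.

    Each entry embeds the hash of the previous entry, so deleting or editing
    any record breaks the chain and is detectable during verification.
    """

    def __init__(self):
        self.entries = []

    def append(self, actor: str, event: str, detail: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "timestamp": time.time(),
            "actor": actor,   # e.g. "autonomy-module" or "operator-7"
            "event": event,   # e.g. "proposed", "approved", "aborted"
            "detail": detail,
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        """Recompute the chain; False means the log has been tampered with."""
        prev_hash = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

Such a log does not by itself keep a system from going rogue, but it gives human supervisors and independent auditors a verifiable record of who decided what, and when.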
Measures should also be taken to strengthen cybersecurity, using multi-layered security arrangements such as encryption, authentication, intrusion detection and emergency response mechanisms to thwart hacking and tampering attempts and, if need be, to quickly restore operations after a cyberattack or system failure.
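As a sketch of just one of those layers, message authentication, the following Python snippet shows how a command could be rejected if it has been tampered with in transit. It uses only Python's standard hmac, hashlib and secrets modules; the key handling and the command names are illustrative assumptions, not a real protocol.

```python
import hmac
import hashlib
import secrets

# In practice the shared key would come from a hardware security module or
# a key-management service; here it is generated in-process for illustration.
SECRET_KEY = secrets.token_bytes(32)


def sign_command(command: bytes, key: bytes = SECRET_KEY) -> bytes:
    """Attach an HMAC-SHA256 tag so the receiver can detect tampering."""
    tag = hmac.new(key, command, hashlib.sha256).digest()
    return tag + command


def verify_command(message: bytes, key: bytes = SECRET_KEY) -> bytes:
    """Return the command only if its tag is valid; raise otherwise.

    hmac.compare_digest runs in constant time, which avoids leaking
    information through timing side channels.
    """
    tag, command = message[:32], message[32:]
    expected = hmac.new(key, command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: command rejected")
    return command


# Usage: a tampered command is discarded before it reaches the system.
msg = sign_command(b"HOLD_FIRE")
assert verify_command(msg) == b"HOLD_FIRE"
tampered = msg[:-1] + bytes([msg[-1] ^ 1])  # flip one bit in transit
try:
    verify_command(tampered)
except ValueError:
    pass  # tampering detected, command discarded
```

Authentication of this kind is only one layer; it would sit alongside encryption, intrusion detection and the emergency response mechanisms mentioned above.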
In short, countries should engage in international cooperation and information sharing on AI's military applications to collectively overcome the technological challenges, while taking measures to ensure rules and regulations are not breached, because breaches could create chaos and lead to misjudgments and wrong decisions that could trigger conflicts. Global efforts should be aimed at reducing the risks of conflict and ensuring the control of AI's military applications remains in human hands.
The author is the director of the Laboratory of Human-Machine Interaction and Cognitive Engineering, Beijing University of Posts and Telecommunications.
The views don't necessarily represent those of China Daily.