Japanese academic: Some concerns with China's military AI advancements
A possible concern regarding the military use of AI in China, where the party controls the People's Liberation Army, is that political rationality may take precedence over military rationality. This could heighten the risk of accidental escalation, or leave control and safety insufficiently assured when such technology is used. Japanese academic Masaaki Yatsuzuka delves into the issue.
As various countries accelerate the military use of AI, one concern is the risks that arise when authoritarian states such as China deploy these technologies.
At the 19th National Congress of the Chinese Communist Party (CCP) in 2017, President Xi Jinping announced a policy of accelerating "military intelligentisation" with the goal of building a strong military. Since then, China's People's Liberation Army (PLA), researchers, and munitions companies have been actively discussing the military use of AI, and are working to update equipment and revamp military strategies.
The CCP appears to have an advantage over democracies in the military use of AI, in that it can mobilise the legacy of the socialist science and technology (S&T) system. Some statistics indicate that China already produces more AI research output than the US. However, the risks that China's political system and military strategic culture bring to the military use of AI remain unclear.
Balancing human-machine collaboration
The main effect of the military use of AI is to accelerate the tempo of military operations. If machine learning algorithms continue to improve, AI will surpass human cognitive abilities. It will quicken the pace of operations by supporting the entire process, including reconnaissance, detection, information processing, analysis, decision making, and targeting. Machine learning algorithms can also be used to further automate command and control (C2) processes.
The ultimate issue in the military use of AI is how to balance human-machine collaboration. In other words, the issue is to what extent AI should be involved in human strategic decision-making processes, and how much autonomy unmanned weapons should be given.
If AI can demonstrate its ability to outperform humans in complex military situations, military authorities will be more likely to trust the decisions generated by algorithms. The risk in this case is that human supervision of AI processing will be insufficient, and the sense of shared responsibility in human-machine collaboration may weaken.
The risk of military operational errors and accidents may increase as a result of reduced human vigilance. In particular, AI is said to be poor at interpreting signals sent by humans, especially de-escalation signals.
AI modifies power distribution
Balancing human-machine collaboration is a difficult issue for the CCP in terms of maintaining its political regime. In China, the CCP controls its army, the PLA, so political rationality takes precedence over military rationality.
Xi Jinping, who prioritises the security of the regime, has increased the number of political commissars in the Central Military Commission and emphasised political education within the military to strengthen the party's control over the armed forces.
As the military use of AI produces labour-saving weapons, the combat power in each individual soldier's hands tends to increase, and the PLA has warned that political education needs to be reinforced even further.
From the perspective of party control over the military, the CCP has no choice but to proceed with the military use of AI at the expense of gains in military rationality.
Win at all costs?
On the other hand, Xi Jinping's demand that the PLA build a strong military puts political pressure on military personnel. Officers risk their political careers if they fall short of the ambitious strong-military targets, which increases the risks involved in the military use of AI.
In this environment, PLA leaders seeking political achievements may push ahead with the introduction of AI into advanced weapons without fully resolving control and safety issues. AI is a force enabler for existing weapons such as cyber capabilities, hypersonic vehicles, precision-guided missiles, robotic weapons, and anti-submarine warfare.
Therefore, with the military use of AI, the offence-defence balance around national defence systems tends to shift in favour of the attacker. As a result, there is an incentive to incorporate immature, accident-prone AI into offensive weapons.
Unmanned aircraft operated by the PLA are appearing with increasing frequency in the East China Sea and the Taiwan Strait, and are expected to perform not only patrolling but also missile guidance and attack missions. If artificial general intelligence (AGI) is introduced into drones operating in surrounding waters, it will not only pose a threat to neighbouring countries but also increase the risk of unintended collisions.
Improving counter-AI capabilities
Besides balancing strong-military objectives against the military's allegiance to party authority, the PLA is also focusing on enhancing its counter-AI capabilities to thwart other countries' military use of AI.
The PLA will press ahead with acquiring defensive counter-AI capabilities, such as AI-based detection evasion and concealment technologies. It is also acquiring offensive counter-AI capabilities, such as data poisoning, which degrades an adversary's machine learning, and cyber attacks that add noise to the data used for inference. Such efforts would impose costs on other countries in the military use of AI, including an increased risk of misidentification and error.
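To make the data-poisoning idea concrete, the sketch below is a minimal, purely illustrative example (not drawn from the article or any real military system) of label-flipping poisoning against a simple classifier. The dataset, the logistic-regression model, and all parameters are hypothetical assumptions chosen only to show how corrupting training labels can degrade a learned model's accuracy.

```python
# Minimal illustrative sketch of label-flipping data poisoning.
# Everything here (data, model, attack rate) is a toy assumption.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=1000):
    # Two Gaussian clusters labelled 0 and 1.
    x0 = rng.normal(loc=-1.0, scale=1.0, size=(n // 2, 2))
    x1 = rng.normal(loc=+1.0, scale=1.0, size=(n // 2, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

def train_logreg(X, y, lr=0.1, epochs=200):
    # Plain logistic regression trained by gradient descent.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return np.mean((p > 0.5) == y)

X_train, y_train = make_data()
X_test, y_test = make_data()

# Clean baseline model.
w, b = train_logreg(X_train, y_train)
print("clean test accuracy:   ", accuracy(w, b, X_test, y_test))

# Poisoned model: flip 30% of the training labels before training.
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
w_p, b_p = train_logreg(X_train, y_poisoned)
print("poisoned test accuracy:", accuracy(w_p, b_p, X_test, y_test))
```

The evasion-style attacks mentioned above (adding noise to the data a deployed model sees at inference time) follow a similar logic, but they target the fielded system rather than its training pipeline; real-world attacks would face far tighter constraints than this toy case suggests.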
China's PLA will continue to pursue "military intelligentisation" to achieve Xi Jinping's military targets. Dialogue is needed to develop international norms before offensive AI weapons and counter-AI weapons have an irreversible impact on the international security system.