News | Jan. 27, 2025

Is the PLA Overestimating the Potential of Artificial Intelligence?

By Koichiro Takagi | Joint Force Quarterly 116



Chinese soldiers browse news on desktop computers at People’s Liberation Army garrison in Chongqing, China, November 14, 2013 (Imaginechina/Alamy)
Colonel Koichiro Takagi, Japan Ground Self-Defense Force, is Chief of the Defense Policy Section J5, Japan Joint Staff.

The Chinese People’s Liberation Army (PLA) is using artificial intelligence (AI) to build a world-class military. It describes the concept of using weapons systems based on AI as “intelligentization,” which has been the focus of China’s military reforms in recent years.1 At the 20th National Congress of the Chinese Communist Party (CCP) on October 16, 2022, Xi Jinping mentioned the word intelligentization three times and stated that he would more quickly raise the PLA to a world-class military.2 The concept of intelligentization was not mentioned at the 19th Congress in 2017. Chinese researchers argue that the PLA can overtake the U.S. military by using AI to intelligentize.3

The argument for the high potential of AI is not unique to the PLA. In May 2017, Deputy Secretary of Defense Robert Work stated that AI may change the nature of warfare.4 Does the PLA, like Work, believe that AI will fundamentally change even the nature of warfare? Or is the PLA overestimating the potential of AI? How does the PLA intend to use AI to strengthen its capabilities, and is this reform even feasible? Indeed, some U.S. experts suggest that Chinese theorists overlook the inherent vulnerabilities of AI and autonomous systems and overestimate their capabilities.5

Chinese President Xi Jinping talks with officers and soldiers during inspection of People’s Liberation Army division in Central Theater Command, January 3, 2018, in Baoding, Hebei, China (Xinhua/Alamy/Li Gang)

This study reviews PLA Daily articles on the PLA’s use of AI and explores how the PLA intends to use it. It then examines what the PLA believes the possibilities and limitations of the military use of AI to be. Is the PLA overlooking the problems and vulnerabilities of AI? Or is it deeply aware of those vulnerabilities and still betting on AI’s potential? By looking at the breadth and depth of these perspectives, it is possible to assess the viability of the PLA’s AI-focused military reforms.

This study surveys articles on military theory published in the PLA Daily, the official newspaper of the Central Military Commission of the CCP, examining all such articles published from January to December 2023, a total of about 370, following Xi’s mention of intelligentization at the CCP Congress in October 2022. The PLA Daily has a long history as an authoritative official newspaper and typically publishes four articles on military theory every Tuesday and Thursday.

How Will the PLA Use AI?

Of the approximately 370 articles published in the PLA Daily in 2023, 132 refer to intelligentization or the military use of AI, indicating how much of the PLA’s theoretical discussion is focused on the topic. Many of the articles pointed to four areas in which AI could be effectively applied:

  • situational awareness (10 articles)
  • military decisionmaking (40 articles)
  • unmanned weapons (27 articles)
  • cognitive domain operations (14 articles).6

In addition, some articles discussed military training, logistics, and weapons development using AI.

Situational Awareness. Situational awareness on the battlefield means understanding the situation of the enemy and friendly forces, the battlefield environment such as weather and terrain, and changes in the combat process, which is the basis for commanders’ decisionmaking. Sun Tzu stated that if you know your enemy and know yourself, you will not be in danger in a hundred battles. Correct situational awareness is an extremely important factor in winning a battle.

The information-gathering systems used in modern warfare include satellites, manned and unmanned vehicles, and ground-based and undersea radars and sensors, and both their variety and their numbers are increasing. Modern warfare also uses open-source data collected from the Internet and other sources. Furthermore, the volume of data sent from each of these information-gathering devices is expanding—for example, as the resolution of images taken by satellites has dramatically improved.

AI is indispensable to properly process such enormous amounts of data. AI has an overwhelming advantage over humans in terms of memory and computational power. It therefore provides a reliable, highly accurate, and fast source of information for complex battlefield situational awareness today and in the future.7

Until now, data overload—the existence of large amounts of data that could not be processed—has been a problem in battlefield situational awareness. AI can mitigate data overload and enhance information-processing capabilities.8 As technology advances, future wars will occur simultaneously in multiple domains, and AI can enhance human cognitive abilities by collecting, integrating, and analyzing complex battlefield data. In addition, the powerful linguistic analysis capabilities of large language models (LLMs), such as ChatGPT, can extract useful information in real time from open-source information.9

Military Decisionmaking. Many articles argue that AI can be expected to accelerate commanders’ decisionmaking.10 Rapid decisionmaking using AI can enable rapid action on the battlefield and seize the initiative.11 Some articles suggest that AI can also be useful in the complex environment of modern warfare by controlling and targeting firepower quickly, accurately, flexibly, and adaptively.12 LLMs such as ChatGPT can also be used for basic tasks, such as data analysis and natural language processing, to assist commanders and contribute to qualitative improvements in decisionmaking.13

Other articles discuss the use of AI to predict battlefield conditions in supporting human decisionmaking.14 AI can predict the success rate and effectiveness of the actions chosen by the commander in battle and suggest modifications and improvements to high-risk and low-effectiveness actions, thereby ensuring that the objectives in battle are achieved. In addition, AI can grasp real-time data on the status of each combat unit and weapon, accurately predict battlefield conditions, and suggest timely adjustments to the deployment of units.15 However, some articles pointed out that current AI has limitations and cannot completely replace humans.16 Therefore, what can be expected from current AI is to assist in the creation, simulation, and optimization of operational plans.17

Even though AI has limitations at present, some consider that AI-based decisionmaking will develop in phases.18 In the first phase, AI will be used to improve the quality of human decisionmaking. For example, deep learning will be carried out using LLMs fed a large corpus of historical war cases. During the precombat mission analysis process, commanders can ask the model about similar wars in history.19

In the second phase, AI will be applied to the entire operational planning process. This means that LLMs generate reconnaissance plans, firing plans, and even entire operational plans by inputting the commander’s guidance. In the third phase, AI will be applied throughout the entire process from planning to combat implementation. In this phase, AI will dominate battlefield decisionmaking.20

Some expect AI eventually to reach the third phase.21 AI has scored higher than 90 percent of examinees on the U.S. Bar Exam and 99 percent of examinees on the USA Biology Olympiad, and it achieved nearly perfect scores on the verbal section of the Graduate Record Examination. Similarly, there is a good chance that AI could surpass the level of most commanders.22

In the future, moreover, a situation could arise in which only AI can defeat such AI.23 AlphaGo beat humans through deep learning on millions of human-versus-human games. AlphaZero, by contrast, beat AlphaGo by co-evolving through millions of system-versus-system matches without learning any human knowledge. Similarly, AI may come to predict future wars without learning from human experience of warfare.24 In other words, future AI will be able to infer the potential intentions and possible actions of the enemy from complex and changing battlefield situations.
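
To make the self-play idea concrete, the following minimal sketch (an illustrative stand-in, not AlphaZero; the game, parameters, and update rule are all hypothetical) shows two agents learning rock-paper-scissors purely by playing each other, with no human game records, using fictitious play:

```python
import random

# Two agents co-evolve with no human data: each plays the best response to
# the move frequencies it has observed from its opponent (fictitious play).
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
COUNTER = {loser: winner for winner, loser in BEATS.items()}  # move that beats a move

def best_response(opponent_counts):
    likely = max(opponent_counts, key=opponent_counts.get)
    return COUNTER[likely]

obs_of_b = {m: 1 for m in MOVES}   # A's running model of B's play
obs_of_a = {m: 1 for m in MOVES}   # B's running model of A's play

for _ in range(10_000):
    # A small exploration rate keeps the dynamics from locking into cycles.
    move_a = best_response(obs_of_b) if random.random() > 0.1 else random.choice(MOVES)
    move_b = best_response(obs_of_a) if random.random() > 0.1 else random.choice(MOVES)
    obs_of_b[move_b] += 1          # A observes B's move, and vice versa
    obs_of_a[move_a] += 1

print(obs_of_a, obs_of_b)  # empirical frequencies drift toward the mixed equilibrium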

However, in today’s complex battlefields, the number of nodes in the battlefield information network is enormous, and countless interactions occur. Hence, some point out that achieving a learning rate of a few minutes or even seconds to keep up with the tempo of the battlefield would require enormous computational power, and only quantum computing would meet this requirement.25

Unmanned Weapons. Many articles point to the use of more unmanned systems in future warfare. For example, multilegged robots, including quadrupeds, and bipedal humanoid robots have been used to assist soldiers on the battlefield. Unlike humans, these robots have no physiological limitations, are not affected by emotional factors, can adapt to dangerous environments, and can work long hours without sleep or rest.26 Unmanned weapons are generally smaller and stealthier than manned weapons, making them more suitable for surprise attacks. With the addition of AI, these weapons can act flexibly and autonomously in complex battlefield environments, enabling surprise attacks across a wider range of missions.27 Unmanned weapons acting autonomously are also capable of penetrating deep into enemy territory to attack key targets.28

Unmanned weapons can operate for long periods in areas where manned weapons cannot, such as at high altitudes and in deep waters. Deep-sea combat by unmanned weapons would overturn the rules of conventional naval warfare.29 Using autonomously controlled groups of unmanned weapons to control the deep sea, and then attacking the surface from below, could create a new method of battle that allows an inferior force to challenge a superior force at sea.30 Unmanned weapons also save human resources and are useful for monitoring and protecting China’s long borders.31 Some articles suggest that AI improves the accuracy and speed of precision strikes because it improves target detection, accurate targeting, and weapons guidance techniques.32

Furthermore, the PLA is looking at operating numerous unmanned weapons in groups. Even though their individual combat capability is limited, a network of AI-equipped unmanned weapons with different functions can aggregate their individual intelligence into a collective intelligence.33 This produces decentralized, self-organizing behavior based on the interaction of individuals, like biological groups such as a flock of birds or a colony of bees. This collective intelligence could then give rise to emergent phenomena in the combat system, forming a distributed intelligence that no single AI could ever possess.34
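
The flocking analogy can be illustrated with a classic boids-style sketch (a hypothetical toy, not any PLA system): each agent applies only three local rules (cohesion, separation, and alignment) toward nearby neighbors, yet coherent group movement emerges with no central controller.

```python
import random

class Agent:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(agents, radius=10.0):
    for a in agents:
        neighbors = [b for b in agents
                     if b is not a and (b.x - a.x) ** 2 + (b.y - a.y) ** 2 < radius ** 2]
        if not neighbors:
            continue
        n = len(neighbors)
        cx = sum(b.x for b in neighbors) / n - a.x      # cohesion: steer toward neighbors' center
        cy = sum(b.y for b in neighbors) / n - a.y
        sx = sum(a.x - b.x for b in neighbors)          # separation: steer away from crowding
        sy = sum(a.y - b.y for b in neighbors)
        ax = sum(b.vx for b in neighbors) / n - a.vx    # alignment: match neighbors' velocity
        ay = sum(b.vy for b in neighbors) / n - a.vy
        a.vx += 0.01 * cx + 0.05 * sx + 0.05 * ax
        a.vy += 0.01 * cy + 0.05 * sy + 0.05 * ay
    for a in agents:
        a.x += a.vx
        a.y += a.vy

# No agent knows the group's shape, yet a flock forms from local rules alone.
agents = [Agent() for _ in range(50)]
for _ in range(100):
    step(agents)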

On the other hand, some point out that future warfare will include not only regular warfare using high-tech weapons but also irregular warfare using low-tech weapons. Therefore, close coordination between unmanned and manned operations will be important.35 Other articles argue that in high-intensity regular warfare, a combination of manned and unmanned weapons is useful for complementing the shortcomings of each.36

Some articles discuss the dangers of unmanned weapons going out of control. One article insists on the need to prevent flaws in design and manufacture and points to the danger of completely divorcing a weapon from human control. It also warns that advanced technology, including AI, is a double-edged sword and must be used with its advantages and disadvantages in mind.

Cognitive Domain Operations. The PLA holds that cognition includes human knowledge, experience, consciousness, emotions, and psychology, and that the target of cognitive domain operations is the human being, who plays the leading role in war.37 In the information age, Internet media, radio, television, newspapers, and magazines are used to influence human cognition instead of violence. Furthermore, in the age of intelligence, where AI is used, advanced technologies such as brain-computer interfaces have been developed.38 Social media has also become one of the major battlegrounds of cognitive domain operations.39

Many articles present similar methods for cognitive domain operations.40 Malicious narratives and disinformation can work on the emotions of the target population to induce a desired shift in people’s values as well as to sway public opinion and control behavior.41 Cognitive domain operations also manipulate recommendation algorithms to push deepfakes onto the target population’s devices, creating psychological and emotional internal conflicts. In addition, the method uses big data to analyze the cognition of social groups in order to control and guide their consciousness.42

Accurate targeting is an important element in cognitive domain operations.43 Information in cognitive domain operations is equivalent to ammunition in physical domain operations; by analyzing large amounts of data on the Internet, it is possible to design effective ammunition matched to the target.44 Furthermore, cognitive domain operations can use the same methods that Western companies use to analyze customers’ Internet browsing histories and shopping records and to serve advertisements matched to customers’ needs.45 Cognitive domain operations identify the characteristics of the target accurately and attack according to those characteristics to influence it continuously. Here, LLMs such as ChatGPT are useful for analyzing public opinion, extracting important information, and creating disinformation based on that information.46

AI-Based Military Training, Logistics, and Weapons Development

Many articles point out that AI will improve the efficiency of training by enabling rational and effective training planning as well as enriching training content.47 AI can assist in analyzing the training status of units and reduce the manual work required to create training plans, making them timelier and more accurate.48 In addition, training equipment that uses AI can construct a more realistic virtual battlefield environment in which to train troops. By collecting and analyzing large amounts of training data, it is possible to effectively manage the progress and quality of training.49 LLMs such as ChatGPT will be useful in improving the effectiveness of training: fed training data, a model can accumulate training experiences and share them among units, and it can automatically generate concrete training tasks and scenarios.50

Some PLA articles point out that AI can improve the efficiency of logistical support operations. With the development of science and technology, the battlespace has greatly expanded, and many combat forces are dispersed when deployed, making logistical support more diverse and complex.51 Under these circumstances, deep learning on data such as the quantity of supplies in storage, actual usage, delivery routes, and delivery units can automatically predict demand for supplies and generate optimal delivery plans, as the sketch below illustrates.52 Similarly, AI can make the maintenance of weapons more efficient.53 Given that logistics is a science of computation, and that the more sophisticated the computation the more efficient and effective it becomes, applying AI to logistics would have great benefits.54 Some articles in the PLA Daily also suggest that AI could be useful in weapons development.55 LLMs can generate control code for robots, which can make the production of weapons more accurate and efficient, reduce labor costs, and improve the efficiency of weapons research and development.
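
As a concrete illustration of the demand-prediction idea, the following minimal sketch substitutes a simple moving average for the deep-learning model the articles envision; the unit names, stock figures, and safety level are all hypothetical:

```python
# Predict next-period consumption per supported unit from recent usage,
# then generate a resupply plan that tops each unit up to a safety level.
usage_history = {                      # illustrative daily consumption records
    "unit_alpha": [120, 135, 128, 140, 150],
    "unit_bravo": [60, 58, 65, 70, 68],
}
stock_on_hand = {"unit_alpha": 200, "unit_bravo": 90}
SAFETY_DAYS = 3                        # keep three days of predicted demand on hand

def forecast(history, window=3):
    recent = history[-window:]         # moving average as a stand-in predictor
    return sum(recent) / len(recent)

def resupply_plan():
    plan = {}
    for unit, history in usage_history.items():
        daily = forecast(history)
        shortfall = daily * SAFETY_DAYS - stock_on_hand[unit]
        if shortfall > 0:
            plan[unit] = round(shortfall)
    return plan

print(resupply_plan())  # {'unit_alpha': 218, 'unit_bravo': 113}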

Chinese President Xi Jinping listens to military history presentation during visit to Second Artillery Corps military base in Kunming, Yunnan Province, January 21, 2015 (Xinhua/Alamy/Li Gang)

PLA’s View on the Potential and Limitations of AI

Dangers and Limitations of AI. Many of the reviewed PLA Daily articles make a broad range of points about the dangers of the military use of AI. There is a risk of unmanned weapons losing control and acting contrary to the human operator’s intentions, and a risk of malfunction if there is accidental or deliberate bias in the AI’s training data. Unmanned weapons increase the risk of unintended first strikes in war. The speed of military operations can outpace the speed of political decisionmakers, causing crises to spiral out of control and resulting in unintended escalation. Violent conflicts may become more frequent and violate the principles of the ethics of war and the provisions of the international laws of war.56

Some articles discuss the limitations of LLMs such as ChatGPT.57 The essence of ChatGPT is deep learning, which requires huge amounts of high-quality training data. It cannot understand its own errors, and it cannot be updated in real time. ChatGPT works only by predicting the next word according to the probability distribution of its training set, relying on vast training data and powerful computing power. The peculiarities of military operations therefore limit its application.58 In military fields that are not part of the training data set, ChatGPT can give only random answers. Furthermore, the risk exists that an adversary could carry out malicious attacks, such as data contamination.
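
The next-word mechanism described above can be illustrated with a toy bigram model (a hypothetical table, orders of magnitude simpler than a real LLM): generation repeatedly samples the next word from a learned conditional distribution, and outside the training data the model has no learned distribution to draw on.

```python
import random

model = {  # P(next word | current word), "learned" from imaginary training text
    "the":   {"enemy": 0.5, "plan": 0.5},
    "enemy": {"advances": 0.7, "retreats": 0.3},
    "plan":  {"fails": 0.6, "succeeds": 0.4},
}
vocabulary = sorted({w for dist in model.values() for w in dist})

def generate(word, max_len=5):
    out = [word]
    for _ in range(max_len):
        dist = model.get(word)
        if dist is None:
            # Outside the training data there is no learned distribution,
            # so the toy model can only guess at random ("random answers").
            word = random.choice(vocabulary)
        else:
            words, probs = zip(*dist.items())
            word = random.choices(words, weights=probs)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the enemy advances ..."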

There are also arguments about vulnerability stemming from AI’s reliance on data.59 Machine learning requires large amounts of labeled data, but in military operations, the large volumes of information obtained from reconnaissance and intercepted communications are unlabeled.60 In addition, adversary deception and camouflage produce a mixture of true and false data whose accuracy is difficult to verify.61 Moreover, in the complex battlefield environment, large amounts of anomalous data can arise.62 Similarly, the complex interactions of multidomain operations can distort and corrupt data.63 Such incomplete data are difficult to interpret and lead AI to mislearn.
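
A brief sketch can illustrate this mislearning problem, assuming a simple 1-nearest-neighbor classifier in place of a military AI system: when a fraction of training labels is flipped, as adversary deception and camouflage would do, accuracy degrades roughly in proportion.

```python
import random

random.seed(0)

def make_data(n, flip_rate):
    data = []
    for _ in range(n):
        x = random.uniform(0, 1)
        label = 1 if x > 0.5 else 0       # the true underlying rule
        if random.random() < flip_rate:   # deception: corrupt the label
            label = 1 - label
        data.append((x, label))
    return data

def predict(train, x):
    nearest = min(train, key=lambda p: abs(p[0] - x))
    return nearest[1]                     # copy the nearest training label

test = [(random.uniform(0, 1),) for _ in range(1000)]
for flip_rate in (0.0, 0.2, 0.4):
    train = make_data(500, flip_rate)
    correct = sum(predict(train, x) == (1 if x > 0.5 else 0) for (x,) in test)
    print(f"label noise {flip_rate:.0%}: accuracy {correct / 1000:.0%}")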

Current deep learning and reinforcement learning methods are essentially data-driven algorithms that seek correlations among heterogeneous data. As such, there are inherent limitations in deriving correlations and regularities from incomplete and error-prone data derived from complex military operations.64 This will introduce new complexities in future warfare using AI.65 Deep learning and machine learning methods that have proved successful based on vast amounts of data in the civilian sector, such as Internet data analysis, speech recognition, image recognition, and autonomous driving, have limitations in military operations.66

Some argue that AI is dangerous as it does not have a human-like consciousness, does not understand the laws and rules of war, and has no ethics, morality, or empathy.67 In complex battlefield environments, there is a fear that the system could get out of control and indiscriminately kill innocent people. For this reason, people should pay more attention to the legal and ethical issues posed by AI and regulate its behavior through a human-in-the-loop approach.68

Ultimately, based on current computer technology, there are limits to the development of AI. Therefore, until the technological singularity arrives, AI can continue to approach, but not surpass, human intelligence. For the foreseeable future, humans will remain the supreme rulers of the battlefield.69 In other words, humans will always remain the decisive factor in winning and losing wars.70

The Potential of AI. Recognizing the wide range of problems mentioned, why does the PLA believe that it can achieve AI-enabled military reform? The answer is that it believes AI will bring about new forms of warfare.

Current AI is simply a set of neural network parameters. It is not conscious and cannot understand the motivations and causal relationships behind the conclusions it draws. However, if the amount of data and the number of neurons are sufficiently large and the model sufficiently complex, quantitative changes may produce qualitative changes, causing the system to exhibit seemingly magical emergent functionality.71 The fact that ChatGPT, trained on orders of magnitude more data than previous AI, responds as if it were human is ample evidence of this. It is therefore necessary to think about the AI that will bring about new forms of warfare rather than looking only at current AI and downplaying its potential.72

In future wars, AI will control unmanned weapons to fight autonomously and will find and attack weaknesses in the enemy that humans may not notice.73 These wars will employ novel tactics and will be complex, fast-paced, and contrary to human common sense.74 AI will then upgrade itself at high speed, creating situations that are difficult to explain with common sense and existing theories of warfare.75

Behind the many references to new forms of warfare is the idea that first-class armies design wars and second-class armies respond to wars.76 From this perspective, designing the forms of war is the condition for being a world-class army. Designing forms of warfare requires foresight and asymmetry in identifying the enemy’s weaknesses as well as playing to its own strengths.77

No system can solve all its problems from within. Radical solutions can be found only by venturing outside the system and seeking a higher, broader vantage point.78 In early 20th-century London, horse-drawn carriages were the main means of transport. With 300,000 horses on the streets, the piles of horse manure lining the roads were a major social problem. This seemingly insoluble problem lost its meaning when the horse-drawn carriage disappeared from history with the advent of the motor car. Many of the problems of conventional warfare may likewise lose their meaning with the advent of new forms of warfare that utilize AI.79

Conclusions

The PLA is deeply aware of the dangers and limitations of AI. It extensively discusses the risks of unmanned weapons losing control, malfunction due to biased training data, unintended preemptive strikes and escalation, and violations of the ethics of war. These points are similar to those made in U.S. articles discussing the dangers and limitations of AI.80 The PLA is aware of the problems with AI, and it is also aware that its AI-centered military reform could be based on an overestimation of the new technology.

Military theories that overestimated the potential of new technologies have often failed. For example, in the 1920s, a radical theory appeared that overestimated the potential of aircraft. Strategists, led by the Italian Giulio Douhet, argued that strategic bombing could terrorize enemy civilians and that aircraft alone could achieve strategic effect by subduing the enemy’s will. However, in World War II, no country surrendered on the basis of strategic bombing alone, and means of intercepting enemy aircraft, such as antiaircraft guns and radar, were developed. Blinded by the potential of the new weapon, proponents of the extreme theory that it alone could win a war were proved wrong.

The failure of such theories stems from their overestimation of the potential of new technology and their insistence that new technology alone could have a strategic impact decisive enough to win or lose a war. Warfare is based on the complex interplay of myriad elements, and no single new technology can have an unlimited impact on its outcome. Theories that downplay these complexities and place too much faith in a single new technology will fail.

The PLA’s vision of the military use of AI can be summed up in the phrase “first-class armies design war, second-class armies respond to war.” Chinese theorists point out that AI will bring new forms of warfare that will not be governed by conventional wisdom and rules. It seems that the PLA, by designing this new warfare, intends to defeat the U.S. military under rules of its own making.

Chinese-made CH-5 (Caihong-5) reconnaissance and combat drone and its compatible missiles on display during 11th China International Aviation and Aerospace Exhibition in Zhuhai, Guangdong Province, November 1, 2016 (Imaginechina/Alamy)

An army that sets new rules can achieve overwhelming victory. Germany’s swift victory over France at the beginning of World War II is an example of this logic. The reason for the victory was Germany’s innovative military theory of blitzkrieg, one of whose core technologies was the tank.81 The French had more tanks, and they performed better than the German ones.82 However, French military theory, which had not changed since World War I, treated tanks as support weapons for infantry. The French were unable to respond to the blitzkrieg assault of German armored divisions that drove their massed tanks through the Ardennes Forest.

Similarly, an AI-enabled PLA might breach the future Ardennes of the United States and its allies based on newly designed rules. The potential for this is unknown. However, it should not be forgotten that Germany developed the theory of blitzkrieg warfare in the 1920s when tank technology was still in its infancy. The United States and its allies must closely scrutinize the potential of AI and develop a military theory that incorporates a wide range of its advantages. At the same time, China’s discussions on the military use of AI must be closely monitored. JFQ