Frank G. Hoffman is a Distinguished Research Fellow in the Center for Strategic Research, Institute for National Strategic Studies, at the National Defense University.
The rollout of ChatGPT by OpenAI in late 2022 caused a storm of controversy. The new software created seemingly authentic and detailed answers to queries, generated passable drafts of student essays, and even managed to pass a graduate business school exam at Wharton. But some of the chatbot’s responses were also inaccurate, inappropriate, and deeply flawed. The updated version, GPT-4, released in March 2023, did little to alleviate concerns about how far and how fast this technology could take us.
Here again, the rapidly developing field of artificial intelligence (AI) has produced a spate of spurious claims and serious concerns. Given the purported progress being made in computational intelligence, it is imperative that the Armed Forces understand what AI can and cannot do within our professional sphere. There is little doubt that AI will bring about profound changes in the conduct of warfare, and equally little agreement on just what those changes will be.
Two recent books, Four Battlegrounds and I, Warbot, will help readers sort out the hype from the hysteria. Both address the state of the art of today’s AI and machine-learning technology with interesting anecdotes and insights drawn from intensive interaction with leading laboratories, critics, and scientists around the globe. Most importantly, they underscore what we should be wary of when incorporating AI into military institutions and operational practice.
Four Battlegrounds, penned by Paul Scharre from the Center for a New American Security, blends a pragmatic approach born of his days as an Army infantryman with the perspective of a veteran Pentagon policy wonk. This is Scharre’s second major work on the topic. His initial book, Army of None: Autonomous Weapons and the Future of War (W.W. Norton, 2018), was widely acclaimed. That volume zeroed in on the ethical implications of autonomous weapons. Scharre’s grasp of those implications was sobering, even if his outlook on international norms and arms control was overly hopeful. It was an impressive effort that this reviewer did not think would be soon surpassed.
Four Battlegrounds proved that wrong in short order. Scharre covers the exciting advances of the last 5 years in an accessible style. He has produced a well-balanced and detailed assessment of the state of the art and a useful critique of just how fast and far the Pentagon is moving. Overall, the author takes a prudent approach when it comes to AI’s dramatic potential.
His title is drawn from four key considerations that will determine the pace and scale of our ability to leverage AI productively. Success will require progress in each of the four “battlegrounds”: data, computing power, human talent, and institutions. Data is the fuel of the AI revolution: large-scale models are now trained on massive amounts of data, hoovered up and stored to feed their algorithms. Powerful computers with ever more sophisticated chips are coming online, but the fabrication of these slivers of silicon depends on a fragile production chain. And while silicon wafers constitute a critical element of the cyber ecosystem, the most precious asset is human talent. Scharre argues that developing human capital should be a higher priority for U.S. strategy, reinforcing a point made by the National Security Commission on Artificial Intelligence.
The final battleground, institutions, is also the area most ripe for reform. Ultimately, it is not the technology itself that will determine success; success depends on institutions that adapt their processes, metrics, and structures to best apply AI and machine learning. The author suggests we have a long way to go if we want to move past hardware and platforms and accept data and algorithms as units of combat power. More critically, he excoriates the bureaucratic processes that retard agile development of AI capabilities. The greatest barrier to adoption is not computing power or creative new algorithms; it is the government’s own acquisition bureaucracy and red tape. An encrusted system designed to eradicate risk and curtail budgetary fraud extends the proverbial “valley of death” for startup companies, strangling them in the cradle as they try to scale up. To Scharre, the government’s own system is more lethal to our success at innovation than any “pacing threat.”
Scharre warns that “If the United States moves too slowly it could cede military dominance in a critical new technology to a rising and revisionist China” (6). At the same time, the clearest message in Four Battlegrounds is a warning: We should not let the fear of falling behind leading countries alter our risk tolerance about “the appropriate balance between fielding new AI systems and ensuring that these systems are robust and reliable” (257).
I, Warbot is more philosophical but no less insightful. The author, Kenneth Payne, works within the United Kingdom’s defense establishment and previously penned an intriguing book on AI’s impact on strategy titled Strategy, Evolution, and War: From Apes to Artificial Intelligence (Georgetown University Press, 2018). He brings a unique perspective on the nexus of psychology and strategy, a valuable lens for seeing the benefits and barriers of employing AI and machine learning in the military.
Payne explores the creative capacity of AI programs with a typology of three different kinds of creativity. He finds that AI supports only the first two types: exploratory and combinatorial. In these two forms, algorithms examine patterns and assess probabilities from existing data. This is the kind of creativity exhibited by the winning poker-playing program Libratus or the earlier AlphaGo program that convincingly beat a world champion Go player. Where computers and AI systems fall short is in the third category—transformative creativity. This is the kind of intelligence needed when facing a novel problem or when an old problem requires solutions that have not yet been conceived. Such situations demand imagination, not just predictive computation. As Payne stresses, AI programs may be tactically brilliant in the narrow tasks each is designed for, but they cannot connect dots or “understand” a novel situation they have not been programmed for or given a data set to learn from.
Both authors promote human-machine teaming over the overdramatized fascination with autonomous systems. “The most effective military systems,” Scharre concludes, “will be those that successfully combine human and machine decisionmaking and the most effective militaries will be those that find ways to optimally employ human-machine teaming” (264). Payne readily agrees at the tactical level but argues that machines will be strategically naïve due to their lack of empathy and transformational imagination.
The best and most challenging chapter in I, Warbot deals with human-machine teaming. Payne goes deep into the potential of centaur teams, which combine human decisionmakers and AI support systems, a concept initially advanced by Garry Kasparov (the Russian chess grandmaster who famously lost a match to IBM’s Deep Blue a quarter-century ago). Payne recounts several experiments and government wargames in which such centaur teams engaged in strategic interactions. His speculations are not conclusive, but he suggests that pairing humans with AI systems may produce synergistic advantages along with some detracting interactions. He aptly perceives that the dramatic acceleration of tactical activity in war can be matched only at machine speeds, yet he is hopeful that AI can aid strategy formulation, since strategy allows more time for collaborative and creative deliberation between senior leaders and the support systems that augment them. But that interaction may be inhibited if human commanders (the source of curiosity, intuition, and transformational creativity) prove reluctant to accept or engage productively with an AI system. This, he argues, warrants far more study. For now, and in the immediate future, “Warbots will make incredible combatants, but limited strategists” (181).
This engaging pair of authors reach common conclusions. Both suggest that the introduction of autonomous systems is unlikely to change the nature of war. It is axiomatic to the U.S. military that war’s essential nature is immutable, while the character of warfare (how war is conducted) is always changing. Scharre notes that the increased reliance on drones, uncrewed systems, and swarms reduces the role of humans at some levels of war. Yet humans will still initiate war, set out the policy aims, develop strategies, employ machines, make decisions, and even fight. Not surprisingly, Payne agrees. He does not envision the human element of war disappearing any time soon. “Even if machines make more decisions at the tactical level,” Payne concludes, “war will remain something that is done by and to humans” (84).
Four Battlegrounds and I, Warbot are each outstanding, and together they offer complementary insights. Both authors raise the kind of hard questions and uncomfortable issues that we must face as this technology evolves. Both books will improve readers’ AI literacy and deepen their critical thinking about how we approach AI in our respective domains. Accordingly, both are highly recommended to the joint community and the larger strategic studies field on both sides of the Atlantic. AI-enabled support holds huge potential benefits for training for war, for the conduct of warfighting, and for many support functions, including intelligence, logistics, and cybersecurity. But real progress will be made only by employing AI responsibly, with rigorous attention to validation and a healthy appreciation for how brittle the technology remains today. This pair of books offers a valuable guide to the revolution that will increasingly define our economies and security in the coming years. JFQ