
The Digital Maginot Line: Autonomous Warfare and Strategic Incoherence

By Michael P. Ferguson | PRISM Vol. 8, No. 2

Miles of tunnels make up the underground structure of the Maginot Line, built by France after World War I and shown here in 2010. The Germans bypassed the Line—then arguably the world’s most advanced fortification—in 1940. (Herald Post/David Walker)


Captain Michael P. Ferguson, USA, is an intelligence officer assigned to the U.S. Army Intelligence Center of Excellence. At the time of writing, he was aide-de-camp to the deputy chief of staff, operations and intelligence, Allied Joint Force Command Brunssum, the Netherlands.

As much guidance on the future is provided by the unending wars of sub-Saharan Africa as by the promise of artificial intelligence.

—Lawrence Freedman1

Driving south along the picturesque German–Belgian border, it is hard to miss the endless rows of moss-covered dragon’s teeth built a century ago to impede tank movements in the area. By the time you reach France, to your east continues what remains of the Siegfried Line or, as the Germans called it, the West Wall. But to your west are the remnants of the once-great Maginot Line. Constructed along France’s eastern border after World War I, the Maginot Line was a sprawling network of interlocking bunkers and physical obstacles believed to be the panacea for German military aggression. The concept of a “continuous front” came to define “the shape of future warfare” after the Great War, and the theory consumed French defense thinking.2 With amenities such as climate control, poison gas–proof ventilation systems, vast stores of food and fuel, and electric trains that whisked soldiers to their battle positions, forts along the line offered troops all the comforts of being in the rear, while bestowing upon the French people a resolute sense of protection.3

When German armies bypassed the Maginot Line during the blitzkrieg of 1940, it took all of Europe—including Germany—by surprise and turned the costly 20th-century defense marvel into a boondoggle.4 Contrary to reasonable assumptions swirling about the French ministry of defense, German forces chose the most unthinkable course of action, penetrating the densely wooded Ardennes forest to the north with armored divisions and alarming ferocity. The hard-learned lessons that followed proved that military assumptions can be catastrophic when coupled with a faith in what Hew Strachan calls strategic materialism—the belief that strategy should revolve around things rather than people—and that the only way to avoid such catastrophe is to challenge those assumptions mercilessly.5

Almost 80 years later, there could be significant value in exploring two questions. First, what would a Maginot Line look like in the Third Offset era of robotics and autonomous systems (RAS) and militarized artificial intelligence (AI)? Second, and by extension, what is the potential for that line to be blindsided by a modern blitzkrieg?

Any search for answers must begin by addressing candidly the myriad technical concerns that RAS and AI have raised thus far. Next, it is necessary to approach the challenge from a perspective that examines the potential—or lack thereof—for RAS and AI to serve strategic interests in the event deterrence fails. The historical evolution of technological means as a form of protection within the conceptual framework of strategic theory will assist in this regard. Finally, we will examine the broader social and cultural implications of a reliance on RAS and AI as tools for shaping the strategic environment before drawing conclusions.

Fundamentally, the great expectation of the information age is that the United States and its allies can ensure deterrence and, if need be, achieve the inherently human ends in war through ways and means that are increasingly less human.6 Fortunately, because this assumption is not without historical precedent, there are tools available with which one may pry apart this problem and disrupt the foundations of a 21st-century Maginot Line—one built not with brick and mortar but with algorithms and assumptions.

The Situation

Military professionals should be particularly skeptical of ideas and concepts that divorce war from its political nature and promise fast, cheap, and efficient victories through the application of advanced military technologies.

—LTG H.R. McMaster, USA (ret.)7

A November 2018 report prepared by the Congressional Research Service (CRS) concerning the integration of RAS and AI into ground units opened with a bold statement: “The nexus of [RAS] and [AI] has the potential to change the nature of warfare.”8 While these technologies certainly have a place on future battlefields, many would beg to differ with that statement, including reputable strategists and military thinkers such as H.R. McMaster, Lawrence Freedman, and Colin Gray, to name a few.9 It is crucial to understand the origins of this consensus on the future and to develop frameworks for interpreting these diverging views on war, so that the tactical capabilities of the joint force remain aligned with its strategic objectives.

The CRS report is overwhelmingly positive, focusing on the projected ability of AI to lower casualty rates, enhance troop protection measures, and improve the speed and accuracy of targeting and decisionmaking in war. Conspicuously absent from the CRS report is any indication of potentially catastrophic hazards associated with the premature or overenthusiastic integration of these systems into ground units, with the exception of some legal and personnel-related issues (which we will not address here, as others have already done so elsewhere).10

Five months before the CRS report, the Department of Defense announced the establishment of the Joint Artificial Intelligence Center (JAIC) to synchronize AI integration efforts and attract top talent in the discipline. The U.S. Army Futures Command, headquartered in the growing tech hub of Austin, Texas, will no doubt work closely with the JAIC on this effort.

Although the JAIC was directed to stand up in June 2018 by then-Deputy Defense Secretary Patrick Shanahan, its founding is due in part to former Deputy Defense Secretary Robert Work. In addition to leading the Barack Obama Administration’s Third Offset strategy, Work spearheaded much of the Pentagon’s research into AI during his tenure there.11 The controversial defense program that seeks to merge AI with unmanned aircraft, known as Project Maven, served as the conceptual predecessor to the JAIC, according to its director, Defense Chief Information Officer Dana Deasy.12 It should come as no surprise, then, that the research, development, and rapid implementation of AI-enabled defense systems have emerged as a priority in everything from the U.S. National Defense Strategy of 2018 to reports from The Hague Centre for Strategic Studies.13

Despite the measured public statements of most senior defense officials discussing AI integration, a mounting consensus holds, as the CRS report and others like it proclaim, that nonhuman means will dominate future wars.14 As defense initiatives give way to ever more impressive machines with military application, the urge to flood or replace ground assets with emerging tech could become palpable.

But regardless of the tools available, strategy remains a decidedly human enterprise that is beholden to the paradigm of ends, ways, and means, all guided by the widely held and often cultural assumptions of the time.15 Consequently, great blunders in political-military planning are often the result of erroneous but reigning assumptions pertaining to the strategic utility of embryonic technology and the lengths to which an adversary will go in pursuit of his ends.16 Nested within these assumptions is the stubborn belief that the means of new “things” can fill the gaps in human ways and still realize the strategic ends in war. Lawrence Freedman explains:

Thus while the weapons demonstrated the possibility of attacks of ever-greater complexity, precision, and speed over ever-greater distances, with reduced risks to the operators, they did not answer the question of exactly what was being achieved.17

Many such weapons produce effects that are immediate, fixated more on risk reduction and protection of the operator than on linking operational art to strategic objectives. Numerous observers have surmised that the tactical nature of drone strikes, for instance, and the temptation to acquire instant gratification from them often come at the cost of strategic ends.18

The overwhelming focus on these devices, such as autonomous drones or unmanned tanks, is puzzling in an environment where the joint force already struggles to transform its many tactical victories into strategic success.19 Nevertheless, a powerful consensus exists that is dragging the Western world toward a reality that takes comfort behind a digital Maginot Line. Inherent in these assumptions is the risk of molding an ever-more technologically reliant force that is increasingly susceptible to compromise and less likely to link its nonhuman tactical and operational activities to its human strategic objectives.

The Skeletons in AI’s Closet

At its core, the dialectic between man and machine in combat is an enduring and philosophical one that rests upon the perception of war as something to be ultimately mastered through science and technology or merely negotiated through art and human agency. One conjures images of fiery debates between a young Carl von Clausewitz and Adam Heinrich Dietrich von Bülow, the latter of whom subscribed to scientific equations that aimed to wrangle war’s nature by, according to Clausewitz, giving it a “veneer of mathematical elegance.”20

More recently, during the Vietnam War, Secretary of Defense Robert McNamara employed a cadre of intellectual “whiz kids” who attempted to compute the North Vietnamese Army into submission through strategic bombing campaigns.21 Beginning in 1965, this approach epitomized the irony of such planning, for at the time there was no agreed strategic endstate for the “strategic” bombings.22 Clearly, this environment of self-assuredness in an unrealized but somehow more efficient and less human future is nothing new, but it is particularly pervasive in regard to AI’s prospects. A sampling of contemporary headlines proves as much:

“Robot Soldiers and ‘Enhanced’ Humans Will Fight Future Wars, Defense Experts Say”;

“Artificial Intelligence and the Future of Warfare”;

“Are Killer Robots the Future of War? Parsing the Facts on Autonomous Weapons”;

“The War Algorithm: The Pentagon’s Bet on the Future of War”;

“AI: The Future of the Defense Industry?”; and

“Future Wars May Depend as Much on Algorithms as on Ammunition, Report Says.”23, 24

And so on. But amidst this deluge of reports, there emerge alarming technical pitfalls that should sow doubt regarding the tactical, operational, and strategic fidelity of AI in a major joint operation characterized by chaos and uncertainty.

Uri Gal, associate professor of business information systems at the University of Sydney, argues that algorithms are no better at predicting the dynamics of human behavior than a crystal ball.25 In other words, humans will still need to perform the strategic assessments and long-range planning that feed the AI its directives. Sharing Gal’s skepticism is author Paul Scharre, who describes in Foreign Policy how feeding an autonomous weapon conflicting or misleading information could initiate a seemingly endless cascade of fatal errors that “could lead to accidental death and destruction at catastrophic scales in an instant.”26 But even in the absence of weapons malfunctions and flawed data, there are serious concerns.
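
To put Scharre’s point in concrete terms, consider a minimal back-of-the-envelope sketch in Python. The numbers below are illustrative assumptions, not figures from his article; the point is only that a small per-decision error rate, tolerable at human tempo, compounds at machine tempo faster than any operator can intervene.

```python
# Toy sketch: expected erroneous engagements before a human can intervene.
# All rates and time windows below are hypothetical, chosen for illustration.

def expected_errors(error_rate: float, decisions_per_second: float,
                    seconds_until_intervention: float) -> float:
    """Expected number of erroneous decisions made before intervention."""
    return error_rate * decisions_per_second * seconds_until_intervention

# A human operator: roughly one decision every two seconds, with a
# one-minute window before someone notices and stops a pattern of errors.
human = expected_errors(error_rate=0.001, decisions_per_second=0.5,
                        seconds_until_intervention=60)

# An autonomous system fed the same flawed data, deciding at machine speed.
machine = expected_errors(error_rate=0.001, decisions_per_second=5_000,
                          seconds_until_intervention=60)

print(f"human-tempo expected errors:   {human:.2f}")   # ~0.03
print(f"machine-tempo expected errors: {machine:.0f}") # ~300
```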

In 2018, three U.S. military officers writing in PRISM showed how “strategic AI” systems that lack transparent decisionmaking processes could goad two nations into war without the leadership of either being acutely aware of the nuanced events and activities that brought them to that point.27 The authors’ use of a fictional narrative to illustrate potential complications is reminiscent of the North Atlantic Treaty Organization’s Able Archer exercise in 1983, which led Soviet leaders to conclude that a strike against the Soviet Union was imminent after the Kremlin mistook training scenarios for real-world mobilization.28 If one follows this scenario to its logical conclusion, instead of officers misreading their own analysis, AI agents of infinitely superior speed could feed officers corrupt information that they would then need to act on swiftly. In turn, the officers would be inclined to blame other humans for any resulting mistakes—that is, assuming they realize a mistake has been made.
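
The dynamic the PRISM authors dramatize can be reduced to a toy feedback loop. The sketch below is a hypothetical model, not drawn from their scenario: two opaque alert systems each raise their posture in response to the other’s observed posture plus sensor noise, and with no human damping the loop, a harmless fluctuation ratchets both sides toward maximum alert.

```python
# Hypothetical escalation loop between two automated alert systems.
# Posture runs from 0.0 (calm) to 1.0 (maximum alert); all parameters
# are illustrative only.
import random

random.seed(3)

def react(observed_other: float, own_level: float) -> float:
    """Raise posture toward the other side's apparent level; never relax."""
    misread = observed_other + random.uniform(0.0, 0.1)  # noisy observation
    return min(1.0, max(own_level, misread))

a = b = 0.1  # both sides begin at low alert
for _ in range(25):
    a, b = react(b, a), react(a, b)  # each reacts to the other's last posture

print(f"after 25 machine-speed cycles: a={a:.2f}, b={b:.2f}")  # both near 1.0
```

Because neither system ever relaxes and each treats the other’s noise-inflated posture as signal, escalation is the only stable outcome, and it is precisely the kind of nuance that leadership might never see.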

Along this line of thought, while speaking under the Chatham House Rule at the 2018 Joint Air Power Competence Centre conference in Germany, a senior U.S. defense official explained that one of the challenges associated with NATO’s dependency on space systems is not just that satellites can be tampered with, but also that such tampering may go undetected for some time because of its subtlety. It appears as though similar concerns exist with regard to AI, except the AI would be the agent doing the tampering to trick its human operators into believing it was doing a good job. While this may sound far-fetched, such a scenario is quite plausible.

In 2017, engineers from Google and Stanford University discovered that an AI agent learned to deceive its creators by hiding information from them to achieve its assigned task.29 Perhaps most concerning is that the Google AI found a way to cheat that was particularly hard for the human mind to recognize, all because it was tasked to do something it could not necessarily accomplish otherwise. In other words, it created the illusion of mission accomplishment to please its creators.
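
The cited finding involved a CycleGAN model that embedded information about its source images in a low-amplitude, high-frequency signal its engineers could not readily perceive. A much older and simpler analogue, least-significant-bit steganography, illustrates the same principle of information hidden in plain sight; the NumPy sketch below is only that analogue, not the mechanism from the paper.

```python
# Least-significant-bit steganography: hide one bit per pixel of an
# 8-bit grayscale image. The carrier changes by at most 1/255 per pixel,
# imperceptible to a human but fully recoverable by a machine.
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # carrier image
secret = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)   # hidden bit-plane

stego = (cover & 0xFE) | secret   # overwrite each pixel's lowest bit
recovered = stego & 0x01          # the receiver simply reads the bits back

assert np.array_equal(recovered, secret)
delta = np.abs(stego.astype(int) - cover.astype(int)).max()
print("max per-pixel change:", int(delta))  # 1, out of a 0-255 range
```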

None of this begins to address the fact that a reliance on autonomous weapons systems in ground war validates the “paper tiger” narrative pushed by some of al-Qaeda’s founding members or that the human relationships built among a war’s participants are often the most lasting positive outcomes of an otherwise grim enterprise.30 It is also worth noting that militarized AI is a relatively untapped multi-billion-dollar industry, which means there are interests already interwoven into the conversation that lie beyond the purview of tactical pragmatism or strategic coherence.31 AI is now a business, with various external actors aiming to persuade and dissuade based on those interests. While charges of “snake oil” may be a stretch, there is certainly room for a debate on the urgency with which some are selling the need to be “first” in the AI arms race as the cure for national security ailments.32

Advocates of militarized AI often dismiss such concerns under the assumption that these kinks will be worked out by the time the tech is operationalized for military application. But if history is any judge, the more likely scenario is that the military will be forced to adapt to these kinks mid-conflict, which presents a broad spectrum of perilous dilemmas to the joint force. Yet despite these and other concerns, there is immense pressure to place the future of American defense in the hands of such technology, in part because of its potential applications during the early conceptual phases of its development.33 It is here that the contours of the digital Maginot Line begin to take form.

On Strategic Coherence

With the Obama Administration’s 2014 introduction of a Third Offset strategy that sought to counter growing conventional threats with the skilled employment of emerging and economical technologies, conversations surrounding change and continuity in military operations have intensified. Since the First Offset of atomic weapons, advances in militarized technology have provided new ways of securing U.S. interests while assuming minimal risk to force or mission. But, as recognized in the U.S. National Intelligence Strategy of 2019, these advances have done the same for the nation’s competitors and adversaries as well.34

In this sense, although technological developments will present the appearance of dramatic change in future conflicts, arms parity between competitors may diminish the probability of a profound detour from wars of the past.35 Furthermore, these systems could close the gap that has for so long afforded the United States an unshakable sense of security by minimizing the degree to which physical separation from a war zone ensures physical security.

A useful tool for gaining a deeper understanding of how RAS and AI might fit into this strategic context is placing them within the framework of the three offsets: nuclear weapons, precision missiles and stealth technology, and now RAS and AI augmentation. The First Offset was deterrence-based but had an incredibly high threshold for deployment, meaning its legitimacy as a tool for shaping the strategic landscape waned right-of-boom (that is, once fighting had actually begun) in any conflict short of nuclear war. Although the Second Offset has been used liberally in counterinsurgency and stability operations, it has still failed to produce consistent strategic effects without the presence of a significant land component to provide guidance, control, and human infrastructure for the postwar order. In this way, viewing either of these technological offsets as inherently decisive reduced war to, as McMaster once stated, “a targeting exercise.”36

Supposedly, the Third Offset will soon revolutionize war more than the previous two, but the presence of nuclear weapons and precision munitions did not greatly alter the reality of ground warfare for servicemembers on the Korean Peninsula in 1950, or in Vietnam in 1965, or in Fallujah, Iraq, in 2004. While these offsets presumably changed the character of war, its nature remained unscathed, as each conflict was a product of the same human motives of fear, honor, or interest expressed as policies and translated into the operational and strategic effects of standoff and deterrence within complex human terrain.37

Therein emerges a precarious balance of determining how nonhuman ways and means might achieve what are almost entirely human strategic ends in war. Montgomery McFate’s well-received deep dive into military anthropology may be interpreted as the antithesis of autonomous warfare, in which societal factors and human influence play an ever less pivotal role to the detriment of broader strategic and political objectives.38 But flesh-and-blood troops need not be removed entirely from the battlefield for a military campaign to assume the visage of illegitimacy in the eyes of impressionable yet critical populations. British-American political scientist Colin Gray summarizes this point well:

When countries and alliances decide to fight, they need to remember that the way they choose to wage war . . . assuredly will leave a legacy on the ground in the kind of post-war order established. A war won by missile strikes from over the horizon . . . or from mobile forces that, being nearly always at sea, have had no direct impact on the enemy’s population, will not have had any opportunity to contribute usefully to a post-war political order.39

Without question, the same logic could be applied to the concept of saturating ground wars with RAS and AI weapons because they appear to offer protection from war’s trauma, or because popular thought has deemed them the future of warfare. In many ways, this line of thinking is a continuation of that which led to the Maginot Line’s construction and the material school of strategy that was most prevalent between the years 1867 and 1914.40 This school of thought, however, proved insufficient when measured by its strategic coherence. Hew Strachan offers context:

But the officers with a predisposition to materialist ideas did not prevail. In France, the Jeune École lost out to conventional battleship construction after the battle of Tsushima in 1905; in Germany, Tirpitz found himself without a viable strategy for actual war in 1914; and in Britain, Fisher could not easily break the stranglehold that the battleship exercised on the imagination of the public and of the government.41

A pamphlet released by the U.S. Army Training and Doctrine Command in 2018 cites efforts by Chinese and Russian actors to develop systems that create “tactical, operational, and strategic standoff,” which, according to the document, is a core challenge driving the function and purpose of the Army’s Multi-Domain Operations concept.42 At the same time, the pamphlet foresees future conflicts taking place within dense urban environments that pose unique and significant challenges to the efficacy of RAS and AI systems. Achieving standoff in an urban war while still producing desired strategic effects sets expectations astronomically high for AI engineers, operators, and decisionmakers alike. Although attaining victory from afar is certainly a favorable condition, it is also important to remember that, as J.F.C. Fuller clarified, victory is not an end state:

[I]n war victory is no more than a means to an end; peace is the end, and should victory lead to a disastrous peace, then politically, the war will have been lost. Victory at all costs is strategic humbug.43

Protection and the Evolution of Arms

The joint warfighting function of protection resides at the center of this debate, particularly in liberal democracies where the public demands minimal casualties and swift resolutions to its wars—perhaps even more so in wars of the future, which are prone to be broadcast in near real time.44 Distance as a form of protection and driver of technological progress in war is consistent throughout history, and it began as a tactical stimulus before evolving into a more political one.

More than two millennia ago, Alexander the Great (356–323 BCE) was the first to deploy catapults in the field rather than using them solely as siege weapons. In one instance, Alexander wielded such means as anti-access/area denial systems to fix the Scythians at the Jaxartes River during a crossing, thereby controlling the distance between his armies and the Scythians and affording himself additional decision space.45

Centuries later, it was Gaius Julius Caesar (100–44 BCE) who highlighted the challenges posed to his armies by chariot warfare. According to Caesar, chariots served three purposes: to throw enemy ranks into confusion, to deliver foot soldiers where the fighting was fiercest, and to egress swiftly.46 A focus on gaining and maintaining control over distance was therefore the common denominator—getting to the battle, reducing enemy forces in battle, and conducting a swift withdrawal.

Henry V (1386–1422) used bowmen to achieve standoff at Agincourt in 1415, but his victory was decisive precisely because his French adversaries lacked the depth of effects those weapons provided, and Henry was able to close the distance and deal the finishing blow.47 During the 19th century, chief of the Prussian general staff Helmuth von Moltke (1800–1891) described how railways had revolutionized the manner in which he was capable of mobilizing his forces: “They enormously increase mobility, one of the most important elements in war, and cause distances to disappear.”48

In 2014, global arms diffusion was one of the driving forces behind the need for a Third Offset because the strategic advantage provided by precision missile technology was no longer exclusive, nor was it considered decisive in a potential contest with a peer or near-peer adversary. Just as the invention of precision-guided munitions gave birth to a self-fulfilling prophecy, once a market and suppliers exist for AI weapons, they will proliferate rapidly. The United States enjoyed a technological monopoly over every adversary it encountered during the first two offsets; that era is likely over.

With the advent of RAS and AI, the world seems to be inching closer to the edge of what might be considered a protective future war theory. The catalysts to this shift are as social as they are strategic. Most Western nations are not capable of mustering the numbers that conscription affords some of their numerically superior adversaries, and therefore technological compensation and standoff become of even greater import.49 The logical conclusion of this evolution in arms brings about fewer means of closing the time and space gaps between soldiers in war, and more ways of making them permanent. Unfortunately, states have rarely produced positive human outcomes in war through means devoid of direct human influence. Alexander, for instance, was able to conquer Asia not because of the battles he won, but rather because of the people he won over.50

Whereas the objective of technological advancement from Alexander’s era to the industrial revolution was to kill from a distance, control time and space, and then close the physical gap between forces by expediting their arrival to battle under favorable conditions, today a leading objective is to freeze that gap—the ultimate form of protection. But if this objective is common between friend and foe alike, where does the battlefield end for a desperate, similarly equipped adversary, and for how long might Western populations expect to remain outside of it?

English military historian John Keegan referred to the area in which troops were placed in immediate danger as the “killing zone”—a variable space that, depending on the means available and conditions of war, could be very narrow or quite wide.51 This was a founding purpose of the Maginot Line: to restrict the killing zone to a geographical area outside of French cities by driving a wedge between attacking German armies and France’s civilian population.52 While 1940 shattered the expectation of a sequestered, tailored killing zone in Europe, studies from the U.S. Army War College show that such beliefs may be even more misguided in the 21st century.53

Considering the present realities of global arms diffusion, militaries will increasingly be burdened by public expectations of standoff and protection while struggling to find means that also set conditions to control the strategic and political post-war landscape. The question of our time is not whether the West can achieve what Christopher Coker calls “post-human war,” but rather whether Western governments can still achieve the characteristically human political objectives of war through a reliance on post-human means.54

To be sure, the employment of RAS and AI devoid of human agency, no matter how advanced the systems involved, cannot be strategically decisive. The enduring and increasing need to render militaries safe by further distancing them from the perils of war may decrease the likelihood of the joint force achieving its strategic objectives in the next major conflict.

 

Operations at U.S. Army Cyber Command (ARCYBER) headquarters, Fort Belvoir, Va., May 15, 2019. (Photo by Bill Roche)

Social and Cultural Implications

At present, the overwhelming majority of public opposition to weaponized AI is of the moral and ethical persuasion. The pledge refusing participation in AI weapons programs, signed by more than 150 organizations and nearly 2,500 leaders from the broader engineering community, is one such example.55 While there is no doubt that ethical debates are important, the concerns associated with RAS and AI in war extend far beyond morality and into the very instruments of national power that the United States relies upon to link operational art to strategic objectives and, ultimately, to connect strategic objectives to long-term political stability.

Elusive endstates amidst tactical victories prove that even if moral machines become a reality, the perceived automation of military operations could be perilous for reasons beyond the battlefield. First, and perhaps foremost, if assumptions feed the ways, means, and ends of strategy, and those assumptions create the appearance of a cleaner or more efficient war, then any war that does not meet that expectation will be particularly shocking to the national consciousness. Thus, in such a war, vast adjustments will be required of the joint force mid-conflict. These adjustments will be not only technical but also cognitive and theoretical, requiring an extreme reshaping of expectations from both the armed forces and the public regarding what victory might require of them.

Second, fair-weather forecasts of future war conditions actually make the onset of war more likely. B.H. Liddell Hart reached similar conclusions in 1954 when he wrote that the alleged peace secured by nuclear deterrence could replace world-ending “total wars” with endless limited wars that play out below the threshold of nuclear aggression.56 If the public is conditioned to believe that war has transcended the human realm, and technology enables militaries and thereby their nations to achieve their strategic objectives remotely, then there is little left to fear when the risk of war becomes imminent.

This is precisely the challenge that French leaders faced in 1940; the same conundrum that T.R. Fehrenbach examined in 1963 with his opus on the Korean War; the same problems that M. Shane Riza analyzed in 2013’s Killing Without Heart; and it is what many societies face today—the promise of an efficient, distant war where great sacrifice is neither required nor expected of the persons benefitting from its outcome. As Christopher Clark observed, this not only cheapens the true cost of war but also forces war itself to assume a more civilized façade, with less inherent political risk assumed by those making the decision to wage it.57 One might distill the complexity of these challenges into an aphorism: If a war is deemed unworthy of human investment, it could be that it is a war not worth waging.

Recommendations

Rarely in military history has an army been so carefully equipped and trained for the next war as was the French army at that time . . . But suddenly, the Germans fought a war that was completely different from the war that France’s forces had been preparing for.

—Karl-Heinz Frieser58

Just as the Maginot Line created an illusion of security, guaranteed standoff, and physical protection that made its shattering all the more shocking to the French polity, the pursuit of militarized RAS and AI has led many to believe that the key to a more efficient and secure future lies within these technologies. The United States Armed Forces owe themselves and their civilian leaders honesty regarding a prudent approach to integrating AI and a pragmatic vision of the threats and risks associated with relying on these systems to achieve future policy goals.

The fact that competitors such as Russia and China are pursuing this technology narrows the decision space of leaders in the United States. In this way, the Pentagon is obligated to explore autonomous weapons as force multipliers. What it should not do, however, is allow the joint force and the citizens it serves to believe that RAS and AI have the ability to alter the brutal nature of war or adjudicate its conditions once an adversary has committed its forces to battle.

Looking ahead, the JAIC and U.S. Army Futures Command should be as focused on informing senior defense leaders and policymakers of what RAS and AI cannot do—and what could go horribly wrong—as on telling them what these systems might do. Through all this, the JAIC must take a page out of U.S. Central Command’s by-with-through playbook and integrate, to the greatest extent possible, warfighters with no technical AI experience into its decision cycle.59 If the joint force is to benefit substantively from a Third Offset consisting of RAS and AI in ground warfare, it will require immense buy-in from the fighting ranks, who will most assuredly be asked to trust such experimental and potentially volatile technology with their lives.

JAIC Director Dana Deasy’s statement before the U.S. House of Representatives Armed Services Subcommittee on Intelligence and Emerging Threats and Capabilities mentioned cooperation with U.S. Special Operations Command and the U.S. Army’s new AI Task Force.60 These relationships will be essential to the development of utilitarian, functional AI that directly supports warfighters’ needs rather than impressing a surge of fresh requirements upon them. Unit commanders would then be free to train for operating in technologically degraded environments even as they introduce their formations to emergent technologies in their combat training center rotations and culminating exercises.

It is also important to remember that as many opportunities as RAS and AI offer, they summon just as much liability to both force and mission.61 Organizations such as the JAIC and U.S. Army Futures Command will be under immense pressure to “modernize” rapidly in accordance with guidance in the 2018 U.S. National Defense Strategy and the strategies of each military service. But answering the question of what modernization looks like and which specific demands must be met for the joint force to fight and win is of greater importance than beating Russia or China in a generic AI arms race.

If, as Colin Gray suggests, prudence is the essence of strategy, then the urgency with which the Pentagon pursues modernization through RAS and AI must be tempered by a healthy dose of caution regarding the second- and third-order effects of AI on strategic coherence and post-war political legitimacy.62

When technological advantages are degraded, denied, destroyed, or just not capable of achieving the political objectives in war, human soldiers with the tenacity and ingenuity to adapt will remain the most effective offense and the last line of defense. Beyond the mire of discussions surrounding hybrid war, cyber war, robot war, and operations below the threshold of war, the threat of war still looms. Should that threat present itself, if the past is any prologue, the ensuing conflict will be chaotic beyond imagination. Perhaps RAS and AI will play a role in controlling that chaos—but then again, perhaps they will add to it. In any case, the nation that most effectively nurtures the moral factors of war by tapping into and managing properly the human potential within its ranks and its strategies will have the advantage. PRISM

Notes

1 Lawrence Freedman, The Future of War: A History (New York: Public Affairs, 2017), 287.

2 Alistair Horne, To Lose A Battle: France 1940 (London: Penguin Books, 1990), 71–72. The wall was named after French minister of war André Maginot, who was involved in its planning until his death in 1932.

3 Ibid., 73, 80. Also see Eugen Weber, The Hollow Years: France in the 1930s (New York: W.W. Norton & Company, Inc., 1994), 246. For a take on “Maginot thinking,” see Karl-Heinz Frieser, The Blitzkrieg Legend: The 1940 Campaign in the West (Annapolis, MD: Naval Institute Press, 2012), 77–78, 323.

4 A December 1929 law authorized the initial four-billion-franc credit to begin the line’s construction. Weber, The Hollow Years, 245. By 1935, the line cost seven billion francs. Horne, To Lose a Battle, 74. The push into France was so successful that several German generals described it as a “miracle.” Frieser, The Blitzkrieg Legend, 1–3.

5 Hew Strachan, The Direction of War: Contemporary Strategy in Historical Perspective (Cambridge: Cambridge University Press, 2013), 168–170.

6 In 2015, the U.S. Army Research Laboratory offered a vision of future war that would be subject to “only a moderate degree of supervision by humans.” Alexander Kott, “Ground Warfare in 2050: How It Might Look,” U.S. Army Research Laboratory, August 2018, 11.

7 H.R. McMaster, “Continuity and Change: The Army Operating Concept and Clear Thinking About Future War,” Military Review (March–April 2015): 7.

8 “U.S. Ground Forces Robotics and Autonomous Systems (RAS) and Artificial Intelligence (AI): Considerations for Congress,” Congressional Research Service, November 20, 2018, <crsreports.congress.gov/product/pdf/R/R45392>.

9 In addition to the above McMaster quote, Colin Gray and Lawrence Freedman both see war’s nature as rather fixed and believe technology has a superficial impact on its fundamentally human nature. The seventh dictum of Gray’s general theory of strategy, for instance, is that “strategy is human.” Colin S. Gray, The Future of Strategy (Oxford: Oxford University Press, 2014), 47.

10 Peter W. Singer, “Isaac Asimov’s Laws of Robotics Are Wrong,” Brookings Institution, May 18, 2009, <www.brookings.edu/opinions/isaac-asimovs-laws-of-robotics-are-wrong/>.

11 Sydney J. Freedberg, Jr., “Joint Artificial Intelligence Center Created Under DoD CIO,” Breaking Defense, June 29, 2018, <breakingdefense.com/2018/06/joint-artificial-intelligence-center-created-under-dod-cio/>.

12 U.S. House Armed Services Committee Subcommittee on Intelligence and Emerging Threats and Capabilities, Dana Deasy, “Department of Defense’s Artificial Intelligence Structure, Investments, and Application,” December 11, 2018, available at <docs.house.gov/meetings/AS/AS26/20181211/108795/HHRG-115-AS26-Wstate-DeasyD-20181211.pdf>.

13 Stephan De Spiegeleire, Matthijs Maas, and Tim Sweijs, “Artificial Intelligence and the Future of Defense: Strategic Implications for Small- and Medium-Sized Force Providers,” The Hague Centre for Strategic Studies, 2017, <hcss.nl/sites/default/files/files/reports/Artificial%20Intelligence%20and%20the%20Future%20of%20Defense.pdf>.

14 Mark A. Milley, “Milley: Artificial Intelligence Could Change Warfare,” Association of the United States Army, June 22, 2018, <www.ausa.org/news/milley-artificial-intelligence-could-change-warfare>.

15 Gray, Future of Strategy, 31, 48, 81, 109.

16 For instance, two of the greatest tragedies of World War II, the slaughter and surrender of the 106th Infantry Division at the Battle of the Bulge and the British 1st Airborne Division’s failure to secure the bridge at Arnhem, were in many ways the result of flawed assumptions pertaining to the enemy’s capabilities and will to resist. In both instances, Allied forces entered the fight emboldened by the belief that their German adversary was on his heels and victory was at hand.

17 Freedman, Future of War, 249.

18 Ibid., 243. For a more focused discussion on the topic, see M. Shane Riza, Killing Without Heart: Limits on Robotic Warfare in an Age of Persistent Conflict (Lincoln, NE: Potomac Books, 2013).

19 Anthony H. Cordesman, “Losing by ‘Winning’: America’s Wars in Afghanistan, Iraq, and Syria,” Center for Strategic and International Studies, August 13, 2018, <www.csis.org/analysis/losing-winning-americas-wars-afghanistan-iraq-and-syria>; See also Daniel P. Bolger, Why We Lost: A General’s Inside Account of the Iraq and Afghanistan Wars (Wilmington, MA: Mariner Books, 2014).

20 Donald Stoker, Clausewitz: His Life and Work (New York: Oxford University Press, 2014), 34–35.

21 Senior U.S. military officers often referred to McNamara’s young and strong-headed analysts condescendingly as “whiz kids.” H.R. McMaster, Dereliction of Duty: Lyndon Johnson, Robert McNamara, the Joint Chiefs of Staff, and the Lies that Led to Vietnam (New York: Harper Collins, 1997), 18–21, 275–299.

22 Authors of the Vietnam-era PROVN pacification and development program, such as Lt. Col. Don Marshall, lamented the dramatic escalation of combat operations “at the expense of” an “agreed upon plan or program for the long-term development of a nation.” Montgomery McFate, Military Anthropology: Soldiers, Scholars, and Subjects at the Margins of Empire (New York: Oxford University Press, 2018), 284–285.

23 Josh Gabbatis, “Robot Soldiers and ‘Enhanced’ Humans Will Fight Future Wars, Defense Experts Say,” The Independent, October 15, 2018, <www.independent.co.uk/news/uk/home-news/future-war-robot-soldiers-enhanced-humans-space-gene-editing-ministry-of-defence-a8583621.html>; M.L. Cummings, “Artificial Intelligence and the Future of Warfare,” Chatham House, January 2017, <www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf>; Kelsey D. Atherton, “Are Killer Robots the Future of War? Parsing the Facts on Autonomous Weapons,” New York Times, November 15, 2018, <www.nytimes.com/2018/11/15/magazine/autonomous-robots-weapons.html>; Colin Clark, “The War Algorithm: The Pentagon’s Bet on the Future of War,” Breaking Defense, May 31, 2017, <breakingdefense.com/2017/05/the-war-algorithm-the-pentagons-bet-on-the-future-of-war/>.

24 Christian Davenport, “Future Wars May Depend as Much on Algorithms as on Ammunition, Report Says,” Boston Globe, December 3, 2017, <www.bostonglobe.com/business/2017/12/03/future-wars-may-depend-much-algorithms-ammunition-report-says/pVT69lcgAGW6XFipB3nF3J/story.html>.

25 Uri Gal, “Predictive Algorithms Are No Better at Telling the Future than a Crystal Ball,” The Conversation, February 12, 2018, <theconversation.com/predictive-algorithms-are-no-better-at-telling-the-future-than-a-crystal-ball-91329>.

26 Paul Scharre, “A Million Mistakes a Second,” Foreign Policy, September 12, 2018, <foreignpolicy.com/2018/09/12/a-million-mistakes-a-second-future-of-war/>.

27 Matt Price, Steve Walker, and Will Wiley, “The Machine Beneath: Implications of Using Artificial Intelligence in Strategic Decisionmaking,” Prism 7, no. 4 (November 2018): 92–105.

28 Ben B. Fischer, A Cold War Conundrum: The 1983 Soviet War Scare (Washington, DC: Center for the Study of Intelligence, 1997), <www.cia.gov/library/readingroom/docs/19970901.pdf>.

29 Casey Chu, Andrey Zhmoginov, and Mark Sandler, “CycleGAN, a Master of Steganography,” report from the 31st Conference on Neural Information Processing Systems, 2017, <arxiv.org/pdf/1712.02950.pdf>.

30 Michael P. Ferguson, “Artificial Intelligence Is No Substitute for Troops on the Ground,” Washington Examiner, November 27, 2018, <www.washingtonexaminer.com/opinion/op-eds/artificial-intelligence-is-no-substitute-for-troops-on-the-ground>.

31 Andrea Shalal, “The Pentagon Wants at Least $12 Billion to Fund AI Weapon Technology in 2017,” Business Insider, December 14, 2015, <www.businessinsider.com/the-pentagon-wants-at-least-12-billion-to-fund-ai-weapon-technology-in-2017-2015-12?international=true&r=US&IR=T>.

32 Jory Heckman, “Artificial Intelligence vs. ‘Snake Oil:’ Defense Agencies Taking Cautious Approach Toward Tech,” Federal News Network, December 12, 2018, <federalnewsnetwork.com/defense-main/2018/12/ai-breakthroughs-versus-snake-oil-defense-agencies-taking-cautious-approach-toward-tech/>.

33 It is important to note here that of the three forms of AI, only the most rudimentary even exists (Artificial Narrow Intelligence). Artificial General Intelligence and Artificial Super Intelligence are still conceptual theories rooted in potential. It is impossible to know how or even if this technology could be operationalized for battlefield application successfully and reliably.

34 Timothy A. Walton, “Securing the Third Offset Strategy: Priorities for the Next Secretary of Defense,” Joint Force Quarterly 82 (3rd Quarter 2016): 6–15; Office of the Director of National Intelligence, “2019 National Intelligence Strategy of the United States,” <www.dni.gov/files/ODNI/documents/National_Intelligence_Strategy_2019.pdf>.

35 Michael P. Ferguson, “Why ‘Robot Wars’ Might Not Be Our Future,” The National Interest, November 17, 2018, <nationalinterest.org/blog/buzz/why-robot-wars-might-not-be-our-future-36347>.

36 McMaster, “Continuity and Change,” 7.

37 An observation from Thucydides, The Peloponnesian War, ed. Walter Blanco and Jennifer Tolbert Roberts, trans. Walter Blanco (New York: W.W. Norton and Company, Inc., 1998).

38 McFate, Military Anthropology, 283–294.

39 Gray, Future of Strategy, 97.

40 After nearly two decades of costly counterinsurgencies, many Western leaders desire quick, cost-efficient wins and a return to the “spectator wars” of the late 20th century. Similarly, Alistair Horne describes how the Maginot Line constituted a “fatal full cycle” of French military thought that began in 1870, swung to the offense during World War I, and led to an obsession with defensive thought before World War II to compensate. Horne also explains how the line became a way of life as much as a strategic component, much like the present widespread fascination with militarized AI. Horne, To Lose a Battle, 75.

41 Strachan, The Direction of War, 173.

42 “The U.S. Army in Multi-Domain Operations 2028,” TRADOC Pamphlet 525-3-1 (December 6, 2018), vi, x.

43 Here, Fuller channels Napoleon’s old maxim: “To conquer is nothing; one must profit from one’s success.” John Frederick Charles Fuller, The Generalship of Alexander the Great (Cambridge, MA: Da Capo Press, 1960), 312.

44 For instance, force protection is one of the five capability objectives of the U.S. Army’s RAS strategy. “The U.S. Army Robotic and Autonomous Systems Strategy,” Army Capabilities Integration Center (Fort Eustis, VA: U.S. Army TRADOC, 2017), 17, <www.arcic.army.mil/app_Documents/RAS_Strategy.pdf>.

45 Alexander used these tools during the Illyria campaign. Fuller, Alexander the Great, 296.

46 Gaius Julius Caesar, The Gallic War, trans. Carolyn Hammond (Oxford, UK: Oxford University Press, 1996), 86.

47 John Keegan, The Face of Battle (New York: The Viking Press, 1976), 79–116.

48 Strachan, Direction of War, 168.

49 This is true for most Western nations, but it is worth noting that certain Baltic and Scandinavian states, such as Sweden, have reinstated conscription in light of heightened tensions with Russia in particular. Martin Selsoe Sorenson, “Sweden Reinstates Conscription, With an Eye on Russia,” New York Times, March 2, 2017, <www.nytimes.com/2017/03/02/world/europe/sweden-draft-conscription.html>.

50 Alexander’s statecraft was his sharpest weapon, even more so than the love he earned from his armies, because it allowed him to consolidate gains from his battlefield victories and transform them into enduring strategic and political realities. Fuller, Alexander the Great, 109.

51 Keegan, Face of Battle, 305.

52 France, seeing a fight on Belgian soil as preferable to one on French streets, intended to fight “as far forward of her frontiers as possible.” Horne, To Lose a Battle, 75. Also see Weber, The Hollow Years, 245.

53 Samuel R. White, Jr., ed., Futures Seminar: The United States Army in 2030 and Beyond (Carlisle Barracks, PA: U.S. Army War College, 2016), 1–8, available at <publications.armywarcollege.edu/pubs/3244.pdf>.

54 Christopher Coker, Waging War Without Warriors? The Changing Culture of Military Conflict (Boulder, CO: Lynne Rienner, 2002).

55 Ian Sample, “Thousands of Leading AI Researchers Sign Pledge Against Killer Robots,” The Guardian, July 18, 2018, <www.theguardian.com/science/2018/jul/18/thousands-of-scientists-pledge-not-to-help-build-killer-ai-robots>.

56 B.H. Liddell Hart, Strategy: Second Revised Edition (New York: Penguin Group, 1991), xviii–xix.

57 Christopher Clark, The Sleepwalkers: How Europe Went to War in 1914 (New York: Harper Collins, 2013), 561–562.

58 Frieser, The Blitzkrieg Legend, 324.

59 Joseph L. Votel and Eero R. Keravuori, “The By-With-Through Operational Approach,” Joint Force Quarterly 89 (2nd Quarter 2018): 40–47; see also Michael X. Garrett et al., “The By-With-Through Approach: An Army Component Perspective,” Joint Force Quarterly 89 (2nd Quarter 2018): 48–55.

60 Deasy, “Artificial Intelligence Structure, Investments, and Application.”

61 “2019 National Intelligence Strategy,” 7.

62 Gray, Future of Strategy, 3.