Lieutenant Colonel M.E. Tobin, USMC, is a Military Planner in the Europe-Eurasia Regional Center at the Defense Intelligence Agency. Commander William G. Coulter, USN, is the Joint Wargaming Experimentation Division Branch Chief at the Joint Staff J7. Major John P. Romito, USA, is the Fires Planner and Targeting Officer for Special Operations Command South. Major Derek R. Fitzpatrick, USA, is a Civil Affairs Officer in U.S. Southern Command J7/9.
On August 2, 2019, Secretary of Defense Mark Esper informed the military Services of a department-wide fiscal program review to better align the future joint force toward a near-peer threat environment, a process similar to the “night court” proceedings he held during his tenure as Secretary of the Army. The directive memo states, “No reform is too small, too bold, or too controversial to be considered.”1 Concurrently, in anticipation of government-wide fiscal tightening, combatant commanders (CCDRs) are attempting to do more with less and critically analyzing all efforts with a focus on results. CCDR staffs are meeting the commanders’ intent by reviewing combatant command (CCMD) campaign plan efficacy via the current military assessment process while taking new, innovative approaches to assessment and accounting. The increased scrutiny of budgets and fiscal tightening require CCDRs to optimize investments; however, the current joint military assessment process is inadequate for evaluating campaign plans.
Assessments are inherent to both the joint operations process and the commanders’ decision cycle. But at the strategic level, assessments are often an afterthought and, even when applied, frequently lack structure and methodology. Assessment doctrine provides clearly articulated guidance on why assessments are crucial to the success of the joint force, but the same doctrine provides little insight into when and with what data assessments are most effective. With this minimal guidance, commanders and their staffs develop command-specific assessment methods that lack consistency from command to command, and decisionmakers are unable to see where investments are or are not fruitful. The concept of data-driven assessment is far from novel to the Department of Defense (DOD) and the joint force, as U.S. failures in Vietnam attest.
Data-driven corporate concepts such as return on investment (ROI) come from private-sector methodologies that do not directly translate to the military. Yet once those limitations are recognized, such concepts have relevance and value in determining the most efficient use of limited resources for theater security cooperation (TSC) operations as elements of the CCMD campaign plan. Therefore, by examining the failures of data-driven analysis from Vietnam and reviewing private-sector methodology, the joint force can improve the model by which it conducts assessments.
Vietnam: Failure of Metrics-Driven Assessment
Concepts such as ROI and the implementation of assessments in DOD carry considerable historical baggage; they are deeply associated with the failed data-heavy, computer-based quantitative analyses of the Vietnam era. The Hamlet Evaluation System (HES) used in the Vietnam War was the gold standard for quantitative counterinsurgency assessment.2 HES was developed in 1966 by the Central Intelligence Agency and subsequently implemented by DOD in 1967 as part of the Pacification Evaluation System under the Office of Civil Operations and Rural Development Support (U.S. Military Assistance Command, Vietnam). Designed as an automated system, the Pacification Evaluation System determined, through data analysis, who controlled the Vietnamese populace. The core of HES was a questionnaire that rated six measures of performance and effectiveness, with associated indicators similar to those found in Field Manual 5-0, The Operations Process. According to Ben Connable:
By the end of the Vietnam war, it was clear that HES had not successfully informed policy. Since the data was presented as scientifically accurate, the quantitative results with their false precision misled the executive branch, Congress, and the American public as to how the United States was actually performing in Vietnam.3
Vietnam illustrates the limitations of data-driven analytics as the dominant factor in determining policy and strategy.
As Mark Twain famously stated, “Facts are stubborn, but statistics are more pliable.”4 During Vietnam, analysts in Washington, DC, “employed what were then cutting-edge computer programs to tabulate millions of reports of all kinds. . . ; the sheer amount of data collected in Vietnam is probably unparalleled in the history of warfare.”5 Backed by hard numbers collected from the field, the analysis resulted in assessment statistics presented as unassailable facts. No matter how comprehensive the process may be, the data and models are fallible, resulting in questionable assessments. Given this history, one would assume that doctrine would by now, nearly 45 years later, address the identified shortfalls; however, data-driven analytics are not the only shortfall in the current assessment process.
Joint Doctrine: Assessments
Current commanders and staff officers at all echelons of DOD appreciate the need to analyze the effectiveness of their operations. A recent Joint Doctrine Analysis Division special study found that “current assessment doctrine does not provide sufficient guidance and procedures on how to evaluate progress toward achieving objectives, creating desired conditions, and accomplishing tasks during joint operations.”6 Those gaps in guidance and evaluative processes fall under three categories: lack of a prescribed process, heavy focus on “art” elements, and inadequate treatment of noncombat operations.
Joint doctrine provides broad guidance on a subjective process but falls short in providing the CCDR and staffs the tools required to make an accurate assessment. Joint Publication (JP) 3-0, Joint Operations, focuses on the why of assessment but leaves the how largely undefined. JP 5-0, Joint Planning, warns planners that assessment models may be fallible and that “the presence of numbers or mathematical formulae in an assessment does not imply deterministic certainty, rigor, or quality.”7 The guidance to avoid a “solely numbers” approach toward assessment is a hard lesson learned from the Vietnam War. The Commander’s Handbook for Assessment Planning and Execution, a predoctrinal publication dedicated to assessments, is entirely descriptive, not prescriptive; it contains overviews of the what and why of assessments, but again, the how is left to practitioners to determine. The most recent assessment publication, Multi-Service Tactics, Techniques, and Procedures for Operation Assessment, largely regurgitates the Commander’s Handbook and fails to explain how to assess effects against expenditure of resources. Doctrine is only a starting point; it cannot assess steady-state campaign plan investments and requires improvement, such as the application of a methodology like ROI, to assist CCMDs in optimizing operations and to bring assessments into the 21st century.
Return on Investment
Business frameworks and methodologies for analyzing DOD operations could bridge the current doctrinal assessment gap. Recently, joint doctrine and multiple senior leaders have begun using the terms investment and return on investment to describe DOD actions and outcomes within the operational environment. The June 16, 2017, version of JP 5-0 introduced the phrase operations, activities, and investments (OAIs) to describe joint actions globally. The phrase replaced the previous term operations, actions, and activities in the 2011 version of JP 5-0. In 2017, a Government Accountability Office report similarly highlighted DOD’s increasing shift toward business models, noting, “According to DOD and CCMD officials we interviewed, readiness is their key performance measure and they have ongoing efforts to develop more tangible, quantifiable measures to determine . . . return on investment.”8 In 2018, a Chairman of the Joint Chiefs of Staff Instruction noted how evolving “analysis of alternatives methodologies . . . [seeks to] consider all alternatives for . . . meeting validated capability requirements . . . [while] determining the ‘point’ of diminishing return on investment with acceptable risk.”9 Likewise, ROI has recently entered the lexicon of senior leaders within U.S. Southern Command (USSOUTHCOM) and U.S. Africa Command (USAFRICOM). During hearings before the Senate Armed Services Committee, leaders from both commands used the term ROI to describe the assessed effectiveness of the congressionally funded operations of their commands.10 The addition of this new terminology to the joint lexicon has inspired joint planners to develop pilot programs to test the usefulness of data-centric assessment, modeled on the private sector, and the technical architecture necessary to manage data and execute various functions.
The How of Operational Assessment
ROI is associated with corporate finance, and there are several different methods of calculation, each with a different purpose. Businesses that must achieve productivity and profit goals use a defined assessment process. For example, human resources–based ROI formulas determine the value of increased performance by taking the increased productivity and/or output of the organization and dividing it by the cost of employee training.11 The formulas are deceptively simple to calculate, but the data collection can be much more difficult. Formulas to track progress and measurable results provide industry with analyzed information to plan and adjust; however, corporate finance equations, in their pure form, do not logically translate to military operations. The military does not make money; it spends it.
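To make the arithmetic concrete, the human resources formula described above can be written as a simple ratio. This is a generic illustration rather than a single authoritative standard; some variants divide the net benefit (gain minus cost), rather than the gross gain, by the cost:

```latex
\[
\mathrm{ROI}_{\text{training}}
  = \frac{\Delta\,\text{productivity or output (monetized)}}
         {\text{cost of employee training}}
  \times 100\%
\]
```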
Captured financial costs can accurately quantify total resource investment in an operation, but only insofar as that investment can be correlated to nonfinancial rates of return. Therefore, calculating “operating return” may be most applicable to military-related uses, wherein operating efficiency is a ratio between operating profit and the assets committed toward earning that profit.12 Although ROI typically uses quantitative values, methods exist to incorporate qualitative and intangible elements into the calculations.13 Additionally, while military operations have no mathematical substitute for operating profit, the principle is clear: resources and assets committed are quantifiable, and the achievement of military objectives or measurable change in the environment can serve as the “profit” in the analysis.
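By analogy, the operating-return ratio and a hypothetical military counterpart might be sketched as follows; substituting measured progress toward an objective for operating profit is our illustration, not a doctrinal formula:

```latex
% Corporate operating return, per the definition above
\[
\text{operating return} = \frac{\text{operating profit}}{\text{assets committed}}
\]
% Hypothetical TSC analog: measurable change stands in for profit
\[
\text{TSC return} = \frac{\text{measured change toward objective}}
                         {\text{resources committed (funds, personnel, time)}}
\]
```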
When Is ROI Applicable?
Describing ROI through a data-centric, quantitative method may serve two important purposes: to enhance the commander and staff’s ability to understand the effects that committed resources are creating and to enable the commander’s decisionmaking process. ROI and its subordinate concepts are most applicable to geographic combatant commanders conducting TSC activities in their areas of responsibility. The U.S. Government invests sizable amounts of money, manpower, and time in an effort to build partner capacity (BPC), strengthen key relationships, and secure national interests.14 In these situations, it is both necessary and prudent to develop an understanding of the resources committed to U.S. objectives and to evaluate actual progress toward them. A commander can make the best resource-informed decisions when there is a more complete view of the resources applied to a problem and the outcomes and effects those resources achieve.
Joint planners have a variety of tools at their disposal to address wartime assessment. In general, ROI is not applicable as a basis for strategic or operational planning during wartime. In total war and limited conflict, the Relative Combat Power Assessment (RCPA) provides an evaluation of comparative friendly and enemy combat power, based on tangible and intangible factors at the onset of conflict. Throughout the conflict, combat effectiveness is determined through battle damage assessments, updated order of battle calculations, and other inputs to feed and update the initial enemy strength estimates and RCPAs for subsequent operational engagements. The combat assessment and RCPA provide the commander and staff with concrete data on enemy force assessment; these assessment tools contribute to measuring the achievement of overall campaign objectives related to the destruction of the enemy’s war-making capacity.
What Kind of Data?
Although ROI is a tool well suited for assessing geographic combatant commander security cooperation activities and operations to BPC, critical to its application is an understanding of what data are required and relevant for an estimation of returns. Half the equation, the investment, is readily quantifiable through funding and appropriations: how much money DOD has spent on any given activity or program. The other half, the return, has endlessly frustrated joint planners. DOD Instruction 5132.14, Assessment, Monitoring, and Evaluation (AM&E) Policy for the Security Cooperation Enterprise, offers a framework for selecting, collecting, and assessing return data in support of ROI: “AM&E indicates returns on investment . . . and will help DOD understand what security cooperation methods work and why, and apply lessons learned and best practices to inform security cooperation resources and policy decisions.”15
The first step of the AM&E framework is a baseline assessment leveraging qualitative, quantitative, and perceptual data sets that detail “the extent to which an allied or partner nation shares relevant strategic objectives with the United States, . . . [the] partner’s current ability to contribute to missions to address such shared objectives, [and] a detailed holistic analysis of relevant partner capabilities.”16 The AM&E framework baseline provides outputs and outcomes as key qualitative and quantitative data sets. Outputs are the actions taken by a partner nation’s military forces after the application of DOD resources such as training and equipping; they can be both qualitative and quantitative, for example, the number of operations conducted by a newly trained partner force. More important, yet more difficult to quantify, are outcome data sets tracking the employment of partner nation capabilities toward the achievement of objectives. In relation to initial assessments, outcomes focus on changes in the operational environment resulting from the application of enhanced partner capability.17 Taken together, these four kinds of data (baseline assessment, investment, outputs, and outcomes) supply the framework for calculating ROI within DOD BPC and security cooperation activities.
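As a sketch of how a staff might organize these four categories for analysis, consider the minimal record structure below; the field names and values are hypothetical illustrations, not drawn from DOD Instruction 5132.14:

```python
from dataclasses import dataclass, field

@dataclass
class EngagementRecord:
    """One reporting period for a single BPC activity (hypothetical schema)."""
    period: str                     # reporting month, e.g., "2019-09"
    baseline_score: float           # initial partner-capability assessment (0-100)
    investment_usd: float           # funds expended during the period
    personnel_deployed: int         # U.S. trainers committed
    outputs: dict = field(default_factory=dict)   # partner actions after training
    outcomes: dict = field(default_factory=dict)  # changes in the environment

# Example record combining baseline, investment, output, and outcome data.
record = EngagementRecord(
    period="2019-09",
    baseline_score=62.0,
    investment_usd=450_000.0,
    personnel_deployed=24,
    outputs={"partner_ops_conducted": 12},
    outcomes={"unilateral_raids": 9},
)
```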
Practical Examples with Hypothetical Data Sets
Two sets of hypothetical data, one from a psychological operation to influence behavior and one from BPC operations by USSOUTHCOM Special Operations Command South (SOCSO), provide a better understanding of how ROI analysis can inform a commander’s decisionmaking. In the first example, figure 1 represents an analysis of a hypothetical psychological operation that used multiple media platforms to advertise a tip hotline for local communities to report criminal activities, together with the actionable tips that resulted.18 The targeted messages were broadcast across digital, radio, and television platforms. The operational headquarters captured the number of broadcast hours per month and the amount of actionable information generated by the tip hotline. The resulting graphed data help to identify correlations between activities and observable outcomes.
Figure 1 shows a clear correlation between digital media and elevated tip hotline activity; increased digital marketing efforts in September and December resulted in elevated tip hotline activity in October and January, respectively. Digital advertisement is more effective than television/visual or radio/audio at promoting a desired behavior; therefore, the ROI for digital is greater than that for other media. Staffs can use such data to optimize resources, reducing investment in less effective mediums and increasing it in more effective platforms. Furthermore, collection and analysis of data over time would allow analysts to identify the point of diminishing returns, where further investment no longer corresponds to an increase in desired behaviors.
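The one-month lag described above can be tested with a few lines of analysis. The sketch below uses invented numbers in place of figure 1’s data set and assumes Python 3.10 or later for statistics.correlation:

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Hypothetical monthly digital broadcast hours (Jul-Jan), spiking in Sep and Dec,
# and actionable tips received, spiking one month later in Oct and Jan.
digital_hours = [40, 42, 95, 45, 44, 98, 46]
tips_received = [11, 12, 13, 30, 14, 15, 33]

# Correlate this month's broadcast hours with next month's tips (lag of one).
lag_one = correlation(digital_hours[:-1], tips_received[1:])
same_month = correlation(digital_hours, tips_received)
print(f"lag-1 correlation: {lag_one:.2f}, same-month: {same_month:.2f}")
# A strong lag-1 correlation and a weak same-month correlation would support
# the observation that digital investment drives tips the following month.
```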
The second hypothetical example deals with decision support regarding resources applied to BPC operations. For background, in USSOUTHCOM, Central and South American nations and specific units benefit from multiyear persistent engagements. SOCSO participates in partner nation engagements, forward-deploying elements for training in various countries. To validate training program effectiveness, SOCSO conducts tactical unit assessments, organized largely by warfighting function. These assessments offer excellent trend analysis of unit capability and capacity, and while the data are enormously valuable, they provide an understanding only of the training’s effectiveness. Without sufficient data and analysis, it is impossible to describe the ROI of U.S. Government OAIs in the region in real terms. Staff members therefore have little data or specific analysis on which to base a recommendation to the commander when choosing to shift from persistent to periodic engagement or recommending complete termination of the engagement. The lack of data places an unnecessary burden on the commander to rely on instinct or to avoid a decision, resulting in ongoing engagement far past the point of efficacy. The current assessment process fails to provide a holistic understanding of the resources invested, the activities conducted, and the real-world application of the capabilities made possible by U.S.-led training.
In the hypothetical scenario depicted by figure 2, SOCSO captured additional data about resources invested and then compared them against broader categories of improvements to partner capability and capacity. The resulting data indicate that although resources invested (number of U.S. personnel deployed, funds expended, and partner forces trained) and partner nation unit proficiency remained the same from August to February, the quantity of unilateral targeted raids sharply decreased from September to November and remained consistently low through February. Therefore, even though unit proficiency is of particular importance when assessing progress for partner nation units that enjoy a persistent, long-term engagement plan with U.S. forces, unit proficiency alone may be misleading. Based on this hypothetical example, SOCSO should consider shifting investments or changing to periodic engagements, as the current investment no longer produces as much return as it did in July and September.
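A simple return-per-investment ratio makes the shift visible month by month. The numbers below are hypothetical stand-ins for figure 2’s data, with investment held constant and raids falling off:

```python
# Hypothetical figure 2 data: constant investment, declining outcomes.
months = ["Aug", "Sep", "Oct", "Nov", "Dec", "Jan", "Feb"]
funds_millions = [1.2] * len(months)   # funds expended held constant
raids = [14, 13, 7, 4, 4, 3, 4]        # unilateral targeted raids per month

# Return per investment: raids achieved per $1 million expended.
for month, funds, raid_count in zip(months, funds_millions, raids):
    print(f"{month}: {raid_count / funds:.1f} raids per $1M invested")
# A sustained drop in this ratio flags diminishing returns and supports a
# recommendation to shift to periodic engagement or reallocate resources.
```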
Conclusion
As staffs face the fiscal realities of constrained military budgets and the scrutiny of reshaping OAIs to focus on near-peer adversaries, CCDRs must ensure that they are making the best possible investments in their campaign plans to posture themselves for success. Joint doctrine does not provide adequate guidance to the joint force on campaign plan assessments, leaving commands with a less than optimal understanding of the resulting impacts. Informed decisions in this regard require data and focused analysis, especially when dealing with a complex operating environment. The objective of data-driven ROI analysis is to provide the commander with a tailorable decision support matrix that guides resource commitment and enables optimization. Just as the 2017 National Security Strategy emphasized the importance of economics in Great Power competition, CCDRs are increasingly incorporating ROI into their lexicons; accordingly, USSOUTHCOM and USAFRICOM commanders now discuss activities and results in terms of investments and returns.
The use of ROI represents a shift in how the joint force measures results, forcing a reevaluation of the methods through which it conducts assessments. There is no doctrinal approach that guides ROI inclusion in the assessment process; however, current doctrine describes important considerations that inform those conclusions and recommendations. First, and perhaps most fundamental, assessment must begin with clear and measurable objectives. Joint doctrine describes theater-strategic and operational-level assessments as focused on effects, objectives, and progress toward the endstate.19 Therefore, absent measurable data sets, developed from the outset and aligned with clear objectives to drive the creation of reporting requirements, accurate assessment is not feasible. Throughout the process, staffs should note that CCMD campaign plans are the target of the assessment process; war and kinetic operations have existing methodologies that provide enemy assessments as part of an operation.
Next, the CCMD will need to refine its collection requirements to tailor the ROI analysis. CCMDs have at their disposal volumes of historical and current data as well as robust collection mechanisms that will need fine-tuning to gather the required data. The likely problem for most staffs will be data collection and management: identifying and handling the types of data necessary for ROI calculations. Once applied, data-driven analytics, in combination with commander and staff experience, will yield greater clarity for task organization and mission assignment decisions. In terms of decision support, data-centric analysis may provide the commander a useful tool for assessing progress in CCMD campaign plan TSC operations. Therefore, application of ROI principles through the collection of specified data for select problem sets is likely to provide CCMDs with tailored assessment data that will assist campaign assessment, prepare commands for the Secretary of Defense’s anticipated fiscal austerity measures, and maximize leverage of available resources. JFQ
Notes
1 Aaron Mehta and Joe Gould, “Night Court Comes to the Pentagon,” DefenseNews, August 28, 2019, available at <www.defensenews.com/pentagon/2019/08/28/night-court-comes-to-the-pentagon/>.
2 Ben Connable, Embracing the Fog of War: Assessment and Metrics in Counterinsurgency (Santa Monica, CA: RAND, 2012), 113.
3 Ibid., 131.
4 “Mark Twain Quotes,” Brainy Quote, available at <www.brainyquote.com/quotes/mark_twain_163414>.
5 Connable, Embracing the Fog of War, 95.
6 Joint Doctrine Note 1-15, “Operation Assessment,” The Joint Staff, January 15, 2015, i.
7 Joint Publication 5-0, Joint Planning (Washington, DC: The Joint Staff, June 16, 2017), VI-13, VI-20.
8 U.S. Government Accountability Office (GAO), Joint Exercise Program: DOD Needs to Take Steps to Improve the Quality of Funding Data, GAO-17-7 (Washington, DC: GAO, 2017), 10.
9 Chairman of the Joint Chiefs of Staff Instruction 5123.01H, Charter of the Joint Requirements Oversight Council (JROC) and Implementation of the Joint Capabilities Integration and Development System (JCIDS) (Washington, DC: The Joint Staff, August 31, 2018), D-17.
10 Posture Statement of Admiral Kurt W. Tidd, USN, Commander, U.S. Southern Command, Senate Armed Services Committee, 115th Congress, February 15, 2018; Statement of General Thomas D. Waldhauser, USMC, Commander, U.S. Africa Command, Senate Armed Services Committee, 116th Congress, February 7, 2019.
11 Tia Benjamin, “What Is a Return on Investment in Human Resources?,” Houston Chronicle, available at <http://smallbusiness.chron.com/return-investment-human-resources-45590.html>.
12 Edward A. Ravenscroft, “Return on Investment: Fit the Method to Your Need,” Harvard Business Review 38, no. 2 (March–April 1960), 1.
13 Judith A. Gebhardt, “Correlates of ‘Return on Investment’ and Organizational Factors: A Study of High and Low ROI Strategic Business Units and Intangibles” (Ph.D. diss., California School of Professional Psychology, 2001), xvi.
14 Angela O’Mahony et al., Assessing, Monitoring, and Evaluating Army Security Cooperation: A Framework for Implementation (Santa Monica, CA: RAND, 2018), available at <www.rand.org/pubs/research_reports/RR2165.html>.
15 Department of Defense Instruction 5132.14, Assessment, Monitoring, and Evaluation Policy for the Security Cooperation Enterprise (Washington, DC: Office of the Secretary of Defense, January 13, 2017), 12.
16 Ibid., 14.
17 Ibid., 15.
18 Christopher Telley, Special Operations Command South, email to John Romito, September 22, 2019.
19 Commander’s Handbook for Assessment Planning and Execution (Suffolk, VA: The Joint Staff, September 9, 2011), vii.