Sept. 14, 2017

Learning from the Struggle to Assess Counterinsurgency and Stabilization Operations in Afghanistan

By Nathan White | PRISM Volume 7, No. 1


American assessment practices proved to be inadequate for U.S.-led operations in Afghanistan. The type of conflict in which America and its allies would eventually find themselves engaged did not fit neatly within any of the primary civilian and military mission sets. U.S. military assessment practices are largely meant to support a traditional conventional war paradigm in which Joint Force combat overmatch and the defeat of a state adversary’s military forces are increasingly treated as the definitive factors in achieving victory. The assessment practices at U.S. civilian agencies, in particular the U.S. State Department and the U.S. Agency for International Development (USAID), are generally designed to measure success in activities, projects, and programming associated with their traditional missions, such as development, diplomacy, democracy promotion, human rights, and disaster relief.

As early as 2009, senior U.S. officials—civilian and military alike—recognized an urgent need for a significant change to U.S. assessment practices in Afghanistan. This article analyzes the 2009–11 period when American personnel worked to apply new assessment practices that were meant to be more suitable for the requirements of Afghanistan operations. Even though these new practices directly targeted the main aspects of the identified assessment gap, institutional deficiencies within U.S. Government (USG) organizations ensured that requirements for assessments in Afghanistan remained unmet. Even perfectly designed approaches and frameworks for assessment will continue to fall short in future operations if these institutional deficiencies are left unaddressed.

Background

Assessments in war serve two primary purposes: to measure progress and to inform adaptation. In so doing, they enable decisionmakers to make choices that increase the likelihood of achieving U.S. objectives.1 The associated planning, direction, collection, monitoring, and evaluation activities that make up assessments must measure performance, outcomes, and the status of the operational environment in relation to mission goals.2 Assessment activities are continuous, but assessments themselves are a snapshot in time and ideally represent the highest-quality analysis given all of the information collected, processed, and analyzed to date.3

A requirement for new assessment approaches in Afghanistan followed a shift in strategy led by General Stanley McChrystal, who expressed an urgent need for a significant change to the U.S.-led strategy and to the way the force thinks and operates. In his 2009 review of the Operation Enduring Freedom campaign, he wrote:

Success is achievable, but it will not be attained simply by trying harder or “doubling down” on the previous strategy. Additional resources are required, but focusing on force or resource requirements misses the point entirely…We must conduct classic counterinsurgency operations in an environment that is uniquely complex…Our strategy cannot be focused on seizing terrain or destroying insurgent forces; our objective must be the population…In the struggle to gain the support of the people, every action we take must enable this effort. The population also represents a powerful actor that can and must be leveraged in this complex system. Gaining their support will require a better understanding of the people’s choices and needs.4

Civilian and military leaders recognized that the shift to a counterinsurgency and stabilization approach in Afghanistan required changes to how the USG assessed its operations. For instance, in his guidance to U.S. civilian personnel in Afghanistan in August 2010, Ambassador Karl Eikenberry wrote: “Assess the impact of our efforts…Always emphasize effects, not just inputs and outputs.”5 In 2011, USAID Administrator Dr. Rajiv Shah also recognized the need to adapt assessment approaches:

While stability is a necessary precursor for our long-term development goals, stabilization programming often has different objectives, beneficiaries, modalities, and measurement tools than long-term development programming. Our training, planning, metrics, labeling, and communications efforts, among others, must reflect both the differences and the linkages.6

The USAID Stabilization Unit in Kabul appears to have taken Shah’s guidance to heart by 2012, when it stated that “monitoring, evaluating and assessing the impact of stabilization programs in a counterinsurgency context requires a mixture of creative, flexible, pragmatic, and contextual thought that extends beyond traditional monitoring and evaluation practices in terms of scope, approach, and methodology.”7

Reflecting on the missions in Iraq and Afghanistan, then Chairman of the Joint Chiefs of Staff General Martin Dempsey implied, in a January 2015 interview at the National Defense University, that perhaps one of the main requirements for assessments in counterinsurgency is the need to more thoroughly account for societal factors in the operational environment. He explained that in applying the military instrument against state actors, the military differentiates itself by “size and technology.” He contrasted this with operations in Iraq and Afghanistan:

…We were fighting an insurgency on behalf of Iraq and an insurgency on behalf of Afghanistan, simultaneously trying to restore their abilities to govern. In that kind of conflict, the use of military [forces] against nonstate actors, I think size and technology matter, but what matters more is the rate at which we innovate…The rate of innovation becomes a better predictor of success than the force management level, for example. Size matters, but the rate at which we can innovate, adapt, and respond to changes in the environment matters more…You have to understand the factors that would cause you to need to innovate, and they largely reside in societal factors.8

If societal factors were more important to understand in such conflicts, then assessment practices would have to account for those factors as well as more traditional metrics.

The Assessment Gap

The shifting nature of assessment requirements for counterinsurgency and stabilization operations in Afghanistan highlighted a gap in U.S. capabilities—assessments failed to account for the many nuances, in time and space, of the complex counterinsurgency environment.9 Specifically, they did not account for relevant aspects of the operational environment; neglected to facilitate a common operating picture among U.S. and allied organizations; overlooked what mattered for the campaign at hand; failed to provide useful information for measuring progress toward mission objectives; and did not inform the identification of opportunities for adaptation. Several mutually reinforcing deficiencies contributed to this.

Requests for Information (RFIs) Overwhelmed Field Personnel

The massive number of information requirements, many of which were irrelevant to the mission at hand, took a toll on field personnel—operators and analysts alike—who were already busy completing counterinsurgency and stabilization tasks in the field. These personnel often became overwhelmed with RFIs from higher headquarters, which led to poor information and analysis produced for the sake of speed. In many cases, personnel made up results to “satisfy the beast.” And in some cases, the requirements simply were never met.10

Organizations Employed Unique Methodologies

Organizations often employed their own unique methodologies based on different conceptualizations of progress, often simultaneously in the same area. Assessment approaches varied significantly with each International Security Assistance Force (ISAF) commander and among different organizations such as Congress and the National Security Council.11 This caused great confusion and taxed limited analytic resources, which hurt assessment quality.12

Assessments Often Took a Centralized, Top–Down Approach

It was not uncommon for national-level metrics to be mistakenly assumed to be relevant for the entire country, to the detriment of area-specific nuances.13 The USAID Stabilization Unit in Kabul concluded in 2012 that “While past efforts to provide quantified and scientifically rigorous measures of stabilization impact have met with some success, a more data-rich and geographically detailed approach is necessary to systematize our understanding of stabilization in the context of Afghanistan.”14

Operationally Relevant Factors Were Overlooked

Security metrics such as troops trained, numbers of significant activities (SIGACTs), and numbers of improvised explosive device (IED) incidents are examples of the kind of blanket metrics that were commonly collected across the country. For development, roads built, children educated, and the number of people provided healthcare were often measured. On the governance side, government posts filled, government officials trained, and the number of people who voted in an election were all considered acceptable metrics. Although all three sets of metrics hypothetically could prove useful, their relevance and significance are not uniform across the operational environment. For instance, what if SIGACT and IED numbers went down because insurgents had moved out of a given area of their own accord, or because U.S. forces had shifted their patrolling to areas where militants were not present? Without context, the drop in numbers of SIGACTs and IED incidents could lead to false conclusions about security progress and result in the misallocation of resources and manpower.
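The point can be made concrete with a minimal sketch. The code below is purely illustrative and assumes hypothetical field names and thresholds (it is not an ISAF or USAID data model); it shows how a raw decline in violence metrics can be qualified by the contextual information the article argues was often missing, such as patrol coverage and independent judgments of insurgent presence.

```python
# Illustrative sketch only: hypothetical fields and thresholds, not an actual
# ISAF or USAID data model. It flags when a drop in SIGACT/IED counts should
# not be read as progress without further context.

from dataclasses import dataclass

@dataclass
class DistrictQuarter:
    sigacts: int                       # significant activities reported this quarter
    ied_incidents: int                 # IED incidents reported this quarter
    patrols: int                       # coalition patrols conducted (proxy for observation effort)
    insurgents_assessed_present: bool  # intelligence judgment, independent of SIGACT counts

def interpret_security_trend(prev: DistrictQuarter, curr: DistrictQuarter) -> str:
    """Return a hedged reading of a change in violence metrics."""
    violence_drop = (curr.sigacts + curr.ied_incidents) < (prev.sigacts + prev.ied_incidents)
    if not violence_drop:
        return "No decline in reported violence."
    # A decline observed while patrolling also declined may simply mean less observation.
    if curr.patrols < 0.7 * prev.patrols:
        return "Decline coincides with reduced patrolling; treat as inconclusive."
    # A decline after insurgents are assessed to have left says little about coalition progress.
    if prev.insurgents_assessed_present and not curr.insurgents_assessed_present:
        return "Decline coincides with insurgent departure; do not credit operations alone."
    return "Decline with stable observation and continued insurgent presence; candidate evidence of progress."

if __name__ == "__main__":
    q1 = DistrictQuarter(sigacts=40, ied_incidents=12, patrols=300, insurgents_assessed_present=True)
    q2 = DistrictQuarter(sigacts=15, ied_incidents=4, patrols=120, insurgents_assessed_present=True)
    print(interpret_security_trend(q1, q2))  # flags the reduced-patrolling caveat
```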

Underlying Drivers of Conflict Were Overlooked

Assessments failed to adequately account for the underlying drivers of conflict—a crucial step for identifying what to measure and for assessing progress. In 2010, a report from Wilton Park Conference 1022 concluded that “There is an urgent need to ensure that the new ‘population centric’ counterinsurgency strategy is evidence based, and does not continue to uncritically assume that development aid ‘wins hearts and minds’ and/or promotes stability. Priority should be given to assessing stabilization effects of projects, rather than assuming impact based on amounts of money spent or the number of projects implemented.” The report continues, “Greater emphasis should also be given to understanding drivers of conflict, as aid projects can only be effective in promoting stability objectives if they are effectively addressing the main causes of instability.”15

Assessments Were Overly Focused on the Actions of the United States and Its Allies

In any given operational area, much that impacts mission progress occurs independently of U.S. and allied activities. U.S. assessments often failed to consider these developments because they focused on the performance and impact of U.S. and allied activities, neglecting other changes in the operational environment that could affect the mission. For example, assessments did not always account for factors like local political disputes or intra-tribal conflicts that altered the stability of a given area but were not directly related to any particular action by the U.S.-led coalition.16

Emphasis Was Placed on Inputs, Not Outcomes

Assessments focused primarily on measuring inputs rather than outcomes, a tendency often characterized as measuring performance rather than effectiveness or impact. Common measures of performance used in recent conflicts include money spent, projects completed to standard, programs ongoing, adversaries captured or killed, troops trained to standard, and successful management of a development budget. Measures of impact are different: examples include the effect of various projects, programs, kinetic actions, and other initiatives on the calculus of locals regarding whether or not to support an insurgency, or the willingness and capability of a U.S.-trained indigenous military to combat an insurgency effectively.

A Lack of Critical Thinking and Structured Analysis Led to Flawed Findings

Assessments lacked sophistication and often proceeded from flawed assumptions.17 The practice of color coding areas on a map according to perceived levels of stability—a.k.a. “coloring book assessments”—hampered understanding of the nuanced counterinsurgency environment and led to findings that were incomplete, inaccurate, or both.18 Assessments also prioritized demonstrating linear progress on various issues as the primary indicator of success. As William Upshur, Jonathan Roginski, and David Kilcullen observe of their time conducting assessments in Afghanistan:

Even in gathering and analyzing all [of] the data within reach, assessment cells generally put too little energy into information design. Operational assessments are usually presented on a linear scale with a marker to represent progression from left to right, or from ‘very bad’ to ‘very good.’ Yet with near universal agreement on the complexity of counterinsurgency, and conflict environments in general, it would be difficult to find anyone who thinks that linear visualizations actually describe changes in the environment in an operationally useful way.19

Lack of Critical Thinking Also Allowed Room for Politicization and Other Contamination

Metrics that policymakers viewed as important for justifying the expense of blood and treasure often took precedence over indicators that were relevant to progress in the counterinsurgency and stabilization mission. Similarly, commanders in the field were under tremendous pressure to ensure their assessments showed results against their organizations’ preconceived success metrics, rather than progress on the metrics actually required for mission success.

New Approaches to Assessment in Afghanistan

In response to the shift to a counterinsurgency and stabilization approach for Afghanistan in 2009, civilian and military organizations adopted (to varying degrees) new assessment approaches that were meant to address the various components of the assessment gap.20

Measuring Progress in Conflict Environments (MPICE)

This framework was first utilized in the field in 2007 in support of the Haiti Stabilization Initiative. Initially developed on the heels of a series of workshops held from 2004–05 by the United States Institute of Peace (USIP) and the Center for Strategic and International Studies, MPICE by 2010 had gained the attention of some U.S. officials, including several working with the U.S. Army Corps of Engineers on Afghanistan operations.21 MPICE is underpinned by the belief that there are three objective states with regard to conflict: imposed stability, assisted stability, and self-sustaining peace. It measures “the drivers of violent conflict against the ability of indigenous institutions to resolve conflict peacefully.”22 “Institutional performance includes the formal institutions of government and informal societal practices.”23 The framework assesses five predetermined factors that are deemed by USIP’s “Framework for Societies Emerging from Conflict” to be essential in conflict resolution: safe and secure environment; political moderation and stable governance; rule of law; sustainable economy; and social well-being.24 The measures are then adapted to “the specific policy goals, conflict dynamics, and cultural peculiarities relevant to each conflict setting.”25
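The logic can be sketched in a few lines. The following is a deliberately simplified illustration of the MPICE idea described above, not the actual MPICE instrument: each of the five sectors is scored for the strength of conflict drivers and for indigenous institutional capacity, and the balance between the two suggests which of the three states a sector approximates. Scales, thresholds, and scores are hypothetical.

```python
# Simplified, illustrative rendering of the MPICE logic; scores and thresholds
# are hypothetical, not drawn from the actual framework.

MPICE_SECTORS = [
    "safe and secure environment",
    "political moderation and stable governance",
    "rule of law",
    "sustainable economy",
    "social well-being",
]

def characterize(driver_strength: float, institutional_capacity: float) -> str:
    """Both inputs on a notional 0-10 scale."""
    if institutional_capacity >= driver_strength + 3:
        return "approaching self-sustaining peace"
    if institutional_capacity >= driver_strength:
        return "assisted stability"
    return "imposed stability (external actors still suppressing conflict drivers)"

# Hypothetical scores for one district, by sector: (driver strength, institutional capacity)
scores = {
    "safe and secure environment": (7.0, 4.0),
    "political moderation and stable governance": (5.0, 5.5),
    "rule of law": (6.0, 3.0),
    "sustainable economy": (4.0, 4.5),
    "social well-being": (3.0, 6.5),
}

for sector in MPICE_SECTORS:
    drivers, capacity = scores[sector]
    print(f"{sector}: {characterize(drivers, capacity)}")
```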

Interagency Conflict Assessment Framework (ICAF)

Military and civilian personnel in Afghanistan have utilized the ICAF to a limited degree since it was developed in 2008. This tool enables interagency teams to assess conflict situations systematically and collaboratively, and to plan for conflict prevention, mitigation, and stabilization.26 The ICAF comprises two overarching processes—diagnosis and planning—with four steps involved in diagnosis:

  • evaluate the context of the conflict;
  • understand core grievances and social/institutional resilience;
  • identify drivers of conflict and mitigating factors; and
  • describe opportunities for increasing or decreasing conflict.

The planning process is less defined and largely situation specific, but is meant to ensure the diagnosis informs planning.27 If focused on the same geographic area, ICAF assessments eventually will highlight changes in the environment, new challenges that have emerged, and other information that can be used to determine progress and inform refinement and adaptation of U.S. approaches.28
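A minimal sketch, assuming nothing beyond the four diagnostic steps listed above, illustrates how successive ICAF-style diagnoses of the same area might be captured and compared so that new drivers or grievances stand out. The field names and diff logic are hypothetical conveniences, not part of the ICAF itself.

```python
# Illustrative only: a structured record for an ICAF-style diagnosis so that
# repeat diagnoses of the same area can be compared. Field names are assumed.

from dataclasses import dataclass, field
from typing import List

@dataclass
class IcafDiagnosis:
    area: str
    date: str
    context: str                                                 # step 1: context of the conflict
    core_grievances: List[str] = field(default_factory=list)     # step 2
    resiliencies: List[str] = field(default_factory=list)        # step 2
    conflict_drivers: List[str] = field(default_factory=list)    # step 3
    mitigating_factors: List[str] = field(default_factory=list)  # step 3
    opportunities: List[str] = field(default_factory=list)       # step 4

def new_drivers(earlier: IcafDiagnosis, later: IcafDiagnosis) -> List[str]:
    """Drivers present in the later diagnosis but absent from the earlier one."""
    return [d for d in later.conflict_drivers if d not in earlier.conflict_drivers]

# Usage: two diagnoses of the same (hypothetical) district a year apart.
d2010 = IcafDiagnosis("District X", "2010-06", "post-clearance hold phase",
                      conflict_drivers=["land disputes", "predatory policing"])
d2011 = IcafDiagnosis("District X", "2011-06", "transition to local governance",
                      conflict_drivers=["land disputes", "exclusion of one tribe from district posts"])
print(new_drivers(d2010, d2011))  # -> ['exclusion of one tribe from district posts']
```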

Tactical Conflict Assessment and Planning Framework (TCAPF) and the District Stability Framework (DSF)

TCAPF and its successor, DSF, were the two new frameworks employed most in Afghanistan.29 USAID first tested TCAPF in the Horn of Africa in 2006. It was employed by British and American forces in Afghanistan, starting in Helmand Province in 2009, to help facilitate more effective development programming and reduce the multitude of programs that had limited impact. The process consisted of asking four questions:30

  • Have there been changes in the village population in the last year?
  • What are the most important problems facing the village?
  • Who do you believe can solve your problems?
  • What should be done first to help the village?31

These questions were intended to yield information that could then be analyzed to provide enhanced understanding of relevant aspects of the operational environment, with an emphasis on identifying sources of stability and especially instability.32 Additionally, the process was meant to help measure and understand progress and whether programming was achieving the desired impact on the operational environment; if not, it was meant to inform adaptation of operational and tactical approaches to achieve that desired endstate.33
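An illustrative sketch, not the fielded TCAPF/DSF tooling, shows how responses to the four questions (each followed by a “why” question, per the training described in the notes) might be tallied across respondents so that recurring perceived problems and preferred problem-solvers stand out. The data, coding categories, and field names are hypothetical.

```python
# Illustrative sketch of tallying coded TCAPF/DSF-style interview responses.
# Categories and data are hypothetical, invented for the example.

from collections import Counter

TCAPF_QUESTIONS = (
    "Have there been changes in the village population in the last year?",
    "What are the most important problems facing the village?",
    "Who do you believe can solve your problems?",
    "What should be done first to help the village?",
)

# Each record is one villager interview, coded after the fact by an analyst.
responses = [
    {"problems": ["no irrigation water", "night raids"], "solver": "village shura", "first_step": "repair canal"},
    {"problems": ["no irrigation water", "corrupt police"], "solver": "district governor", "first_step": "repair canal"},
    {"problems": ["corrupt police"], "solver": "village shura", "first_step": "replace police chief"},
]

problem_counts = Counter(p for r in responses for p in r["problems"])
solver_counts = Counter(r["solver"] for r in responses)

print("Most cited problems:", problem_counts.most_common(2))
print("Preferred problem-solvers:", solver_counts.most_common(2))
# The recurring pattern (e.g., irrigation and police predation cited most, the
# shura trusted most) would then drive programming choices and, on re-survey,
# serve as a baseline for measuring progress.
```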

Region South Stabilization Approach (RSSA)

RSSA emerged in 2010 in Afghanistan, based on a recognition that a process was needed to integrate civilian and military planning, develop a common operating picture, and establish an interagency system for monitoring progress in Regional Command–South.34 The process identifies a “stability continuum” to assist with planning for the allocation of security, development, and governance assets across participating agencies.35

The Lingering Assessment Gap

Despite the many efforts to improve, by 2012 the assessment gap was far from resolved. A civilian advisor to ISAF Joint Command (IJC) captured this best in a 2012 email to the author regarding U.S. assessment practices in Afghanistan:

I’m still out in Afghanistan, now at IJC for a week before heading out. [I] was in a discussion with a [senior officer]36 here about stability vs. instability, how we measure it, are we looking at things the right way, etc. My inclination is to say no.37

That year, a USAID study on the impact of its efforts in Afghanistan found that TCAPF/DSF had failed to resolve the assessment problem.38 Another USAID document reported that, as of 2013, the Agency was introducing an entirely new assessment methodology known as the Stability Analysis Methodology, which again seemed to target the same five aspects of the assessment gap.39

Institutional Constraints

These examples highlight a lingering problem and raise the question of why methodologies designed to target the various components of the assessment gap were not successful. To investigate, the research for this article looked in greater depth at the most commonly utilized (and probably the best-known) assessment approach—TCAPF/DSF—and traced the history of the approach in Afghanistan from its initial implementation. A less rigorous review of the experience with implementing the other frameworks was also conducted.

In reviewing the evolution of the TCAPF/DSF, this author found that thousands of American, allied, and indigenous personnel were trained to use the framework. In spite of many complaints about the observer effect of the interview-based survey approach employed for TCAPF/DSF and the complexity of the data management process, many civilian and military personnel reported that the approach was useful for counterinsurgency and stabilization in Afghanistan.40 A more cursory review of the other assessment approaches revealed similar reporting about their utility.

Yet the research also found that none of the new frameworks utilized in Afghanistan were actually implemented as designed. This was less the result of any particular issue with the framework methodologies themselves than of institutional constraints within the USG that prevented the frameworks from being utilized properly.

Ambiguous Mission, Strategy, and Desired Endstate

The U.S.-led civilian and military force that was fielded in Afghanistan never achieved clarity of purpose.41 Thus, although some found TCAPF/DSF and other frameworks useful, the frameworks’ utility was questionable from the start because they were used to measure progress against each user’s own interpretation of the mission, strategy, and endstate, which of course differed from user to user. When the mission, strategy, and endstate are unclear, even the best assessment processes have no chance of achieving their purpose of measuring progress and informing adaptation. In such cases, it is unclear what progress is being measured toward, and what intermediate objectives and endstate the assessment information should inform adaptation to achieve.

Insufficient Conceptualization of Strategy

Stability requires a strategic approach that integrates the lethal and nonlethal tools of state power to influence relevant actors to behave in a manner that contributes (actively or passively) to stability. Yet the U.S. approach to strategy is frequently divorced from human decisionmaking and behavior.42 The United States approached strategy in Afghanistan as numerous lines of effort (e.g., security, governance, and development, often with subsets within each of these three lines), as if they were separate stovepipes.43 Technocratic objectives and metrics for success within the lines of effort tended to become the focus, rather than the goal of shaping relevant actor behavior in a manner that achieved mission success.44 Assistant to the President for National Security Affairs Lieutenant General H.R. McMaster refers to this assessment deficiency as “the confusion of activity with progress.”45 All of the frameworks listed are meant to assess the environment, better understand drivers of instability, and inform the adaptation of operations to address them and achieve more stabilizing behavior among relevant actors. For the new frameworks to have succeeded, the USG would have needed to conceptualize strategy in a manner relevant to influencing human behavior in accordance with U.S. objectives.46

Multiple Chains of Command

A lack of unity of command and especially unity of action meant that even when one part of the whole-of-government force in the field embraced a framework, it was very common for others operating in the same area not to.47 Without a single chain of command, multiple frameworks were often utilized alongside one another. There were usually no mechanisms in place by which the interagency force would be mandated to align and develop a common operating picture and a shared concept of how best to plan and execute the mission, measure progress, and adapt the force for enhanced success.

Continuity of Effort

Turnover among units and individuals challenged even those few frameworks that gained traction among multiple USG organizations, as was the case in Regional Command–East.48 Assessment approaches changed drastically across the tenures of the various ISAF commanders. Similarly, TCAPF/DSF went from being the primary framework for tactical assessment from 2009 to 2011 to being far less utilized by follow-on units. Even when TCAPF/DSF was integrated into General David Petraeus’ counterinsurgency qualification standards, inconsistency still occurred.

Knowledge Management

Finally, there was no strategy for optimizing the utility of the information and making sure that all who could benefit received it in a usable form suited to their purposes; various organizations purchased their own information technology support packages for knowledge management.49 Compounding this, limited attention was given to how deployed personnel and their replacements could maintain and update assessments for others to access.50 No catalogue of previous assessments existed, which made it difficult for newly serving policymakers, operators, and analysts to understand the evolution of the campaign and led to further unnecessary taxing of the RFI system.

Conclusion

The assessment gap that persisted in Afghanistan had little to do with deficiencies in the new approaches that were attempted. Rather, the gap persisted as a result of institutional barriers within the U.S. national security system that prevented the implementation of the new approaches as designed. Success in the assessment of counterinsurgency and stabilization missions requires more than sound assessment approaches and methodologies; these must be accompanied by a plan for implementing them as designed so that they achieve their purpose. These challenges persist within the USG today. Unless they are addressed, even the most promising new assessment approaches will continue to fall short of their potential to improve mission effectiveness. PRISM

 

Notes

1 See Jason Campbell, Michael O’Hanlon, and Jacob Shapiro, “How to Measure the War,” Policy Review 157 (2009), 15–30; and Stephen Downes-Martin, “Operations Assessment in Afghanistan is Broken: What Is to Be Done?,” Naval War College Review 64, no. 4 (Autumn 2011). Also, fitting within the context of the two overarching purposes, Dr. Jonathan Schroden explains that assessments “inform commanders’ decisionmaking (e.g., on resource allocation); completing the planning or design cycle (i.e., “observe plan-execute-assess,” or “design-learn-redesign”); recognizing changing conditions in the environment; stimulating and informing adaptation and innovation; reducing uncertainty and bounding risk; showing causal linkages between actions and the achievement of objectives; documenting the commander’s decision-making process; and evaluating performance of subordinate units.” See Jonathan Schroden, “Why Operations Assessments Fail: It’s Not Just the Metrics,” Naval War College Review 64, no. 4 (Autumn 2011). For further analysis of U.S. assessment practices in post–9/11 operations, see Frank Hoffman and Alex Crowther, “Strategic Assessment and Adaptation: The Surges in Iraq and Afghanistan,” in Lessons Encountered: Learning from the Long War, edited by Richard Hooker and Joseph Collins (Washington, D.C.: NDU Press, September 2015), 89–164, available at <http://ndupress.ndu.edu/Portals/68/Documents/Books/lessons-encountered/lessons-encountered.pdf>.

2 This third factor is critical, as much occurs in the operational environment that is not the direct result of U.S. actions. An assessment process that only focuses on U.S. performance and the outcomes of U.S. activities risks missing other important developments in the environment that could potentially impact (positively or negatively) the success of the mission.

3 Although outlined in Joint Doctrine in 2013, these conclusions were drawn at a November 2011 conference the author observed in Tampa, hosted by U.S. Central Command.

4 General Stanley McChrystal, “COMISAF Initial Assessment,” September 21, 2009, available at <http://www.washingtonpost.com/wp-dyn/content/article/2009/09/21/AR2009092100110.html>.

5 Ambassador Eikenberry’s Strategic Guidance to U.S. Civilian Personnel in Afghanistan, August 30, 2010.

6 Rajiv Shah, USAID Administrator’s Stabilization Guidance, January 2011, available at <http://pdf.usaid.gov/pdf_docs/PDACQ822.pdf>.

7 USAID Assessment Contract from the USAID Stabilization Unit in Kabul, March 2012, available at <https://www.usaid.gov/sites/default/files/documents/1871/AID-306-TO-12-00004%20MISTI%20Task%20Order-redacted.pdf>.

8 Chairman Martin Dempsey, interview conducted by Richard D. Hooker, Jr., and Joseph J. Collins, January 7, 2015, published in Joint Force Quarterly 78 (July 2015), available at <http://ndupress.ndu.edu/Media/News/NewsArticleView/tabid/7849/Article/607296/jfq-78-from-the-chairman-an-interview-with-martin-e-dempsey.aspx>.

9 For additional information on the assessment gap, see statements by former CENTCOM planner John Agoglia and the Deputy Assistant Secretary of Defense for Strategy made during the “Framework for Success in War-Torn Societies Workshop” hosted by the United States Institute of Peace, 2010, available at <http://www.c-span.org/video/?294439-1/measuring-progress-stabilizing-wartorn-societies&start=3182>. Also see Ben Connable, “Embracing the Fog of War: Assessment and Metrics in Counterinsurgency” (Washington, DC: RAND, 2012), available at <http://www.rand.org/content/dam/rand/pubs/monographs/2012/RAND_MG1086.pdf>.

10 Interviews conducted by the author in 2011 at the Kandahar Provincial Reconstruction Team (PRT) showed that the staff was overwhelmed by requirements for information to feed the assessment process. See also Ben Connable, “Embracing the Fog of War.”

11 Dr. Kristian Knus Larsen, Substitute for Victory: Performance Measurements in Vietnam, the Gulf War, and Afghanistan (Københavns Universitet, Institut for Statskundskab, 2014).

12 Finding derived from analysis of interviews conducted by the author in 2011 at the Kandahar PRT.

13 One reviewer of this article observed that assessments may have been “reflective of the centralized, top–down nature of the U.S. campaign and the centralized, top–down nature of the solution foisted on Afghanistan,” and that perhaps then “assessments were a reflection of the overall U.S. approach to the war.” See also Ben Connable, “Embracing the Fog of War”; and Jan Frelin and Anders Noren, “Recent Developments in Evaluation & Conflict Analysis Tools for Understanding Complex Conflicts” (Swedish Defence Research Agency, May 2012).

14 USAID Assessment Contract from the USAID Stabilization Unit in Kabul, March 2012, available at <https://www.usaid.gov/sites/default/files/documents/1871/AID-306-TO-12-00004%20MISTI%20Task%20Order-redacted.pdf>.

15 Report from Wilton Park Conference WP1022, “Winning ‘Hearts and Minds’ in Afghanistan: Assessing the Effectiveness of Development Aid in Counterinsurgency Operations,” March 11–14, 2010, available at <https://www.wiltonpark.org.uk/wp-content/uploads/wp1022-report.pdf>. The paper goes on to assert that the replacement of the international community’s “enemy centric” approach with a “population centric” military strategy emphasizes the need for a sober assessment of what motivates people to rebel, and a deliberate incorporation of these observations into the design of a more effective strategy that addresses the underlying causes of unrest.

16 Author discussion with an assessment practitioner at a 2015 development workshop for the Joint Concept for Human Aspects of Military Operations, McLean, Virginia.

17 William Upshur, Jonathan Roginski, and David Kilcullen, “Recognizing Symptoms in Afghanistan: Lessons Learned and New Approaches to Operational Assessments,” PRISM 3, no. 3 (2012), available at <http://www.ndu.edu/press/lib/pdf/prism3-3/prism87-104_upshur-roginski-kilcullen.pdf>. See also Ben Connable, “Embracing the Fog of War.”

18 William Upshur, Jonathan Roginski, and David Kilcullen, “Recognizing Symptoms in Afghanistan.” See also Ben Connable, “Embracing the Fog of War.”

19 William Upshur, Jonathan Roginski, and David Kilcullen, “Recognizing Symptoms in Afghanistan.” To fix the problem, several have proposed an alternative mental model based on theories of complex adaptive systems. See Cynthia Irmer, “A Systems Approach and the Interagency Conflict Assessment Framework (ICAF),” Cornwallis Group, 2009.

20 A proliferation of new approaches occurred. Interestingly, although many attempts at developing assessment practices specifically for Afghanistan did take place, several of the most commonly utilized new approaches were not developed specifically for the Afghanistan mission. Instead, they were based on a broader perceived gap in USG assessment practices for stabilization missions.

21 MPICE was initially developed on the heels of a series of workshops held from 2004–05 by USIP and the Center for Strategic and International Studies, in response to a perceived gap in U.S. interagency capability to measure the outcomes of various efforts in conflict environments. The working groups had a goal to “define the major requirements and make recommendations for those who strive to measure progress in war-torn, weak, and failed states.” This process eventually led to a report that called for a framework to “measure progress toward reducing the means and motivations for violent conflict and building local capacity to resolve conflict peacefully.” See Craig Cohen, “Measuring Progress in Stabilization and Reconstruction,” USIP Special Report (2005), available at <https://www.usip.org/publications/2006/03/measuring-progress-stabilization-and-reconstruction>. The first use of MPICE in the field came in Haiti starting in 2007 to support the Haiti Stabilization Initiative. See David C. Becker and Robert Grossman-Vermas, “Metrics for the Haiti Stabilization Initiative,” PRISM 2, no. 2, 145–58. For information on the usage of MPICE by the U.S. Army Corps of Engineers, see John Agoglia, Michael Dziedzic, and Barbara Sotirin, eds., Measuring Progress in Conflict Environments (MPICE): A Metrics Framework (Washington, D.C.: USIP, 2010), available at <http://www.dtic.mil/dtic/tr/fulltext/u2/a530245.pdf>. At the time, USACE was becoming increasingly concerned that it could not adequately explain whether or not the projects in which it was engaged were having their desired effect on the outcome of the ISAF mission.

22 Craig Cohen, “Measuring Progress in Stabilization and Reconstruction,” USIP Special Report, 2005.

23 John Agoglia, Michael Dziedzic, Barbara Sotirin, Measuring Progress in Conflict Environments (MPICE).

24 Ibid.

25 Ibid.

26 The Interagency Conflict Assessment Framework, U.S. State Department document (2008), available at <http://www.state.gov/documents/organization/187786.pdf>.

27 Ibid. Note that the U.S. State Department describes the purpose of ICAF as “to develop a commonly held understanding, across relevant USG Departments and Agencies of the dynamics driving and mitigating violent conflict within a country that informs US policy and planning decisions. It may also include steps to establish a strategic baseline against which USG engagement can be evaluated.” It utilizes a process by which an interagency team will identify societal and situational dynamics that are shown to increase or decrease the likelihood of violent conflict.

28 Dr. Cynthia Irmer is credited with the eventual ICAF framework that resulted. Outlining the logic behind ICAF, she describes a requirement to shift the mindset of interagency teams from “using linear problem/solution metaphors such as ‘lanes of operation,’ ‘clear, hold, build,’ and ‘unity of command/unity of effort’” toward a “complex, adaptive system frame” rooted in a better understanding of the system itself, as opposed to being based on the previous “experience or knowledge of US civilians or military personnel.” The result, she says, is an “improved, because better informed and better thought-through, collective understanding of the situation and basis for going forward with coordinated whole-of-government strategic planning or individual agency program design.”

29 Most consider DSF to be TCAPF “rebranded” and, for the purposes of this article, they are therefore referred to as one. An interviewee knowledgeable about the TCAPF and DSF development process recalls that in January 2010, USAID and the Counterinsurgency Training Academy in Kabul revised the TCAPF to create the DSF. The revision kept the underlying model and retained the survey methodology. However, DSF also emphasized the importance of other sources of information regarding sources of instability, and the use of an “integrated Tactical Stability Matrix” (TSM) for the development of (primarily non-kinetic) lines of effort to address sources of instability. DSF also placed added emphasis on the importance of standing up a stability working group, which was to include all relevant interagency partners, as well as indigenous stakeholders, for enhanced coordination.

30 2011 interview with an individual knowledgeable about the TCAPF and DSF development process. For more information on TCAPF/DSF, also see Robert Swope, “Improving the Effectiveness of the U.S. Agency for International Development (USAID) District Stability Framework (DSF) in Afghanistan,” December 14, 2012.

31 Military Operations Research Society (MORS) Annual Symposium, DSF Briefing under Chatham House Rule, 2011. TCAPF/DSF training emphasized that all four questions should be followed by questions of “why.” In a 2011 interview, the interviewee explained the utility of TCAPF/DSF. He said that the military would see Taliban cells clustered in a particular area and would approach this problem by trying to identify how best to remove the cells. Using TCAPF/DSF, he explained, a stabilization approach “would seek to understand what was normal for a given area and what led to/enabled the Taliban cells to be present in the area. By focusing on causes rather than symptoms and devising solutions that addressed these underlying issues in a manner that made sense for the locals in question, sustainable long-term fixes were more likely to be developed.”

32 See Jason Alexander and James Derleth, “Stability Operations: From Policy to Practice,” PRISM 2, no. 3 (2012). Derleth and Alexander explain, “To stabilize an area, two simultaneous processes must occur. First, the sources of instability must be identified and mitigated. Second, societal and/or governmental capability and capacity to mitigate future sources of instability must be fostered. Simply stated, practitioners must diminish the sources of instability while building up the forces of stability. This process is the underlying foundation of the District Stabilization Framework.”

33 MORS Annual Symposium, DSF Briefing under Chatham House Rule, 2011. Also see USAID Statement of Work for USAID Task Order AID-306-TO-12-00004, Management Systems International. Additionally, in a 2011 interview with an individual knowledgeable about the TCAPF and DSF development process, the interviewee explained that the foundational concepts behind TCAPF/DSF were greatly informed by Collier’s work on stabilization versus development.

34 Interviews with members of the District Stability Coordination office at the Kandahar PRT in 2011. Similar to other frameworks, RSSA was not embraced by all partners in all parts of the area of operation. Some also criticized it for the lack of granularity and detail. Also note that an interviewee from a mobile training team for TCAPF/DSF in 2011 shared that some have argued that TCAPF/DSF would have been a natural complement to RSSA in the field, given the highly localized focus of the TCAPF/DSF approach. They argued that TCAPF/DSF was better for capturing overarching individual district assessments nested within the RSSA and that RSSA assessments lacked the data and structure facilitated by TCAPF/DSF at the district level and below.

35 Interviews with members of the District Stability Coordination office at the Kandahar PRT in 2011.

36 Restated to preserve anonymity.

37 Email exchange between the Author and an IJC civilian advisor. Author has been authorized by the advisor to use information without attribution by name.

38 USAID Stabilization Unit Contract.

39 “The Stability Analysis Methodology in Afghanistan: An Evaluation of Best Practices and a Recommended Method,” Caerus Associates, LLC, October 6, 2013, available at <http://pdf.usaid.gov/pdf_docs/PA00JN7D.pdf>. Note that the SAM was really a consolidation of different assessment approaches for all fixed regional command areas of operation.

40 This observation was drawn primarily from a review of 22 after action reports on TCAPF/DSF implementation in the field provided to the author by a representative from the U.S. Department of State in an email, 2013.

41 This finding is based on an analysis of more than 80 interviews with civilian and military personnel returning from Afghanistan, as well as an estimated 20 interviews conducted by the author in Kandahar and Kabul in 2011. Note also that there was no single conceptualization in the field of the mission, strategy, and desired endstate of the Afghanistan campaign, and that views among USG personnel and their allied and indigenous partners were often very different and even contradictory. For an example of lack of clarity of mission and strategy in Afghanistan, see a 2015 interview with General Stanley McChrystal, USA (Ret.), conducted by Joseph Collins, Frank Hoffman, and this author.

42 Kilcullen observes that counterinsurgent forces have a “tendency to judge success based on progress in creating top-down, state-based institutions, while reposing less value and significance in bottom-up societal indicators.” He goes on to note that analysts tend to “give greater weight to events at the national level, or to elite-level political maneuvering, than to events at the grassroots, civil society level.” David Kilcullen, “Intelligence,” in Understanding Counterinsurgency: Doctrine, Operations and Challenges, ed. Thomas Rid and Thomas Keaney (New York: Routledge, 2010). Kilcullen was speaking mainly about intelligence services, but he explains, “This pathology may not be confined to intelligence services. Rather, it seems to reflect wider Eurocentric attitudes to the process of state formation. Recent research suggests that the international community, including the vast international aid and development bureaucracy and the ‘peace industry’ associated with international organizations such as the United Nations and the International Monetary Fund, tends to have a strong preference for top-down state formation (‘nation building’) based on the creation of national-level, “modern,” Western-style institutions of the central state.” The failure to properly address human aspects has more recently been framed as a neglect of the human domain of war or the human aspects of military operations. For instance, Frank Hoffman and T.X. Hammes both note that the U.S. tends to “overlook human factors” in war and warfare, observing that the tendency “has a long history, and reflects a tension in American strategic culture which values science, technology, and logistics over other strategic dimensions.” They observe that the U.S. often emphasizes “technologically-produced solutions to what are inherently political challenges that can only be resolved in the minds and will of the social community that is challenged.” F.G. Hoffman and T.X. Hammes, “Joint Force 2020 and Human Dynamics: Time for a New Conceptual Framework?,” Institute for National Strategic Studies Research Paper (Washington, D.C.: NDU Press, April 2013).

43 Integrated Civil-Military Campaign Plans for Afghanistan in 2009 (McChrystal/Eikenberry) and in 2010 (Petraeus/Eikenberry).

44 Different agencies and entities within the USG were then put in charge of these various lines. For instance, the military was generally in charge of security, the Department of State led most governance initiatives, and USAID was in charge of most development, although the eventual spike in Commander’s Emergency Response funding and the role of the military on PRTs and on Agribusiness Support Teams made the military a very active player in the development realm.

45 See Andrew Erdmann, “How militaries learn and adapt: An interview with Major General H. R. McMaster,” McKinsey and Company, April 2013, available at <http://www.mckinsey.com/insights/public_sector/how_militaries_learn_and_adapt>. In it, then-Major General McMaster explains that in war, “... we often start by determining the resources we want to commit or what is palatable from a political standpoint. We confuse activity with progress, and that’s always dangerous, especially in war. In reality, we should first define the objective, compare it with the current state, and then work backward: what is the nature of this conflict? What are the obstacles to progress, and how do we overcome them? What are the opportunities, and how do we exploit them? What resources do we need to accomplish our goals? The confusion of activity with progress is one final continuity in the nature of warfare that we must always remember.”

46 The various agencies and organizations were often much more concerned with demonstrating progress within their own lines of effort, which often had no link to a corresponding outcome in terms of relevant actor behavior.

47 Military fragmentary orders were issued ordering the use of TCAPF/DSF in the field. However, the civilian agencies were not under the chain of command of the military battlespace owners, and many civilians refused to embrace the framework alongside the military. Even within the civilian agencies and military organizations, there was great disagreement as to the relevance of TCAPF/DSF. The USAID Stabilization Unit in Kabul, for instance, promoted TCAPF/DSF, while many in the more mainstream bureaus of USAID either had never heard of the framework or did not support its use. Department of State and USAID personnel also did not agree on the utility of the frameworks at various times.

48 Assessment approaches changed drastically over the various tenures of ISAF Commanders. Similarly, TCAPF/DSF went from being the primary framework for tactical assessment from 2009 to 2011 to being far less utilized by follow-on units. Even when fragmentary orders were initiated to use TCAPF/DSF and it was integrated into General Petraeus’ counterinsurgency qualification standards, inconsistency still occurred. Some paid lip service to TCAPF/DSF and simply filled out the assessment frameworks mindlessly as required. Others utilized the framework as the central mechanism by which to decide where to focus their efforts throughout their deployments. Dr. Kristian Knus Larsen, “Substitute for Victory: Performance Measurements in Vietnam, the Gulf War, and Afghanistan” (Ph.D. diss., Copenhagen University, 2014).

49 MORS Annual Symposium, DSF Briefing under Chatham House Rule, 2011; When asked if TCAPF/DSF had an IT support package with it, one TCAPF/DSF briefer said that “Afghanistan is where databases go to die.” This is a fair enough point and perhaps a new database was not the answer. However, there was no strategy for optimizing the utility of the information and making sure all who could benefit received the information in a usable form that suited their purposes.

50 Various military and civilian entities purchased their own IT support packages for knowledge management. Others tried to adapt the systems that they already had for the different types of information utilized within the new assessment approaches.

 

Mr. Nathan White recently completed a research fellowship with the Center for Complex Operations at the National Defense University. Human subjects protection protocols have been used in this study in accordance with the appropriate statutes and Defense Department (DOD) regulations governing those protocols. The sources’ views are solely their own and do not represent the official policy or position of DOD or the U.S. Government.