Joint Force Quarterly 79 (4th Quarter, October 2015)

Improving Joint Interagency Coordination: Changing Mindsets

By Alexander L. Carter

Joint interagency coordination is vital but difficult work, hampered by cultural differences among team members and by an absence of clear, focused performance measures. Despite some rare successes in interagency work between the Department of Defense (DOD) and other partners over the past 20 years, effective interagency teamwork remains elusive across the combatant commands. This article examines the recent history of joint interagency coordination, discusses some of the key cultural and organizational impediments facing these teams, and introduces a set of performance measures for immediate use across the commands. If adopted, these measures would improve team performance and inform our senior civilian and military leadership about how we exercise national power to support our allies and defeat our enemies.

Army and Air National Guardsmen carefully exit helicopter pad during PATRIOT Exercise 2015 at Mile Bluff Medical Center in Mauston, Wisconsin, July 2015 (U.S. Air National Guard/Paul Mann)

Why It Matters

Clearly, the world is getting more dangerous and unpredictable, and not just within the traditional paradigms of war and conflict. There have been global and regional conflicts involving the United States, but there have also been natural and manmade disasters (hurricanes, earthquakes, tsunamis, oil spills, refugee crises, and so forth) around the world. And we have supported our allies and friends in their own humanitarian and disaster recovery efforts. At the discretion of the President and Congress, we have responded to many of these events, typically by leveraging our military resources through one of the unified combatant commands. Increasingly, these manmade and natural conflicts and disasters create a new and much more complicated set of challenges—that is, wicked problems—for our military planners. These problems require a different set of skills, ones that are increasingly being sourced outside of our military structure and institutions.

Wicked problems are almost impossible to solve, in part because multiple stakeholders have interests linked to the problem(s). Wicked problems are unique; they are not discrete. Typically, as a wicked problem is analyzed, it morphs into a new or different set of problems.1 In short, those holding opposing viewpoints would (and should) approach these problems from different biases, perspectives, and experiences in order to create a “shared understanding of the problem[s],”2 especially when they cannot be “solved by traditional processes.”3 Thus, the U.S. military’s opportunities to work more closely with its non-DOD (that is, interagency) partners have never been more relevant and timely. We cannot solve or attempt to solve these wicked problems without the expertise and skills of those drawn from all of our instruments of national power (diplomatic, information, military, and economic, or DIME),4 including those from outside the government sector (contractors, academicians, not-for-profit agencies, corporations, and so forth). Joint interagency teams, therefore, should increasingly be viewed as attractive forums and vehicles for leveraging our combined national power in support of U.S. interests, at home and abroad. So how has joint interagency work evolved and progressed (or not) over the years, and what lessons can help us make better use of these unique organizations?

Ups and Downs

In the last 25 years, the U.S. experience with joint interagency coordination has evolved, spurred by our military interventions in Panama (1989–1990), Somalia (1992–1994), and Haiti (1994–1995).5 Reflecting on those interventions, President Bill Clinton issued Presidential Decision Directive 56, “Managing Complex Contingency Operations,” in May 1997, which established standardized processes and structures relating to joint interagency coordination.6 However, a report reviewing the directive criticized the joint interagency environment, citing a continuing lack of a “decisive authority and . . . the contrasting approaches and institutional cultures.”7 Later, with our involvement in the wars in Iraq and Afghanistan, President George W. Bush promulgated national-level guidance relating to joint interagency coordination on December 7, 2005: National Security Presidential Directive 44, “Management of Joint Interagency Efforts Concerning Reconstruction and Stability.”8 The directive expanded the need for joint interagency coordination across the “spectrum of conflict: complex contingencies, peacekeeping, failed and failing states, political transitions, and other military interventions.”9

Another key publication that continued the evolution of joint interagency coordination was Joint Publication (JP) 3-08, Interorganizational Coordination During Joint Operations,10 which established guidance within DOD on the structures and processes in place to support joint interagency coordination, including key U.S. Government agency responsibilities and lead designations for different types of military and nonmilitary interventions. JP 3-08 also formalized a joint interagency team structure that U.S. Central Command had created years earlier: the Joint Interagency Coordination Group. The goal of JP 3-08 was to:

provide sufficient detail to help Combatant Commanders, subordinate Joint Force Commands, their staffs, and joint interagency partners understand the Joint Interagency Coordinating Group (or equivalent organization) as a capability to enable the coordination of all instruments of national power with joint operations.11

It is during this period of recent history, and with the backdrop of these supporting directives and policies, that we can point to some rare but relevant success stories with joint interagency work, despite organizational and cultural obstacles. Two such examples are the Bosnian train and equip program and Joint Interagency Task Force–South.

Congress funded the Bosnian train and equip program following the Bosnian war and the 1995 signing of the Dayton Peace Accords.12 The objective of the program was to provide the Bosnian Federation military force with training, weapons, and other types of equipment to build up their capability to defend themselves against the neighboring Serbian military. An interagency task force that drew its ranks initially from DOD, the Department of State, and the Central Intelligence Agency was created to oversee the program.

At the outset, the task force faced significant challenges: it had “no money, no equipment, and no training.”13 But during its first 2 years of operation, the task force was able to obtain adequate funding, secure and execute critical training contracts, obtain weapons (mostly donated by other countries), and overcome anti-U.S. sentiment against the program at home and abroad. In writing about the task force, its former deputy Christopher Lamb attributes its success to a combination of organizational, team, and individual variables. Ultimately, Dr. Lamb surmised, the train and equip program “rectified the military imbalance between Bosnian Serb and Federation forces, reassuring the Bosnians and sobering the Serbs,”14 and it “facilitated the integrated approach the United States pursued in Bosnia, proving remarkably adept at implementing its controversial security assistance program.”15

Another example of interagency success is Joint Interagency Task Force–South (JIATF-South), headquartered in Key West, Florida. Its mission is to conduct “interagency and international detection and monitoring operations, and the interdiction of illicit trafficking and other narco-terrorist threats in support of national and partner nation security.”16 Since its latest formation in 2003, when it combined with another task force (JIATF-East), the team’s composition has reflected a diverse body of team members including all branches of the U.S. military, U.S. Coast Guard, Drug Enforcement Administration, Federal Bureau of Investigation, National Security Agency, National Geospatial-Intelligence Agency, Central Intelligence Agency, and U.S. Customs and Border Protection. Additionally, JIATF-South has a plethora of international partners across the region. Over the past 10 years, JIATF-South’s accomplishments have been impressive, with its successes allowing “JIATF-South to stand toe-to-toe with the drug traffickers . . . driving up their costs, cutting their profits, raising their risk of prosecution and incarceration, and forcing them to divert their trade to less costly destinations . . . accounting for roughly 50 percent of global cocaine interdiction.”17

Specialists prepare to investigate mock chemical weapons inside training village of Sangari at Joint Readiness Training Center, Fort Polk, Louisiana (40th Public Affairs Directorate/William Gore)

Despite these two examples of interagency success, however, joint interagency coordination within the combatant commands remains difficult to achieve, notwithstanding publications, speeches, briefs, endless memoranda, directives, and working groups. For example, two combatant commands were the subject of a 2010 review by the U.S. Government Accountability Office (GAO).18 In its report, GAO found that U.S. Africa Command (USAFRICOM) demonstrated some practices that “sustain collaboration, but areas for improvement remained”19 in key staff work linking geographic combatant command theater security cooperation plans to country and Embassy strategic plans. In addition, USAFRICOM staff had “limited knowledge about working with U.S. embassies and about cultural issues in Africa, which has resulted in some cultural missteps.”20

U.S. Southern Command, on the other hand, was viewed as having “mature joint interagency processes and coordinating mechanisms,”21 but GAO was still critical of the command’s handling of its logistical support to the 2010 Haiti earthquake disaster relief effort and the command’s underlying joint interagency planning and staffing processes.22 The U.S. Agency for International Development expressed similar disappointment in its after-action review of that same relief effort, commenting that, in effect, the military commanders on the ground were not adequately educated on the humanitarian assistance/disaster relief operations.23

Why do some interagency teams succeed while others struggle? In reviewing the examples of the Bosnian train and equip program and JIATF-South, Dr. Lamb writes that both interagency teams were successful because they exhibited 10 positive “determinants of effectiveness” within 3 broad performance areas: organizational (purpose, empowerment, and support), team (structure, decisionmaking, culture, and learning), and individual (composition, rewards, and leadership).24 A successful team will generally show positive indicators within these areas; conversely, it can be argued that teams and environments that were not successful show negative indicators within the same areas.

Culture Clash and Structure

Two indicators of interagency team success or failure that deserve additional inquiry relate to the team’s culture and structure. Joint interagency coordination may be so challenging because the individuals involved, and the institutions they represent, differ markedly in culture and organizational structure. Consider the contrast in approach and style between military officers (DOD) and Foreign Service Officers from the Department of State. Whereas the DOD mission is to prepare for and fight wars, the State mission is to conduct diplomacy. Unlike DOD, State does not see training as a major activity or as important for either units or individuals. DOD is uncomfortable with ambiguity, but State can deal with it. Doctrine is seen as critical to DOD but not to State. Where DOD is focused on discrete events and activities with plans, objectives, courses of action, and endstates, State is focused on ongoing processes without expectation of an endstate.25 DOD views plans and planning as a core activity, yet State views a plan only in general terms, as a way to achieve objectives, and values flexibility and innovation.26 Is it any wonder, then, that “most Foreign Service Officers spend the majority of their time engaging their host-nation equivalents, not directing actions along a line of subordinates?”27

If we are to become more effective with joint interagency coordination, DOD must understand and appreciate the value that joint interagency partners bring to the fight. Joint interagency coordination cannot “be described like the command and control relationships for a military operation. . . . [U.S. Government] agencies may have different organizational cultures and, in some cases, conflicting goals, policies, procedures, and decision-making techniques and processes.”28 Because of the cultural and ideological differences between DOD and non-DOD participants, the level of commitment exhibited by members of this joint interagency team may vary tremendously, which will prevent or impede the team’s ability to become a “high performance group.”29

Joint interagency teams can organize themselves in many ways to accomplish their mission. Too often, though, they face challenges in governance—how work gets done and by whom. One observer noted, “The principal problem of joint interagency decisionmaking is lack of decisive authority; there is no one in charge.”30 Organizational psychology suggests that joint interagency teams fit the definition of “leaderless groups,” which “usually do not have a professional leader or facilitator who is responsible for the group and its functioning.”31 Instead, members assume the role of leader or facilitator. The purpose for which the group was created can become lost or blurred over time. Group members who assume the role of leader are likely to be untrained in group leadership and consequently may not understand group dynamics or how to manage group leadership tasks. These groups run the risk of groupthink, which produces a situation where disagreement and differences are not tolerated.32 Some basic team tasks, such as enforcing ground rules and team norms, may not be accomplished. Finally, team meetings may lack structure, focus, or direction.33

Given these cultural and structural challenges, joint interagency teams may benefit from a common set of standards and norms to strive toward, and from methods to evaluate how effective they are within their respective combatant commands. The questions surrounding measurement of joint interagency teams, however, are initially daunting: How do you measure teamwork? How do you measure coordination? How do you quantify a group’s success when most of its products and services (such as advice) are not quantifiable? Should we compare our joint interagency efforts to similar organizations or functions in other combatant commands? Any measures adopted by the team must be clear, unambiguous, and unifying.

Crew of Coast Guard Cutter Stratton stands by to offload 34 metric tons of cocaine in San Diego, California, August 2015 (U.S. Coast Guard/Patrick Kelley)

Performance Measures

Group behavior and performance in a joint interagency group are most effectively harnessed and channeled by focusing on agreed-upon performance measures. Introducing a critical few measures would help channel discussion, focus, and overall results. The framework developed by Lamb provides a good starting point to assess the environment within which any joint interagency team operates.34 But actionable measures within this framework are needed to tie individual, team, and organizational performance together. What measures are needed?

The military typically refers to measures of effectiveness (MOEs) and measures of performance (MOPs). MOEs are defined as criteria used to assess changes in system behavior, capability, or operational environment that are tied to measuring the attainment of an endstate, achievement of an objective, or creation of an effect. MOPs are defined as criteria used to assess friendly actions tied to measuring task accomplishment.35 Taken together, these measures can inform and drive team performance if built and regularly reported on. According to JP 3-0, Joint Operations, “continuous assessment helps the Joint Force Command and joint force component commanders determine if the joint force is doing the right things (MOE) to achieve its objectives, not just doing things right (MOP).”36 MOEs and MOPs add concrete, tangible indicators of whether a joint interagency team is operating effectively, but these measures should be grouped according to a general area of observation or performance.
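As a rough illustration of this distinction, the sketch below (in Python) records a few measures of each kind. The measure names and descriptions are hypothetical, invented for illustration only, and are not drawn from doctrine or from any command’s actual metrics.

# Minimal sketch: recording MOEs and MOPs as simple records.
# All measure names and descriptions here are hypothetical.
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    kind: str          # "MOE" (doing the right things) or "MOP" (doing things right)
    description: str

measures = [
    Measure("partner_plan_alignment", "MOE",
            "Degree to which command plans align with Embassy strategic plans"),
    Measure("coordination_meetings_held", "MOP",
            "Number of scheduled interagency coordination meetings actually held"),
]

for m in measures:
    print(f"[{m.kind}] {m.name}: {m.description}")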

Both types of measures are important for driving joint interagency team behavior and performance. In the table, the first column lists the 10 Postulated Determinants of Effectiveness that serve as an overall performance framework through which to measure joint interagency success; the second column lists the Supporting Measures, a collection of example performance measures (a combination of MOEs and MOPs).

Table. Postulated Determinants of Effectiveness and Supporting Measures

Building and Using Performance Measures

The work of joint interagency teams could and should be measured primarily by how well they produce advice, conduct coordination, and, in some cases, lead the combined U.S. Government response to planned or unplanned events around the world in support of national interests and as directed by senior diplomatic or military leadership. But how does one truly measure teamwork? How can performance measures really gauge how well team members cooperate or how well they provide outstanding staff support to their command or joint activity? To answer these questions, the team should understand the areas of performance it can influence within its structure and mission by conducting an assessment of where it stands and where it needs to go. This is done through three simple steps.

Soldier with 5th Battalion, 3rd Field Artillery Regiment, 17th Field Artillery Brigade, 7th Infantry Division readies firefighting gear at unit headquarters on Joint Base Lewis-McChord, Washington, August 2015 (28th Public Affairs Directorate/Patricia McMurphy)

First, a team self-assessment must be conducted using the framework areas of performance, focusing on where the team rates generally positively or negatively in each of the 10 areas. For example, a team might review the framework and self-assess that while it is generally doing fine in composition, decisionmaking, and leadership, it could do better in culture, structure, and empowerment. This initial and subjective assessment sets a baseline for improving team performance. It should be the subject of hearty discourse and heated debate—an agenda item that may be best planned as a singularly focused offsite retreat. Second, the team should identify a small set of critical measures (five to seven MOPs or MOEs from the table) across the organizational, team, and individual areas. The team may choose the ones offered in the table or create others more appropriate, adhering to the principle that each measure be specific, measurable, attainable, relevant, and timely.37 Third, each measure must be selected with the endstate of improving joint interagency team results.
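As a minimal sketch of the first two steps, the snippet below records a hypothetical self-assessment across the 10 determinants and surfaces the weak areas on which the team’s five to seven measures would then concentrate. The ratings are invented for illustration, not real survey data.

# Hypothetical self-assessment across the 10 determinants of effectiveness.
# Ratings: +1 generally positive, -1 needs improvement (illustrative only).
determinants = {
    # organizational
    "purpose": +1, "empowerment": -1, "support": +1,
    # team
    "structure": -1, "decisionmaking": +1, "culture": -1, "learning": +1,
    # individual
    "composition": +1, "rewards": +1, "leadership": +1,
}

# Step 1: the baseline is simply where the team rates itself negatively.
weak_areas = [area for area, rating in determinants.items() if rating < 0]
print("Baseline focus areas:", weak_areas)  # empowerment, structure, culture

# Step 2: the team would then select 5-7 SMART measures aimed at these areas.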

The team should then assign someone to be responsible for collecting the data and tracking and reporting the team’s progress against each agreed MOP and MOE. That person is also responsible for helping to define where the team wants to go in that area of performance. Each measure should therefore have clear thresholds defining what counts as underperforming, performing, or overperforming. The point is that the team determines which measures are right for it and charts a path forward on how to achieve success against them.
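A threshold scheme of this kind might be sketched as follows; the bands and the example value are assumptions for illustration, not prescribed by doctrine.

# Sketch of threshold-based reporting for one agreed measure.
def assess(value: float, underperform_below: float, overperform_above: float) -> str:
    """Map a measured value onto the three performance bands described above."""
    if value < underperform_below:
        return "underperforming"
    if value > overperform_above:
        return "overperforming"
    return "performing"

# e.g., percent of interagency actions closed on time this quarter (hypothetical)
print(assess(value=72.0, underperform_below=60.0, overperform_above=85.0))  # performing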

Team members can use the measures they have adopted to channel their individual and collective energies toward more productive activity. Measures give the team focus, direction, and added meaning as members seek to support their command organization, whether a combatant command or some other joint activity. Individuals benefit from being able to link their efforts and contributions to the team. They can report back to their parent commands or agencies in a more factual and descriptive manner, informing their leadership in richer ways about how their agency is supporting the joint effort. But these measures will not only drive performance and results within each joint interagency team; the framework and its supporting measures can also influence and inform the most senior levels of military leadership.

There are many forms of joint interagency team constructs within the U.S. Government. The more familiar ones may be found within unified combatant commands or even at Embassies, but there are others. Regardless of where they are and whom they support, these teams operate within an enterprise driven by either senior military or civilian leadership. These teams may ultimately report to four-star generals, Federal agency administrators, governmental senior executives, or even specially appointed directors with quasi-governmental jurisdiction and powers. All of these leaders are charged with supporting their organizational or enterprise mission and tracking progress toward goals and objectives on a regular basis. The measures developed for joint interagency teams can be a critical component of a leader’s evaluation of how these teams are supporting their “customers.” One technique borrowed from the business sector that is worth a brief mention is the power of comparing similar activities (in this case, joint interagency coordination) across geographies (for example, U.S. Southern Command, U.S. European Command, U.S. Central Command) or even comparing similar functions (for example, theater security cooperation activities). Why do this?

As senior leaders face increasingly complex problems within their areas of interest and operation and are asked to do more with less under appropriated funding constraints, they must also question the efficiency and effectiveness of the programs and activities for which they are responsible. By comparing similar activities or functions using the same measures, leaders can be better informed about the resource and manpower decisions they make within these joint support activities.

Many leading businesses, whether in the manufacturing, service, or retail industries, regularly score their performance using industry-standard measures. Through this internal assessment, they can see how their performance stacks up against similar companies in the same industry. For example, a manufacturing company may track “purchase order cycle time” as a key metric, regularly updating it, reporting on it, and assessing it against how other companies perform on the same metric, with the underlying data collected from various sources on a regular basis. Because such a metric is nearly universal, a comparison of company-level performance across the industry is instructive: it allows the company to see how it is doing relative to its peers—where it stands. This review offers the company an external, independent look at part of its operations and usually motivates it to improve key aspects of its business. This process is called benchmarking, which can be defined as:

a standard of performance . . . benchmarking helps organizations [to] identify standards of performance in other organizations and to import them successfully to their own. It allows them to discover where they stand in relation to others. By identifying, understanding, and comparing the best practices and processes of others with its own, an organization can target problem areas and develop solutions to achieve the best levels of government.38

Benchmarking is an example of a productivity solution (or management tool) in the business world that can be properly applied to the joint interagency environment. Another way to look at benchmarking (which should have increasing relevance to the government in light of continuing Federal budget challenges) is as “the routine comparison with similar organizations of administrative processes, practices, costs, and staffing, to uncover opportunities to improve services and/or lower costs.”39
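As a sketch of what such a routine comparison might look like for joint interagency activities, the snippet below ranks commands on a single shared metric. The metric and all values are hypothetical placeholders, not actual command data.

# Illustrative benchmarking of one shared (hypothetical) metric across commands.
results = {
    "USSOUTHCOM": 82.0,   # percent of planned theater security cooperation events completed
    "USEUCOM": 74.0,
    "USCENTCOM": 91.0,
}

# Rank the commands on the shared metric so each can see where it stands.
ranked = sorted(results.items(), key=lambda kv: kv[1], reverse=True)
for rank, (command, value) in enumerate(ranked, start=1):
    print(f"{rank}. {command}: {value:.1f}%")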

Critics of using self-defined measures to benchmark against others may fear what the comparison will reveal. As Jeremy Hope and Steve Player write, “Benchmarking is the practice of being humble enough to admit that others are better at something than you are and wise enough to learn how to match or even surpass them.”40 Proponents of the practice, on the other hand, argue that “setting aspirational and directional goals can inspire and motivate teams. The process recognizes that everything is connected and achieving any one goal depends on making progress towards all others.”41

The measures introduced above should be further discussed, defined, and operationalized within each combatant command. With adopted measures in place, joint interagency teams are better able to chart a course of improvement by understanding where they are (baseline) and where they need to go (endstate). But these measures by themselves are of limited value if they are not put in the broader context of how similar joint interagency activities are performing across the combatant commands, since each of these commands competes for funding and resources. For example, are there some measures that should be candidates for comparison across combatant commands, despite their differences in mission, climate, geography, the type of interagency support historically provided, and so forth? How can we compare joint interagency activities across the DOD enterprise using metrics defined within our own combatant command?

Final Thoughts

The United States will continue to be called upon to support its allies and fight its enemies across a broad spectrum of conflict. Our measured response to each of these calls for help should not be confined to purely military or diplomatic lines. As we see more wicked problems taking the world stage, we must look to our joint interagency teams and the commands and agencies they represent to deliberate on and provide advice across the full range of our national instruments of power (DIME). But these teams will continue to be hamstrung by cultural clashes and structural challenges unless changes are put in place to properly structure and support these teams. By doing so, the teams could leverage the combined talents and resources from capabilities across government, the nonprofit sector, academia, and even the business sector.

These changes to our joint interagency teams would involve a mental shift in the way they (and others) evaluate their performance through meaningful performance measures. These measures must gauge not only whether we are doing things right, but also whether we are doing the right things. Through the adoption of a performance framework and supporting measures, teams can channel their energies, talents, and resources to support the leaders entrusted to represent national interests overseas. With measures in place and teams properly aligned, the Nation’s leaders, civilian and military, can begin an informed dialogue about how to potentially assess and benchmark team performance that cuts across and transcends geographies, jurisdictions, and commands. JFQ

Notes

  1. Debra E. Hahn, “Predicting Program Success—Not Child’s Play,” Defense AT&L 43, no. 1 (January–February 2014), 46.
  2. Gabriel Marcella, Affairs of the State: The Interagency and National Security (Carlisle Barracks, PA: U.S. Army War College, 2009), 38.
  3. Christopher M. Schnaubelt, “Complex Operations and Interagency Operational Art,” PRISM 1, no. 1 (December 2009), 47.
  4. Joint Publication (JP) 3-0, Joint Operations (Washington, DC: The Joint Staff, August 11, 2011), VII.
  5. Marcella, 29.
  6. Ibid.
  7. Ibid., 30.
  8. Ibid., 31.
  9. Ibid.
  10. JP 3-08, Interorganizational Coordination During Joint Operations (Washington, DC: The Joint Staff, March 17, 2006).
  11. Ibid., D-1.
  12. Christopher J. Lamb with Sarah Arkin and Sally Scudder, The Bosnian Train and Equip Program: A Lesson in Interagency Integration of Hard and Soft Power, INSS Strategic Perspectives 15 (Washington, DC: NDU Press, March 2014).
  13. Ibid., 24.
  14. Ibid., 56.
  15. Ibid., 118.
  16. See Joint Interagency Task Force–South Web site at <www.jiatfs.southcom.mil/index.aspx>.
  17. Evan Munsing and Christopher J. Lamb, Joint Interagency Task Force–South: The Best Known, Least Understood Interagency Success, INSS Strategic Perspectives 5 (Washington, DC: NDU Press, June 2011), 76.
  18. U.S. Government Accountability Office (GAO), Interagency Collaboration Practices and Challenges at DOD’s Southern and Africa Commands, GAO-10-962T (Washington, DC: GAO, July 2010).
  19. Ibid., 2.
  20. Ibid.
  21. Ibid., 3.
  22. GAO, U.S. Southern Command Demonstrates Interagency Collaboration, but Its Haiti Disaster Response Revealed Challenges Conducting a Large Military Operation, GAO-10-801 (Washington, DC: GAO, July 2010), 1.
  23. Donald A. Ziolkowski, “Whole of Government Approaches: How Can We Capitalize on the Experience of Our Reserve Forces in Order to Ensure a Whole of Government Approach to Unfolding Crises? Can We Ensure We Replicate Inter-agency Processes During Large-scale Exercises?” unpublished paper, Joint Forces Staff College, April 2013.
  24. Lamb with Arkin and Scudder, 57.
  25. Schnaubelt.
  26. Ibid.
  27. Ibid., 43.
  28. Commander’s Handbook for the Joint Interagency Coordination Group (Norfolk, VA: U.S. Joint Forces Command, March 1, 2007), II-2.
  29. David W. Johnson and Frank P. Johnson, Joining Together: Group Theory and Group Skills, 9th ed. (Boston: Pearson Education, Inc., 2006), 20.
  30. Marcella, 37.
  31. Nina Brown, Facilitating Challenging Groups: Leaderless, Open, and Single Session Groups (New York: Routledge, 2013), 5.
  32. Ibid., 3.
  33. Ibid., 88.
  34. Lamb with Arkin and Scudder, 57.
  35. JP 1-02, Department of Defense Dictionary of Military and Associated Terms (Washington, DC: The Joint Staff, November 8, 2010), 159.
  36. JP 3-0, II-10.
  37. George T. Doran, “There’s a S.M.A.R.T. Way to Write Management’s Goals and Objectives,” Management Review 70, no. 1 (1981), 35–36.
  38. Jeremy Hope and Steve Player, Beyond Performance Management: Why, When, and How to Use 40 Tools and Best Practices for Superior Business Performance (Boston: Harvard Business Review Press, 2012), 87–88.
  39. Mark Howard and Bill Kilmartin, Assessment of Benchmarking within Government Organizations (New York: Accenture, 2006).
  40. Hope and Player, 94.
  41. Ibid., 33.