February 7, 2020

Asking Strategic Questions: A Primer for National Security Professionals

By Andrew Hill and Stephen J. Gerras | Joint Force Quarterly 96


Andrew Hill is the Chair of Strategic Leadership at the U.S. Army War College. Dr. Stephen J. Gerras is Professor of Behavioral Sciences in the Department of Command, Leadership, and Management at the U.S. Army War College.

Aircrew member of C-130J Super Hercules assigned to 774th Expeditionary Airlift Squadron prepares to depart for various bases throughout Afghanistan, August 2, 2019, at Bagram Airfield (U.S. Air Force/Staff Sgt. Keifer Bowes)

If one wants to solve a problem, one must generally know what the problem is. A large part of the solution lies in knowing what it is one is trying to do.

—Fred Kerlinger and Howard B. Lee, Foundations of Behavioral Research

Your teachers lied to you: some questions really are stupid. At best, a bad question wastes time and energy by distracting from what is important. At worst, it sets one up for failure, either by asking the wrong question or presuming the wrong answer to the right question. These problems are even more pronounced in the military, where a powerful culture of obedience responds to a leader’s curiosity with a frenzy of activity that may or may not be useful.

Because leaders have so much power over which questions organizations ask, it is essential that leaders understand the basic characteristics of good strategic questions. We use the term strategic to differentiate the questions that shape and inform strategy—the focus of this article—from the wide variety of questions that organizations may explore. For example, what are the essential characteristics of 21st-century military leaders? Are we selecting for and developing these characteristics? What are U.S. military options in dealing with [nation X]? How will [nation X] respond to different military actions? What are the most significant current capability needs of the U.S. Army? How should we prioritize those needs? These are all strategic questions—difficult to answer, but useful to ask and explore. In this article, we propose guidelines for asking questions designed to improve an organization’s performance amid competitive uncertainty.

Asking good strategic questions is not just a useful leadership habit; in the national security profession, it can save lives or alter the course of history. On October 16, 1962, President John F. Kennedy was briefed on the photographic findings of U-2 flights over Cuba. The President was shown photos that appeared to reveal Soviet medium-range ballistic missile sites. Over the next 13 days, President Kennedy and his advisors would ask hundreds of questions. What is happening? What does it mean? What will happen if we do nothing? What can we do? What will happen if we do X? Finally, what could go wrong? In that situation, the short answer was “a lot,” including a nuclear war with the Soviet Union. The prospect of Armageddon gave the other questions a great deal of urgency. President Kennedy avoided the worst-case scenario for the Cuban Missile Crisis, in no small part because of the way he guided his leadership team through a grueling process of strategic inquiry. He asked excellent strategic questions.

Three Categories of Strategic Questions

Figure. Detail of Memorandum by Theodore Sorensen, October 18, 1962 (image courtesy of the John F. Kennedy Presidential Library)

Definition questions ask what is happening. These include:

  • Defining nature: What is the thing we are analyzing? How is it interacting with the world around it? Example: What is China’s current policy toward Taiwan?
  • Defining extent: How big is the problem? What are the likely costs of inaction? Example: How many sexual assaults occurred among Active-duty Servicemembers last year?
  • Defining urgency: How is the problem unfolding in time? Is it getting better or worse? How quickly? Example: How have the operational readiness levels of Air Force aircraft changed in the past 5 years?

Causation questions ask why a thing is happening or what it may lead to in the future. These questions include:

  • Explanation: Why is it happening? What are the causes? Example: Why are African American officers underrepresented in the combat arms branches of the Army?
  • Prediction: What is likely to happen because of this situation or event? Example: What kind of senior leaders is the current Army personnel system likely to produce?

Intervention questions involve proposals for solving or mitigating a problem (or exacerbating a problem for an adversary). Intervention questions extend causal analysis to examine one or more proposed actions (such as a policy change or a new program). Intervention questions fall into one of three areas:1

  • Effectiveness: Does it work? Example: What is the likely effect of new sanctions on Iran?
  • Efficiency: What is the relationship between the benefits and the costs? Example: What are the readiness improvements resulting from more frequent Army unit rotations at the National Training Center? How do those improvements compare to the costs of those rotations?
  • Robustness: Is the proposed intervention still sufficiently efficient or effective if we relax key assumptions? Example: How effective is our campaign plan if we lose access to bases in [country X]?

Five Characteristics of a Good Strategic Question

While definition, causation, and intervention questions require different research approaches, all three question types should have five characteristics in common.2

A Good Question Is Grounded in the Competitive Context. A good research question reflects a preliminary understanding of the context of the problem or issue. That is, the question is grounded in a basic understanding of the situation. The purpose of asking grounding questions is not to become an expert on a topic—that is what the subsequent research is supposed to do. Nor does grounding necessarily sacrifice creativity. Grounding is akin to conducting a reconnaissance of a problem or issue. Research scholar Andrew Van de Ven writes, “The purpose of these activities is to become sufficiently familiar with a problem domain to be able to answer the journalist’s basic questions of who, what, where, when, why, and how.”3 Depending on the topic, this may involve a review of prior work on the subject, some direct interaction with the problem area, review of relevant data, and discussions with people familiar with the problem.

There is tension between knowing enough to ground analysis and knowing so much that one becomes a slave to the tyranny of expertise. Much can be said for bringing in the novel perspective of a nonexpert. Grounding is intended to give leaders enough of an understanding to judge whether a question has the potential to generate useful insight and to avoid replicating others’ work or falling into a trap that prior researchers have encountered.

It would be incorrect to say that grounding is the most important part of asking strategic questions, but hastily passing over the grounding questions can set one up for big problems. The failure of U.S. policymakers and military planners to anticipate the effects of the 2003 overthrow of Saddam Hussein and the ruling Ba’athist party in Iraq was rooted, among other things, in a failure to ask basic contextual questions before the invasion. Those contextual questions would have led to an entirely different set of questions about the strategic plan for a post-Saddam Iraq.

Grounding also underscores a common problem in large organizations: their frustrating tendency not to know what they know. What characteristics and behaviors are necessary for effective military leadership? How do we select for the right characteristics? How do we develop the right behaviors? The U.S. military is constantly examining these questions, yet it tends to approach them as if no prior work had ever occurred. A key part of grounding questions is developing familiarity with the good work that has already been done. This saves time and energy and is more likely to produce original and important insight. Instead of redoing the good work of our predecessors, we should build on it.

A Good Question Has Two or More Variables. A good strategic question has at least one “explanatory” or “independent” variable and one “response” or “dependent” variable. In the Cuban Missile Crisis, the independent variable was the action chosen by the United States, and the dependent variable was the result of that action. In experimental terms, the explanatory variable is the “treatment” condition, and the response variable is the “outcome” measure.

A Good Question Is Stated Clearly and Unambiguously in Question Form. This seems like an easy rule to follow, but it is not. For example, we ask, “How does a U.S. military presence in Afghanistan affect violence in the country?” Is this a good research question? On the face of it, it seems to be. Two variables? Check. Clear and unambiguous? Maybe not.

It is, in fact, a vague research question. Which two variables are we going to explore? We have lots of choices. How are we going to measure “military presence”? Are we interested in all U.S. military activity, or do we focus only on U.S. troops in regular contact with noncombatants? What about measuring violence? Are we interested in violence in general or only in political or military violence?

Maybe we want to know about how different forms of military activity influence violent behavior, so we want to examine how foot patrols compare to mounted patrols in affecting violence in different areas. Or perhaps we are testing the “broken windows” theory of civil order, exploring the connection between the intensity of policing low-level offenses and the occurrence of violence.4

When formulating or evaluating a research question, consider whether a question clearly identifies the phenomenon of interest. A question that does not yield specific research implications is a poor one.

A Good Question Implies the Possibility of an Observable Answer. A good question will convey some information about how the relationship between the two (or more) variables is going to be tested. It tells us something about the key variables and about how we are likely to model the relationship between them. Above all, a good question suggests the possibility of a positive or a negative result, and a willingness to accept either one.

The question “How does the type of patrol (foot patrols vs. car patrols) affect the prevalence of violence in similar neighborhoods with otherwise similar military presences?” contains a lot of information about the statistical model a researcher is likely to use. It tells us something about the explanatory variable (percentage of patrol time spent on foot, controlling for overall patrol time) and the dependent variable (violence rate). It also suggests other measures (called “control variables”) that will be included to try to isolate the effect of policing: total police presence, geographic size of the area, demographics, income levels, and so forth. All of that can be quantified and modeled.
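To make this concrete, here is a minimal sketch, in Python using the statsmodels library, of the kind of model the question implies. The data file and every variable name are hypothetical; a real analysis would depend on how the command actually measures patrols and violence.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data set: one row per neighborhood per month, with
# illustrative columns for the violence rate (incidents per 1,000
# residents), the share of patrol hours conducted on foot, and the
# control variables named in the text.
df = pd.read_csv("patrol_data.csv")  # hypothetical file

# Ordinary least squares: the explanatory variable is foot_patrol_share;
# total patrol hours, population, area, and income enter as controls to
# help isolate the effect of patrol type.
model = smf.ols(
    "violence_rate ~ foot_patrol_share + total_patrol_hours"
    " + population + area_sq_km + median_income",
    data=df,
).fit()

# The sign and significance of the foot_patrol_share coefficient speak
# directly to the question, and can come out either way.
print(model.summary())
```

The point is not this particular model but that the question, as phrased, already tells the analyst what to measure and what to hold constant.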

A Good Question Acknowledges the Uncertainty Inherent in Competition. “The enemy gets a vote” is a wise military adage. Most significant strategic questions inevitably involve some matters that are partially (if not entirely) outside of our control. When posing strategic questions, it is useful to have in mind the limits of what we can know at any time. The answers to most important strategic questions are inherently provisional. Good strategic questions invite us to consider how to improve our competitive position or manage a problem better. They do not ask us how to “win” where winning is not possible or “solve” where no permanent solution exists. For example, “How do we solve the improvised explosive device (IED) problem?” is not a good question. It is better to ask, “How can we improve the protection of our forces against IED attacks?” “How can we reduce the number of IEDs being placed?” “How can we identify emplaced IEDs prior to detonation?” Note that the answers to each of these questions will change over time.

Thus far, we have explored five characteristics of good questions. What about bad ones?

Five Signs of a Bad Question

Formulating a good strategic question takes time and effort. Asking a bad question is easy. Bad strategic questions often have one of the following characteristics.

A Bad Question Displays Little Grounding in the Context of the Problem or Issue. Just as it is arrogant to assert that nothing new can be said about an issue, it is equally hubristic to assume that no prior work is relevant to a problem now. Do the homework. Ask the journalist’s questions. Assume that predecessors’ experiences in dealing with their problems may help us deal more effectively with current ones. Badly grounded questions often begin, “Why don’t we just . . . ?” For example, “Why don’t we just control our own budget?” “Why don’t we just push legal approvals down to the lowest level?” “Why don’t we just impose a common standard?”

Afghan National Army 10th Special Operation Kandak commandos conduct small arms barrier firing drills during series of weapons proficiency ranges at Camp Pamir, Kunduz Province, Afghanistan, January 13, 2018 (U.S. Air Force/Senior Airman Sean Carnes)

A Bad Question Is Vague Regarding Key Variables. “Why is counterinsurgency not working in Afghanistan?” is a bad analytical question. What does “not working” mean? The question does not suggest any specific measure of performance, and we have numerous options: violent noncombatant deaths, Afghan military casualties, coalition casualties, number of cities and villages under Taliban control, total population under Taliban control, and so forth. Without knowing more about the basic question motivating the analysis, the question of variable specification has no right answer. If the analytical question opens an endless discussion about which variables are the right ones for analysis, then it probably needs to be rephrased.

A Bad Question Presupposes the Answer, Includes the Answer, or Signals That Only Certain Answers Are Acceptable.5 “Why is counterinsurgency not working in Afghanistan?” presupposes answers to two other questions: first, that the U.S.-led coalition and Afghan national forces are executing a large-scale counterinsurgency, and second, that the counterinsurgency is not effective. The author of such a study (and many others) may see both assumptions as settled issues. However, avoid embedding in any question assumptions that are (1) not beyond doubt and (2) not central to the question.

It may sometimes be necessary to break a strategic question into multiple parts. This is fine, provided that follow-on questions logically reflect the answers to the opening questions. For example, “How well does the current U.S.-Afghan operation match the canonical principles of counterinsurgency?” is a decent definitional opening question regarding what is happening in Afghanistan. If we find that the U.S.-Afghan effort is not a counterinsurgency, because it focuses more on killing the enemy and less on protecting populations, then we may ask questions about the effectiveness of that approach.

Another research foul is a question that clearly indicates the unacceptability of certain answers, such as, “What makes the aircraft carrier essential to American power?” This question (a bad one) strongly implies that it is unacceptable to conclude that the aircraft carrier is not essential to American power.

In policy and program analysis, as in all research, the potential value of the work is proportional to its potential to find a surprising result. Again, Van de Ven advises, “Permit and entertain at least two plausible answers to the question. Alternative answers increase independent thought trials.”6

A Bad Question Includes Causal Claims or Solutions.7 “Given that prisons are the higher education system of crime, how does incarceration affect the probability of a first-time offender’s future imprisonment?” This question is interesting but flawed. It both answers the question (imprisonment increases the probability of future imprisonment) and explains why it is the answer (newer criminals learn from more experienced criminals). One should avoid embedding causal claims or solutions into questions. This will skew the analysis, artificially narrowing the focus. It will also reduce credibility.

An embedded causal claim (bad) and a hypothesis (good, if phrased correctly) are not the same. An embedded causal claim is usually not the object of analysis. It is a proposition that we are sneaking into the question, often without proving it or asking whether it is legitimate. In the prison question, we sneaked in the claim that prisons are criminal universities.

In contrast, a hypothesis is a claim that is being tested in the analysis. Good research questions have good hypotheses that rephrase them as testable assertions. Thus, “How does incarceration affect the probability of a first-time offender’s future imprisonment?” may have a corollary hypothesis: “Incarceration increases the probability of a first-time offender’s future imprisonment.” That is a testable claim, and it does not carry any unnecessary or unfounded assertions about why it may (or may not) be true.
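As a minimal sketch of what “testable” means here, the hypothesis could be evaluated as a one-sided comparison of proportions, as in the Python snippet below. The counts are invented for illustration, and a serious analysis would have to confront selection effects, since which offenders are incarcerated is not random.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Invented counts: first-time offenders re-imprisoned within five years,
# split by whether the original sentence included incarceration.
reimprisoned = np.array([420, 310])  # [incarcerated, non-custodial]
group_sizes = np.array([1000, 1000])

# One-sided test of the hypothesis that the incarcerated group has the
# higher re-imprisonment rate.
stat, p_value = proportions_ztest(reimprisoned, group_sizes, alternative="larger")
print(f"z = {stat:.2f}, p = {p_value:.4f}")

# A small p-value corroborates the hypothesis but, as the text notes,
# says nothing about why the relationship holds.
```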

In discussing the hypothesis, a leader may acknowledge many reasons for an expected relationship. In this example, such discussion may include the “prison is college for criminals” concept. This is fine. But we must always bear in mind what is and is not being tested in any analysis. For example, finding that incarceration increases the probability of future imprisonment will corroborate (but not prove) the hypothesis, but it will not justify a specific causal claim for that relationship. That would require a second research question and a second hypothesis.

A Bad Question Includes Moral or Ethical Claims or Value Statements That Complicate Quantification. Many of us have an understandable aversion to the modern tendency to count everything. “Not everything that counts can be counted,” someone wise once stated. The analytical rejoinder is, “If it cannot be counted, it will not count.” Intangibles are often the last refuge of obsolete ideas.

Watch for questions that include value statements or ethical or moral assertions. According to Kerlinger and Lee, such questions use “words such as ‘should,’ ‘ought,’ ‘better than’ (instead of ‘greater than’), and similar words that indicate cultural or personal judgments or preferences.”8 “Who is the greatest basketball player of all time?” is a great question for living room (or barroom) conversation but a terrible one for analysis because it resists quantification. Several quantifiable questions may be connected to it: Who won the most National Basketball Association championships as a starter? Who is the all-time leading scorer? Who is the all-time leader in points per game? Any one of these questions may help us identify the “greatest,” but none of them, by itself, will tell us who the “greatest” actually was.
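The difference is easy to see in a minimal illustration: each of those proxy questions has a definite, computable answer, while the value question does not. The players and numbers below are entirely invented.

```python
import pandas as pd

# Invented career statistics for three hypothetical players.
players = pd.DataFrame({
    "name": ["Player A", "Player B", "Player C"],
    "career_points": [30000, 28500, 25000],
    "games_played": [1400, 1100, 1000],
    "titles_as_starter": [4, 6, 3],
})
players["points_per_game"] = players["career_points"] / players["games_played"]

# Each quantifiable question yields exactly one answer...
print(players.loc[players["career_points"].idxmax(), "name"])      # leading scorer
print(players.loc[players["points_per_game"].idxmax(), "name"])    # points-per-game leader
print(players.loc[players["titles_as_starter"].idxmax(), "name"])  # most titles as starter

# ...but no query returns "greatest": that requires a value judgment
# about how to weigh the proxies against one another.
```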

Leaders who ask good strategic questions prompt productive inquiry and set a positive example when they reveal their justifiable ignorance. Leaders cannot be expected to be experts in all things, but guiding or assessing a strategic question is one area in which they must be active and involved. A lack of research expertise is no barrier. Leaders are responsible for shaping good questions to prompt an intelligence report or a research study and for reviewing the questions that guided completed work. Strategic questions drive organizational attention, energy, and resources and can make the difference between competitive success and failure. JFQ

Notes

1 Adapted from Eugene Bardach, A Practical Guide for Policy Analysis: The Eightfold Path to More Effective Problem Solving (New York: Chatham House, 2000), 20–25.

2 Characteristic one adapted from Andrew H. Van de Ven, Engaged Scholarship: A Guide for Organizational and Social Research (Oxford: Oxford University Press, 2007), 77–79. Characteristics two through five are adapted from Fred Kerlinger and Howard B. Lee, Foundations of Behavioral Research, 4th ed. (New York: Wadsworth Publishing, 1999), 16.

3 Van de Ven, Engaged Scholarship, 78.

4 George L. Kelling and James Q. Wilson, “Broken Windows: The Police and Neighborhood Safety,” The Atlantic, March 1982, 29–38, available at <www.theatlantic.com/magazine/archive/1982/03/broken-windows/304465/>.

5 Bardach, A Practical Guide for Policy Analysis, 6–7.

6 Van de Ven, Engaged Scholarship, 96.

7 Bardach, A Practical Guide for Policy Analysis, 6–7.

8 Kerlinger and Lee, Foundations of Behavioral Research, 21.