
Observations on Quantitative Modeling in Defense and Intelligence Analysis – Agents, Evolution and International Relations

Over the last couple of weeks I had the opportunity to take part in two conferences that focused on the role of formal modeling in intelligence and defense analysis.  The preparation for these events kept me away from the blog, and I'm hoping to have a chance to write more as most of my time and attention return to my dissertation for the next several months.

The conferences were quite informative, both for serving as a platform to articulate many of the arguments developed earlier on this blog (specifically in posts on the use of models and empirical limits), and for the chance to talk with others who are working on similar problems and have their own unique experiences.  From a methodological perspective, it was quite interesting to see how organizational issues affect the selection and use of analytic tools and approaches.

It is important to keep in mind that the intelligence community has always struggled with how independent it should be from the policy-makers it supports, making the so-called producer/consumer relationship one of the most difficult, challenging, and fascinating aspects of intelligence studies.  By comparison, analyses performed in-house within policy-minded organizations, for example by DOD staff, are carried out by individuals who work directly for decision-makers (even if they reside in different sub-organizations or elements of the same institution).  The result is that formal methods, notably those that rely on what are generally considered "scientific" approaches, have been received differently by these communities.

Based on the discussions that I witnessed, I noticed an interesting paradox.  The traditional assumption is that analysts who work directly for policy-makers (or rather, policy-making organizations) would feel compelled to produce research whose findings are consistent with policy-makers' wishes.  Indeed, this has been a common critique of the analysis performed within the DOD regarding Iraq's WMD capabilities and ties to Al Qaeda during the run-up to the 2003 Iraq War (for examples see Betts, Jervis, Rovner, and Pillar).  However, a slightly different story emerged during the conferences.  Because in-house analysts are assumed to be working toward the same goals as their bosses, they are much freer to pursue an "objective" or "scientific" approach to studying problems.  While such terms are loaded, they largely align analytic methods with tools and processes familiar to the academic community, particularly with respect to the use of formal models and inductive methods that are predominantly data driven (and sometimes falsely perceived to be free of theoretical baggage or assumptions).  While these analysts acknowledge that a model should not trump human expertise, there is a greater likelihood that a formal model, as an independent artifact, will be afforded a voice and representation in the analytic production process, even when its assumptions may be considered foreign or counterintuitive by the policy-makers they serve.

By comparison, the independent standing of intelligence analysts means that they are more sensitive to producing assessments that are deemed relevant by their consumers, and this search for relevance weighs heavily on the selection of analytic methods and approaches.  Because intelligence analysts are employed independently of the policy-makers they support, they are essentially outsiders, and their commitment to the achievement of policy-makers' goals is not always assumed by consumers.  Thus, intelligence analysts often go to great pains to start any analysis from the viewpoint or perspective of their consumers, in order to simultaneously establish the relevance of their analysis and the bona fides of their personal and organizational intentions.  Once these are established, analytic methods and frames may broaden to include a wider range of perspectives, but starting from assumptions or methods that are alien and counterintuitive to their consumers may deny them participation in the policy-making process, rendering them ineffective and irrelevant.

The fact that consumers are free to ignore intelligence assessments produces the paradoxical result of greater concern for relevance on the part of independent intelligence analysts, because there is little institutional or organizational demand for their work to be considered.  The result is that formal methods have a difficult time growing roots in the intelligence community compared with their policy analysis cousins, due to the need to begin analysis from the perspective and interests of consumers, which may be riddled with internal contradictions, bias, wishful assumptions, and strongly held philosophical or ideological beliefs and unspoken objectives (including domestic political aims that are out of bounds for analysts to address).  Instead, intelligence analysts appeared to start their analysis from a subjective orientation, first and foremost focused on revealing the implications of specific assumptions held by (or believed to be held by) consumers, and eventually expanding to incorporate an increasingly broad range of assumptions and views once they have shown they are working in good faith, in order to provide a more complete analysis.

From my perspective, it became clear that there are a series of "difficult conversations" that need to be held among analysts, managers, and policy-makers regarding many of the meta-level features of analysis and decision-making.  For example, terms like "scientific" and "objective" are loaded and can be wielded as weapons, as can "validation" and "prediction."  What seems clear is that there is a need to discuss precisely what it means to build a model, whether deductively or inductively, and what it means to use a formal model in analytic processes.  There is often an intellectually lazy assumption lurking in the background of discussions that suggests models, as artifacts of data and theory, are somehow objective, while human analysts and decision-makers are biased.  Yet models are really products of human beliefs and experience, externalized and frozen in equations, computer programs, graphics, tables, physical replications, and so on.  Thus a model embodies assumptions about the accuracy and relevance of the data used in it, or the quality and veracity of a particular behavioral rule, functional form, and so forth.  Indeed, while the conferences were quite interested in quantitative analysis, it seemed to me that quantitative tools and methods should be used to provide context, highlighting the aspects of problems where important and relevant insights can be seen that have qualitative impact.  This means moving away from the subtleties of optimization (min/max/equilibrium outcomes) that depend on achieving the most desirable outcome from the perspective of a given model, and extending the search across alternative models, with an emphasis on identifying the tradeoff space associated with alternative frameworks or analytic contexts, accepting that no single frame dominates across organizations, the interagency, or even coalitions, alliances, and the international community.
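The shift from optimizing within one model to searching the tradeoff space across models can be sketched in a few lines.  The example below is purely illustrative, not any particular method from the conferences: the three rival "models" and their payoff numbers are invented, and each one names a different optimum for its preferred frame.  A simple minimax-regret rule then favors the option that is least bad across all of the frames, which is one common way of operationalizing robustness when no single frame dominates.

```python
# Illustrative robustness sketch: compare policy options across rival models
# rather than optimizing within a single one.  All models and scores are
# hypothetical, chosen only to show that each frame prefers a different option.

models = {
    "rational_actor":    {"deter": 0.9, "engage": 0.4, "hedge": 0.6},
    "bureaucratic":      {"deter": 0.3, "engage": 0.7, "hedge": 0.6},
    "domestic_politics": {"deter": 0.2, "engage": 0.5, "hedge": 0.7},
}
options = ["deter", "engage", "hedge"]

def best_in_model(scores):
    """The optimum from the perspective of a single model."""
    return max(scores, key=scores.get)

def minimax_regret(models, options):
    """Pick the option whose worst-case regret across all models is smallest."""
    regret = {}
    for opt in options:
        worst = 0.0
        for scores in models.values():
            best = max(scores.values())          # the model's own optimum
            worst = max(worst, best - scores[opt])  # regret of opt in this frame
        regret[opt] = worst
    return min(regret, key=regret.get), regret

for name, scores in models.items():
    print(f"{name} prefers: {best_in_model(scores)}")

robust, regrets = minimax_regret(models, options)
print(f"robust choice: {robust}  (regrets: {regrets})")
```

With these invented numbers each model prefers a different option, while the minimax-regret choice is "hedge," an option that only one frame ranks first.  The point is not the arithmetic but the shape of the output: the interesting product is the tradeoff structure across frames, not a single optimized answer.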

Although calls for robustness have been made in the past, it is increasingly clear, and recognized by practitioners, that many of the assumptions that militate against its adoption are deeply embedded in the culture of operations research and systems analysis, whose heavy emphasis on microeconomic, rational decision-making, the conversion of uncertainty into risk, and a peculiar view and treatment of history and science may be working against the current needs of the community.  I found the references to Karl Popper and his argument that all science is hypothesis testing amusing, given that Popper himself eventually backed away from this criterion, realizing that much of science did not, and could not, fit so strict a standard.  Indeed, I have a suspicion that if all analysis were conducted in accordance with the dominant interpretation of science espoused by part of the community, then no work could be done at all; it would simply be impossible to proceed in strict accordance with the ideal standard.  As I read more of the philosophy of science, I am struck by how much Popper has receded in influence, particularly in the social sciences, in favor of more forgiving standards or definitions of what it means to be "scientific."

Increasingly, national security problems cannot be constrained by the assumptions that were acceptable in the past, and as those assumptions are challenged, the tools and methods that depend on them are proving harder to apply absent significant caveats.  None of this should doom quantitative analysis or formal modeling.  Rather, it should surround such analyses with accompanying considerations of how problems are structured or framed, in order to determine the epistemological limits of what can be concluded from a single formal model, an ensemble of many models, or an entire collection of mixed-methods studies that collectively provide insight.  I think we are unlikely to see research or analysis from the policy or intelligence communities that can firmly rely on a single approach, given the complexity of the problems the national security community faces, the many alternative ways of reducing or simplifying problems analytically, and the range of initial assumptions and expectations of decision-makers that shape their receptivity to different approaches and the sequence by which new, often challenging, perspectives are arrived at and presented to them.