ABSTRACT

One issue in SoS work is how to approach modeling and analysis. This paper draws upon experiences from another domain, “defense planning” or “force planning,” for lessons learned that carry over into the systems engineering of SoS. The bottom line is that we need to resurrect the art, science, and prestige of higher-level design while employing modern methods of modeling and analysis, such as exploratory analysis and multi-resolution modeling. Although doing so may seem like common sense to some, the psychological and organizational obstacles are high. Leadership by the senior systems engineers at this conference could help a great deal.

1. Introduction

1.1 Prefacing Comments

In the spirit of this interdisciplinary conference, in this paper I draw upon past research in another domain for methods that can be used in the study of systems of systems (SoS). That “other domain” is defense planning, or what is sometimes called force planning. Ironically, some of the research on which I draw was inspired by fine examples from engineering. Thus, in a sense I am coming full circle.

I should acknowledge some key assumptions at the outset:

  • Designing and building SoS is different in many respects from normal engineering as it is practiced.
  • The paradigm of complex adaptive systems is valid for addressing many of the most fundamental issues in SoS work.

The paper is written for those who need no further elaboration or justification of the assumptions.

1.2 Classic Defense Planning

1.2.1 Characteristics

Until about a decade ago, many veteran defense planners believed that there was a relatively well-defined methodology to be followed. Major ingredients included: (1) identifying the principal threats, e.g., the USSR; (2) developing planning scenarios for those threats; and (3) analyzing and choosing among alternative ways for the United States to cope affordably.

Figure 1 shows a cartoonish depiction that roughly mirrors analysis of ways to defend Western Europe during the cold war. The top curve shows the assumed buildup of enemy forces versus time. If one believed that the enemy force level should never be more than 50% higher than the friendly force level (a theater-level force ratio of 1.5 to 1), then one could draw a “requirement curve” as shown. In the absence of changes to the defense program, friendly forces would have a severe shortfall in the early portions of the scenario (bottom dashed line). If war actually began during such a period, defeat would be expected. Thus, one would consider alternative defense programs. For the alternative depicted (the higher dashed line), friendly force levels would just barely meet the requirement. This might be accomplished, for example, by buying equipment sets to store near the defense zone and buying aircraft to permit rapid deployment of personnel to “marry up” with their equipment. The cost to the US alone might be substantial, but the program would meet the requirement and all would be well. This, in essence, was the logic that underlay the POMCUS program of the 1970s and 1980s, in which enough equipment sets were to be deployed so as to permit building to ten US divisions in ten days on the NATO central front.

Figure 1—Schematic Version of Point-Scenario Planning During Cold War
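To make the arithmetic of this planning construct concrete, the following is a minimal sketch of the point-scenario calculation; the buildup numbers are invented for illustration, and only the 1.5-to-1 force-ratio threshold comes from the example above.

```python
# Illustrative sketch of cold-war point-scenario force-ratio planning.
# All force numbers are invented; 1.5 is the example threshold from the text.
MAX_RATIO = 1.5  # assumed acceptable enemy-to-friendly force ratio

# Assumed force levels (division-equivalents) at days 0, 10, ..., 60.
enemy_buildup     = [20, 40, 60, 80, 90, 95, 100]
friendly_baseline = [10, 15, 25, 40, 55, 65, 70]

for i, (enemy, friendly) in enumerate(zip(enemy_buildup, friendly_baseline)):
    required = enemy / MAX_RATIO          # the "requirement curve" of Figure 1
    shortfall = max(0.0, required - friendly)
    status = "OK" if shortfall == 0 else f"shortfall of {shortfall:.1f}"
    print(f"day {10 * i:2d}: require {required:5.1f}, have {friendly:5.1f} -> {status}")
```

An alternative program (e.g., prepositioned equipment plus airlift, in the spirit of POMCUS) would simply raise the friendly buildup curve until the early-period shortfall disappears.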

This planning construct was well suited to organizational processes and proved politically successful as well. It conveyed a sense that DoD had an understandable and sensible rationale for its defense program. Although people argued about many details, such as the force-ratio requirement, the arguments were rather circumscribed.

Underlying such simple depictions, there were simulation-based analyses showing results of simulated combat day by day. They revealed a myriad of details that had to be addressed to actually achieve the buildups depicted in the simpler model. As a result, there was great effort, e.g., to avoid congestion at individual airports and seaports, to hone and exercise capabilities that might otherwise be merely theoretical, and to improve the likelihood that the United States would actually recognize and act upon warning soon enough to deploy forces rapidly. Further, the C-17 airlifter was designed so as to be highly efficient for reinforcing Western Europe. Thus, there was a great deal of richness involved in translating the simple concept into real-world capability.

1.2.2 Shortcomings

Despite such richness of detail, the concept was simple. In some respects it was positively simplistic. Many observations could be made about this style of planning, but some of its obvious attributes were:

  • Its “mechanical” quality, with no explicit discussion of commanders, troops, morale, strategy, tactics, and so on. War, for the purposes of this type of planning, was treated as simply a matter of relative resources.
  • An emphasis on numbers and data, which were the fodder for both simple modeling and sophisticated simulation.
  • A very deterministic image of war.
  • Minimal discussion of uncertainty more generally. Indeed, although analysts did some excursions, much of the planning was built around “point scenarios.”

1.3 Modern Concepts of Defense Planning

1.3.1 Major Tenets

The shortcomings of the analysis approach described above were recognized and written up twenty years ago (Davis, 1988), and relatively detailed suggestions for improvement were described in the early 1990s (Davis, 1994). The older approach persisted, however, because it had so many practical virtues, many of them organizational and sociological. Even when the cold war ended, DoD initially attempted to extend the same analysis approach, focusing primarily on Iraq and North Korea. In time, however, it was widely recognized that something different was needed. That “something” is often called capabilities-based planning (CBP) (Rumsfeld, 2001; Davis, 2002). As often happens, a good concept can be misinterpreted and misapplied, but the fundamentals of capabilities-based planning are sound and enduring. These include:

  • Confronting the problem of deep and ubiquitous uncertainty
  • Recognizing that the salient uncertainties are both simple (e.g., future enemies) and complex (e.g., how future commanders will choose to employ forces; how erroneous perceptions at the time will cause major errors by friendly and enemy leaders, commanders, and infantry soldiers; how local tribes in an occupied area will or will not unite, and in support of whom and what…)
  • Recognizing that enemy commanders will likely seek to arrange circumstances, strategy, and tactics so as to maximize their odds of success.
  • Worrying about such possibilities and trying to prepare accordingly so as to leave the enemy no good options.

Such elements of CBP are challenging even when resources are lush, but in practice resources are limited and a principle of CBP is accomplishing the above while working within an economic framework.[1] That, of course, implies balancing risks, making a variety of tradeoff judgments, and ultimately making some painful decisions.

All of this may seem very difficult, especially dealing forthrightly with massive uncertainty, but effective planning occurs in all walks of life every day. Planning under uncertainty is entirely feasible.

To the extent that there is a school solution for planning under uncertainty, it is planning for adaptiveness (Davis, 1994a; Davis, 2002). This usage of “adaptiveness” (equivalent to what some mean by “agility”) is shorthand for something multifaceted (Alberts et al., 2003). A longer expression would be:

  • The key to planning under uncertainty is adopting strategies that are flexible, adaptive, and robust (i.e., FAR strategies).

In this more elaborated context, “flexibility” refers to the ability to deal with new missions and challenges, “adaptiveness” refers to the ability to deal with diverse circumstances, and “robustness” refers to the ability to withstand and recover gracefully from adverse shocks. Although the three words are often used synonymously in everyday English, they are used here in a way that exploits some traditional shades of meaning. Others use the words in somewhat different ways, but what matters most is recognizing and covering all three attributes. The attributes overlap, but only to some degree. This emphasis on FAR strategies was accepted by a recent National Academy panel as it recommended the way ahead for DoD modeling, simulation, and analysis (National Research Council, 2006).

1.3.2 Simple Concept, But Major Implications for Analysis

What could be less exceptionable than suggesting FAR strategies? Is this not mere common sense? The answer is no. Knowing to buy insurance and to avoid potentially ruinous bets are learned skills—for both individuals and governments. It is not accidental that the Netherlands plans for once-in-a-century events; the Dutch suffered grievously from past flooding. Whether and what the U.S. government has learned from Hurricane Katrina remains to be seen. Shifting to a happier topic, consider football teams. No professional coach imagines that he can develop a team and hone its skills suitably by focusing on a single image of next year’s championship game with a particular opponent in a particular stadium and set of weather conditions.

All of this has profound implications for analysis. If the objective is to find good FAR strategies, that is very different from finding the best strategy for the most likely case. If uncertainties were modest, this would not be so, but in strategic planning uncertainties are commonly ubiquitous, large, and “deep.” The word “deep” here refers to the nature, as well as the magnitude, of uncertainty.

Another major implication concerns the types of modeling and simulation (M&S) needed to support analysis of alternative FAR strategies. If the real world is not so neatly mechanical and deterministic as in the earlier discussion, and uncertainties are everywhere, what kind of M&S is needed? And how does one even do analysis?

Exploratory Analysis. The answer, it seems to me, includes prominently “exploratory analysis (EA).” In EA, one examines the goodness of strategy throughout the possibility space. Instead of sensitivity analysis, in which one typically has a best-estimate baseline and considers excursions by varying parameters one at a time, in EA one sees the consequences of all the possible combinations of input values. To be less abstract, suppose that in a defense problem the inputs include warning time, the axis of the enemy’s attack, officers on duty, and the real-world “sharpness” of the troops. One could have a base case and vary each of these parameters separately, but in EA, one would also see cases in which the enemy did everything he possibly could, simultaneously, to maximize advantage. The result might be minimal warning time AND an unexpected axis of attack AND a situation where the defending commander is taking Christmas dinner with some friends, AND the troops are sleepy. Even though that’s a “corner” of the space, it happens to be an important corner.
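As a minimal sketch of the difference between one-at-a-time sensitivity analysis and EA, consider the four inputs above with a toy outcome model; the value sets and the scoring function are invented purely for illustration.

```python
from itertools import product

# Toy outcome model returning a 0..1 "goodness of defense" score.
# The functional form is invented for illustration only.
def outcome(warning_days, expected_axis, commander_present, troops_sharp):
    score = 0.2 * min(warning_days, 3) / 3
    score += 0.3 if expected_axis else 0.0
    score += 0.25 if commander_present else 0.0
    score += 0.25 if troops_sharp else 0.0
    return score

cases = {
    "warning_days":      [0, 1, 3],
    "expected_axis":     [True, False],
    "commander_present": [True, False],
    "troops_sharp":      [True, False],
}

# Exploratory analysis: evaluate EVERY combination and inspect the whole
# outcome landscape, rather than excursions from a best-estimate base case.
results = {combo: outcome(*combo) for combo in product(*cases.values())}

worst = min(results, key=results.get)
print("worst corner:", dict(zip(cases, worst)), "score:", results[worst])
```

The worst “corner” (minimal warning, unexpected axis of attack, absent commander, sleepy troops) falls out immediately, whereas one-at-a-time excursions from a benign base case would never evaluate it.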

Many people immediately think to use probability distributions for EA. My colleagues and I have usually avoided doing this (except for truly stochastic phenomena such as white noise, and for dealing with large numbers of smallish and apparently uncorrelated factors) because: (1) many of the relevant inputs are correlated, as in the example above (which is far more probable, given that war actually occurs, than some fairly low probability raised to the fourth power); (2) people do not have a stellar record in characterizing probability distributions under deep uncertainty; and (3) we want to retain maximum transparency, knowing why, and for what combinations, results are good or bad. When probability distributions are used, one is integrating over variables, which can then make it difficult to understand outcomes.
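To illustrate point (1) with invented numbers: if each of the four adverse conditions in the example has a marginal probability of 0.1, treating them as independent puts the combination at 0.1 to the fourth power, or 0.0001. But if an intelligent enemy deliberately times and shapes the attack to create all four conditions at once, then, conditioned on war actually occurring, the combination may be roughly as likely as any single condition, i.e., orders of magnitude more probable than the independence calculation suggests.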

Conducting EA by sampling from discrete cases over the entire space might be misinterpreted as assuming that all points are equally probable, but we are merely displaying results as a function of location in the n-dimensional space—deferring the decision about what portions of the space to discard as implausible. I believe that deferring such judgments is generally a good idea.

In any case, RAND has done a great deal of work on EA over the last decade or so, and has published many of the methods and findings. I shall not elaborate further on these matters in this paper.[2]

The Need for Low-Resolution Models. The fundamental problem with EA is the curse of dimensionality. Given a model with thousands of inputs, varying all of them over their uncertainty ranges is beyond the capacity of any current computer—or, more fundamentally, of any analyst to understand. It is therefore exceedingly useful to have simplified models for the exploration phase of analysis. If a model has 5-20 parameters, it is relatively easy to do insightful exploratory analysis with only a desktop computer and commercial programs such as Analytica®, or more specialized display technology such as CARs®.[3]
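The arithmetic is stark. A quick sketch (the parameter counts and the coarse three-value grid are illustrative):

```python
# Full-factorial case counts grow as values ** parameters. Even a coarse
# grid of 3 values per parameter explodes as the parameter count grows.
for n_params in (5, 10, 20, 1000):
    n_cases = 3 ** n_params
    print(f"{n_params:5d} parameters -> about 10^{len(str(n_cases)) - 1} cases")
```

Five parameters mean a few hundred cases; twenty already require sampling or smart displays rather than brute force; a thousand are hopeless for any computer or analyst.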

But the Models Must Be Appropriate. One small catch is that EA is no better than the model used for the analysis. If, for example, the model reflects a mechanistic, deterministic, no-people-to-mess-things-up attitude about phenomenology, then the analysis may well be worthless. Some may recall the movie The China Syndrome of twenty years ago or so, which vividly pointed out the obvious about nuclear power plants: engineers can do a superb job of working out technical fault trees and designing for robustness, but if they or the builders and administrators overlook shortcomings such as human greed, corruption, and sloppiness when bored, everything else may be for naught. In the modern world of defense planning, we must now demand that M&S have within them the capacity to represent all sorts of “soft and slippery” stuff, such as the occasional imperfections of our own commanders and troops, the adaptations of the enemy, the behavior of local tribes, and the emergence of low-tech “asymmetric” tactics.

Again, it may seem obvious that the models used need to be appropriate, but the reality is that legacy DoD M&S have generally not had the features needed. The shortcomings of current DoD M&S are so fundamental that there is need to broaden the concept of M&S to include human-in-the-loop simulation, human gaming, use of experts, use of historical information, and so on—all so as to increase the likelihood that “analysis,” broadly construed, will take into account a sufficiently broad range of possibilities (Davis and Henninger, forthcoming; National Research Council, 2006). Figure 2 summarizes the point that a family of tools should be used, rather than relying entirely upon large simulations.

Figure 2
Relative Merits of Illustrative Items in a Family of Tools

SOURCE: National Research Council (2006); Davis and Henninger (forthcoming).

Even if we restrict ourselves to technical issues, it is often the case that simple models are overly simplistic. In particular, they may do a poor job of representing the aggregate consequences of the details over which they gloss. They may be inappropriately linear; they may assume independent probabilities; they may ignore some factors altogether. Thus, developing good simple models is often not straightforward, but there are some multi-resolution modeling methods to help (Davis and Bigelow, 1998; Davis and Bigelow, 2003).
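One common pattern in multi-resolution work is to calibrate a simple model’s aggregate parameters against runs of a more detailed model, rather than guessing them. The following is a hedged sketch with invented models, not the Davis-Bigelow method itself:

```python
import random

# Stand-in for an expensive, detailed simulation (invented for illustration):
# forces delivered as a function of warning time and lift capacity, with a
# nonlinear port-congestion effect and run-to-run noise.
def detailed_model(warning_days, lift_capacity):
    congestion = 1.0 / (1.0 + 0.3 * lift_capacity)
    noise = random.gauss(0.0, 0.5)
    return warning_days * lift_capacity * (1.0 - congestion) + noise

# Low-resolution model: delivered ~ k * warning * lift. The single aggregate
# parameter k is fit to detailed-model runs so that the glossed-over detail
# is captured, at least in the aggregate.
runs = [(random.uniform(1, 10), random.uniform(1, 5)) for _ in range(200)]
xs = [w * c for w, c in runs]
ys = [detailed_model(w, c) for w, c in runs]
k = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)  # least squares

def simple_model(warning_days, lift_capacity):
    return k * warning_days * lift_capacity

print(f"calibrated k = {k:.3f}")
```

Checking the residuals of such a fit is what reveals whether the simple form is inappropriately linear or is ignoring an important factor altogether.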

Structural Uncertainties. Another reason that developing the right models is not so straightforward is “structural uncertainty.” In many cases we do not even know with confidence the direction of arrows in the causal diagrams (e.g., influence diagrams) that many of us favor in modeling complex systems.[4] Further, some of the most important factors affecting outcomes are often “exogenous,” i.e., outside the explicit model itself. This is not simply for lack of imagination by modelers, but a reflection of reality. Things happen. Situations change. People do the unexpected. Small events can have consequences grossly out of proportion to their objective significance. And, of course, even some of the factors that “ought” to be endogenous and well understood may not even be recognized at a given time.

All of these matters are examples of structural uncertainty. No one knows how to deal well with them, and some spinoff of Gödel’s theorem would probably prove that it is impossible to do so perfectly. Still, experience tells us that doing something is often better than doing nothing. My colleagues and I have made some advances in recent work on “massive scenario generation” (MSG). In this exploratory-analysis work we allowed for “randomness” in such things as the directionality and magnitude of causality, and for potential exogenous events with large consequences. We also modeled at a high enough level of abstraction that a vast range of detailed factors, which we could not hope to capture individually, were plausibly captured at least in the aggregate. Although very much exploratory research, the experience was both sobering and encouraging (Davis et al., forthcoming).
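A minimal sketch in the spirit of MSG, though not a reproduction of the actual work: randomize not only parameter values but also the sign and magnitude of a causal link, inject rare high-consequence exogenous shocks, and examine the distribution of outcomes across many generated scenarios. All quantities are invented for illustration.

```python
import random

def generate_scenario():
    # Structural uncertainty: even the sign of a causal link is uncertain.
    link_sign = random.choice([+1, -1])           # direction of causality
    link_strength = random.uniform(0.1, 1.0)      # magnitude of causality
    shock = random.random() < 0.05                # rare exogenous event...
    shock_size = random.uniform(5, 20) if shock else 0.0  # ...large effect
    return link_sign, link_strength, shock_size

def run(link_sign, link_strength, shock_size, pressure=3.0):
    # Abstract outcome: "stability" responds to pressure through the
    # uncertain causal link and can be swamped by an exogenous shock.
    return 10.0 + link_sign * link_strength * pressure - shock_size

outcomes = [run(*generate_scenario()) for _ in range(10_000)]
bad = sum(1 for o in outcomes if o < 5.0)
print(f"{bad / len(outcomes):.1%} of generated scenarios end badly")
```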

3. Systems of Systems and the CAS Perspective

It is probably becoming apparent how all this relates to complex adaptive systems and the design and development of systems of systems—and in more than a metaphorical sense. Let me move rather quickly to what I see as the major implications.

3.1 Principles

When developing capabilities, including relevant hardware and software, to deal with complex adaptive systems, it is essential in my view to begin with attitudes markedly different from what characterizes a good deal of modern-day engineering. From a strategic perspective within a development, this attitude includes:

  • Approaching the problem system on its own terms, rather than attempting to impose an engineer’s concepts (e.g., resolve to deal with the unpleasant and squishy “people problems” and the near certainty of unexpected developments in the world)
  • Looking to broaden the system construct rather than to narrow the range of factors considered, the case space, and so on.
  • Assuming from the outset that the SoS will probably include people and that this is highly desirable because people are sometimes more creative, capable, or wise than our best machines (the opposite is also true). Thus, the challenge becomes one of designing the best man-machine systems, rather than one of designing the best closed systems.
  • Recognizing that even the smartest of clients is unlikely to have a very good sense of technical “requirements” at the outset, nor even of what will prove to be needed functionality. Thus, the client’s desires and wisdom should be accepted with great interest and attention as inputs, but “requirements” should not be established early.

Although the strategic view should arguably be expansive, patient, curious, and open, it will also be true that progress will occur only when finite interim problems are taken on, defined rigorously, and worked. To a substantial degree all of us learn from doing and seeing. This said, it becomes very important to have a sufficiently good concept of the larger system problem so as to define interim steps that will truly be in the right direction. Yes, chasing some red herrings will probably prove necessary and ultimately useful, but interim approaches that are inherently fatally flawed with respect to the larger “real” problems should typically be avoided. When engineers began developing what became stealthy aircraft in the 1970s, they were wise enough to take an approach that addressed not just radar cross section, but all aspects of signature, and that also anticipated countermeasures.