What are the highest priorities in building intellectual infrastructure to understand & design Complex Human Systems (CHS)? The case for Multi-Agent Based Simulation

Importance of Internet as source of global awareness

The Internet offers the opportunity to synoptically monitor R&D progress in the analysis & synthesis of information-driven, human-controlled systems (defined here as "Complex Human Systems") in the open world (e.g., the G7 nations plus Asia, Australia & New Zealand). In a RAND paper from 1968, now considered a classic of Operations Research, Albert Wohlstetter described a form of analysis that should be developed for the study of what he called "Opposed Systems." He wrote, "I shall use the phrase 'opposed-systems design' to name a kind of study that attempts to discern and answer questions affecting policy -- specifically affecting a choice of ends and of means to accomplish ends that stand a good chance of being opposed by other governments. The ends of any governments are multiple and only partially incompatible, and of course such conflicts may be resolved without fighting. A peaceful resolution may depend in part on what the risks of combat are with those of other governments -- even very hostile ones. In any case the conflict of aims raises the possibility of combat: and a major part of these studies is concerned therefore with the likelihood and the likely outcomes of such combat. In fact, they grew out of operational research as it had been practiced in World War II."

It is not too great an extrapolation to argue that Wohlstetter was proposing to analyze social conflict in a specific space-time context as a macro-system to be modeled as a distributed control problem, with the complexity of frictions among players on the same team as well as the struggle for control among the various sides of the primary conflict. The analytic problem domain includes the primary variables that determine how the "Opposed System" can converge to different ensemble behavior, depending on strategies for military conflict and negotiation. Then, using this understanding, statesmen and leaders of industry can design policies & strategies that most closely approximate the preferable "Opposed System" behavior or, in the language of this paper, the "Complex Human Systems" behavior.

Over 10 yrs personal experience in monitoring via Internet

While the initial work identified as complexity science began in the US, mostly at Los Alamos and the Santa Fe Institute, by picking up earlier threads from von Neumann and Ulam, in the last six years the use of simulation to analyze and control vehicular traffic and other infrastructure networks controlled by humans has advanced at a faster rate in the EU than in the US. The ETH in Switzerland, several universities each in the UK, Germany, France & Italy, plus the EU-wide GIACS (General Integration of the Applications of Complexity in Science) have a record of significant papers on agent-based simulation and other analysis of network systems. Another metric is participation in the simulation track of RoboCup Soccer, where US participation (mostly from CMU) has declined and winning teams now come from the EU, Japan, or even Iran, which fielded about 30 teams in the 2005 competition.

Chart summarizing recent assessment of methods for understanding Complex Systems

While the categorical groups have some fuzzy boundaries, it is useful for discussion to group modeling technologies into the eight communities shown in the chart. The scope of each group is defined by the subordinate technologies or concepts listed under each group heading, and the bold blue font indicates those areas where, to this observer, progress has been most productive in the last five years.


The dark arrows indicate where some interaction or overlap among communities is already occurring, even though in many cases this interaction is generated by only one or two individuals. The predominant information flow from the more theoretical methods to models applied to real-world complex-systems problems is as expected. The dotted red arrows indicate this observer's assessment of the most important missing or weak links, where further work would have the most beneficial effect. Two subareas, "multi-scale scenario design" and "design for emergence," that seem highly important and interrelated have no noticeable presence on the Internet beyond occasional mention.

Examples of scenarios with multi-scale complexity & interactions

The major bottleneck in using multi-agent based simulation for exploratory analysis and for understanding emergent behavior, whether as unintended consequences or as desired synergy, is the time spent constructing scenarios of sufficient complexity: enough for surprising and useful results, but not so complex that the scenario becomes confusing or generates "noise in the analysis." Most current scenarios have one, possibly two, scales of interaction, and most interaction is within a scale rather than the more interesting interactions from larger to smaller scale, or the integration of smaller-scale effects to cause an effect at the larger scale.

Three is a useful number of scales to aggregate and play in a simulation. One can think of them as global, national, or strategic; regional, community, or theater; and local, personal, or tactical. Each scale exists within a unique domain of time and space: say, 1 to 30 min and 0.01 to 100 km for tactical interactions; 0.5 to 12 hrs and 100 to 1000 km for regional; and 0.5 to 5 days and 1000 km to GEO for strategic. However, these quantitative domains can vary as a function of the analysis problem.
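
As a concrete (and purely illustrative) encoding of these domains, the sketch below defines the three scales as data. The class name Scale, the field names, and the specific bounds are assumptions taken from the ranges above and would be adjusted per analysis problem.

    from dataclasses import dataclass

    @dataclass
    class Scale:
        """One aggregation level of a multi-scale scenario (hypothetical)."""
        name: str
        t_min_hr: float    # shortest interaction time at this scale (hours)
        t_max_hr: float    # longest interaction time (hours)
        d_min_km: float    # smallest spatial extent (km)
        d_max_km: float    # largest spatial extent (km)

    # Illustrative domains from the text; vary per analysis problem.
    TACTICAL  = Scale("tactical",  1 / 60, 0.5,  0.01,  100)
    REGIONAL  = Scale("regional",  0.5,    12,   100,   1_000)
    STRATEGIC = Scale("strategic", 12,     120,  1_000, 36_000)  # GEO ~ 36,000 km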

In competitive or conflict relationships, the decisions made at the larger scale are intended to constrain/inhibit or enable the smaller-scale interactions. In the case of downward constraints, the time lag for implementation is a function of the C2 and operational tempo of the larger-scale agent.

An intentional agent at the smallest tactical or individual level may perpetrate actions that are intended to have their major effect at the strategic scale. The two WTC attacks, eight years apart, illustrate the difference: the first tactical-scale event had only a regional effect, the second a global effect. More commonly, it takes an integration of smaller-scale effects within some moving time window to cause an effect at a larger scale. In addition to effects between different scales, there are possibilities of effects between two players within the same scale. Both the inter- and intra-scale effects can be described within the DoD paradigm of "Effects Based Operations." Some examples of these varieties are given below (S = strategic, Th = theater, Ta = tactical), followed by a sketch of the moving-window integration after the list.

  • S → Ta

--Constrain movement & check identities at borders or checkpoints on roads

--Media reports that inhibit or encourage individual violence in public

  • S → Th

--Failure of a leader’s action encourages factions or coup attempts

--National oppression of ethnic or religious groups generates resistance

  • S → S

--National leader stimulates change in leader of rival nation regarding policy risk or priority of resource allocation

--Nations negotiate for trade leverage (OPEC) or economic embargo (UN towards Saddam)

  • Th → Ta

--Affiliation group leaders incite followers to riot or attack other groups

--Success of the black-market economy increases the size and number of gangs

--Coordinated attacks on utility networks increase disrespect for the state

  • Ta → Th

--Siphoning off resources (oil) from the state increases the size of the black market

  • Ta → Ta

--Witnesses to violence undergo an agent-state change from a policy of avoiding the enemy to one of revenge

  • Ta → S

--Public violence triggers tighter government movement constraints & violence toward the public in the streets
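
As promised above, here is a minimal sketch of the moving-window integration by which tactical events accumulate into a larger-scale effect. The window length, threshold, and event format are hypothetical placeholders, not calibrated values.

    from collections import deque

    def escalation_times(events, window_hr=72.0, threshold=10.0):
        """Times at which tactical impact, summed over a sliding time window,
        crosses a (hypothetical) larger-scale threshold.
        events: (time_hr, impact) tuples sorted by time."""
        window, total, triggers = deque(), 0.0, []
        for t, impact in events:
            window.append((t, impact))
            total += impact
            while window and window[0][0] < t - window_hr:  # expire old events
                total -= window.popleft()[1]
            if total >= threshold:
                triggers.append(t)   # integrated effect reaches the larger scale
                window.clear()
                total = 0.0
        return triggers

    # Isolated incidents stay tactical; a concentrated burst escalates.
    print(escalation_times([(0, 3), (100, 3), (200, 4), (210, 4), (215, 4)]))  # [215]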

Second mover advantage

The network of complexity-science communities indicates some of the pioneering leaders, who often achieve their leadership by means of focus and drive. There is also a need for a type of early adopter who can survey a broader space of the intellectual marketplace and provide synergistic integration of concepts from non-communicating research teams. This workshop is intended to facilitate this kind of activity, as well as affording some of the leaders an opportunity to interact outside of their normal affiliation group (similar to the charter of the Kavli Institute for Theoretical Physics at UCSB or the purpose of the Dagstuhl Seminars).

Building intellectual capital & infrastructure for architecture design involves collaboration & integration of "best of breed" concepts and a language that gains dominant "mindshare" of the technical community (e.g., why we use Leibniz notation for calculus rather than Newton's, or the VHS format for videotape rather than Betamax).

Why Is Agent-Oriented M&S Needed for the Design of Complex Human Systems?

The a priori argument—it’s required for the conceptual model

--Whenever a complex system involves "humans-in-the-loop" for C2 of operations or processes (i.e., the target system is a CHS & includes intention, goal conflicts, human/soft factors & cognitive limitations)

--In order to model autocatalytic or control subsystems with discrete, multi-modal adaptive responses to the environment or social context (a minimal agent sketch follows this list)

--In such cases equation-based models do not pass the test of “face validation”
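
The sketch promised above: a minimal agent whose behavior is a discrete mode rather than a continuous variable. The mode names and switching rule are hypothetical, echoing the Ta → Ta example earlier; the point is that the all-or-nothing switch has no natural averaged, equation-based form.

    import random

    class ModalAgent:
        """Agent with discrete, multi-modal adaptive responses (illustrative)."""
        def __init__(self):
            self.mode = "avoid_enemy"            # initial behavioral policy

        def observe(self, event):
            # Witnessing violence flips the policy discretely and irreversibly.
            if event == "violence_witnessed" and self.mode == "avoid_enemy":
                self.mode = "revenge"

        def act(self):
            return {"avoid_enemy": "stay_home", "revenge": "join_attack"}[self.mode]

    population = [ModalAgent() for _ in range(1000)]
    for agent in population:
        if random.random() < 0.05:               # 5% of agents witness violence
            agent.observe("violence_witnessed")
    print(sum(a.mode == "revenge" for a in population), "agents switched mode")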

The a posteriori argument—case histories of successful CHS design impact

--Communications & computer network management via distributed control

--Designing protocols for use of antibiotics to prevent pandemics

Why Are Discrete Agents & Autocatalytic Behavior Needed? Ref:

“….The discrete character of the individuals turns out to be crucial for the macroscopic behavior of complex systems….the slightest microscopic granularity insures the emergence of … localized macroscopic collective objects with adaptive properties….

The exact mechanism by which this happens depends crucially on the other unifying concept appearing ubiquitously in complex systems: auto-catalyticity. The dynamics of a quantity is said auto-catalytic if the time variations of that quantity are proportional (via stochastic factors) to its current value. It turns out that as a rule, the "simple" objects responsible for the emergence of most of the complex collective objects in nature have auto-catalytic properties. Autocatalyticity insures that the behaviour of the entire system is dominated by the elements with the highest auto-catalytic growth rate rather than by the typical or average element. This has profound implications on the very concept of scientific explanation: the fact that the dynamics is dominated by the exceptional individual/events (that enjoyed fortuitously the fastest stochastic growth factor) invalidates "reasonable" arguments based on the … 'average' or 'representative' case. This in turn generates the conceptual gap separating...disciplines: in conditions in which only a few exceptional individuals dominate… in the emergence of nuclei from nucleons, molecules from atoms, DNA from simple molecules, humans from apes, there are always the un-typical cases…that carry the day.

This is the challenge of complexity: understanding the basic objects (e.g. cells) in one science (biology) in terms of the collective dynamics of objects (molecules) belonging to another science (chemistry). Moreover the mandate of complexity is to uncover the determinism that hides behind the systematic and fateful recurrence in various sciences of seemingly fortuitous autocatalytic accidents. The conceptual and practical rewards for such a trans-disciplinary effort are inestimable."
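
The dominance of the fastest auto-catalytic growers over the average element can be seen in a few lines of simulation. This is a generic multiplicative-growth sketch, not the quoted author's model; all parameters are arbitrary.

    import random

    random.seed(1)
    N, T = 1000, 200
    x = [1.0] * N
    rates = [random.uniform(0.98, 1.04) for _ in range(N)]  # element growth rates

    for _ in range(T):
        # Auto-catalytic: each increment is proportional to the current value,
        # via a stochastic factor.
        x = [xi * ri * random.uniform(0.9, 1.1) for xi, ri in zip(x, rates)]

    share = max(x) / sum(x)
    print(f"single largest element holds {share:.1%} of the total")
    # Typically a handful of the fastest growers dominate the sum, so the
    # 'average' element says little about the macroscopic behavior.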

Recognizing & Overcoming Cultural & Institutional Barriers

--Will Rogers: “It’s not what people don’t know that hurts them, it’s what they know that ain’t so.”

--Also expresses personal experience that the education of government & industrial managers deserves higher priority: MS&A as a technology still does not receive enough respect as a high-leverage, moderate-cost factor compared with spending on inevitable overruns and "forensic engineering"

Matching Analysis Domains to Types of M&S

“For systems between the small and large number extremes, there is an essential failure of the two classical methods. On one hand, the Square Law of Computation says that we cannot solve medium number systems by analysis, [since number of equations expressing interactions scales up as the square of number of objects], while on the other hand, the Square Root of N Law [deviation from expected value being proportional to square root of number of objects] warns us not to expect too much from averages. By combining these two laws, then we get a third--the Law of Medium Numbers: For medium number systems, we can expect that large fluctuations, irregularities, and discrepancy with any theory will occur more or less regularly….As with most general systems laws, we find a form…in folklore. Translated into our daily experience…the Law of Medium Numbers becomes Murphy’s Law: Anything that can happen, will happen.”

G.M. Weinberg, An Introduction to General Systems Thinking, John Wiley & Sons, New York, 1975, pp. 19-20.
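
The bracketed glosses can be made concrete: pairwise interactions require on the order of N(N-1)/2 equations (Square Law of Computation), while the relative fluctuation of an N-element aggregate scales as 1/sqrt(N) (Square Root of N Law). The short calculation below, a sketch under those two assumptions, shows the squeeze on medium-number systems.

    import math

    print(f"{'N':>12} {'pairwise equations':>20} {'relative fluctuation':>22}")
    for N in (3, 30, 300, 3_000, 3_000_000, 3_000_000_000):
        eqns = N * (N - 1) // 2       # Square Law of Computation
        fluct = 1 / math.sqrt(N)      # Square Root of N Law
        print(f"{N:>12} {eqns:>20} {fluct:>22.6f}")
    # Small N: few equations, analysis works. Huge N: fluctuations vanish,
    # statistics works. Medium N: too many equations to solve, fluctuations
    # too large to average away -- the domain of organized complexity.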

The recent development of cheap computational power and the extension of object-technology software into representing agents with reactive/adaptive behavior provide analysts with a virtual laboratory for understanding the performance of information systems where human decision makers are part of a system operating to achieve some goal in an adverse environment. There is no longer a need to "make do" by applying M&S applications from the domains of "organized simplicity" or "unorganized complexity" to the arena of Complex Adaptive Systems. Algorithmic models (using Wegner's definition of a Turing-machine program) implemented in procedural code are poor at simulating "Murphy's Law" outcomes in the mesoscale regime of organized complexity.

The inability to distinguish between analytic domains and the technologies of M&S, which is pervasive among government and industry managers, results in a major waste of scarce analysis and software-simulation resources.


More Specifics on Limits of Equations to Model CHS

Edmund Chattoe, "Why Are We Simulating Anyway? Some Answers from Economics," ESRC Project L122-251-013, November 1995.

“...there are also a number of quite concrete limitations to mathematical representation….

The difficulties of such a representation fall into two complementary classes: those caused by an unrealistic treatment of time and those resulting from an attempt to represent multiple agency as an ordered sequence of individual actions. These difficulties are complementary because the unrealistic treatment of time is both a consequence and a partial cause of the unrealistic treatment of multiple agency….

The modeler who arranges an equation system to guarantee its solubility does so because he or she must solve it sequentially; it is not feasible for certain processes to be carried out "in the background" or for the actions of several agents to be revised at once. Thus only one agent can act at a time in such models. Everyone else must freeze while this action is taking place. The richness of the environment is thus restricted to suit the attention of the modeler. This is plainly unrealistic."
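
Chattoe's point can be made concrete with a toy two-agent imitation rule: the same rule yields different macro outcomes depending on whether agents act one at a time (everyone else frozen) or revise simultaneously. The rule itself is an arbitrary illustration, not drawn from Chattoe.

    def step_sequential(state):
        """Agents act one at a time; each sees the other's already-updated move."""
        s = list(state)
        for i in range(len(s)):
            s[i] = s[1 - i]        # imitate what the other is doing *now*
        return tuple(s)

    def step_synchronous(state):
        """All agents revise at once; each sees only the previous state."""
        return tuple(state[1 - i] for i in range(len(state)))

    state = ("A", "B")
    print("sequential: ", step_sequential(state))    # ('B', 'B') -- converges
    print("synchronous:", step_synchronous(state))   # ('B', 'A') -- oscillates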

Although Chattoe is speaking here of modeling the marketplace behavior of human actors in economics, the same caveats clearly apply to modeling military systems in combat, where human actors are part of the "system of systems." The "real world" of combat involves behavior that is neither deterministic nor completely random. The complex macrosystem encompassing all parties to the conflict has space-time regions where behavior on the part of one adversary or the other may be intermittently synchronized to some degree. This local correlation of actions with some set battle plan depends on whether the "fog of war" is dominant or the C4ISR systems allow a critical number of C2 agents to penetrate that fog. The flux of the information states of the various distributed C2 agents on either side, and their reaction to that information, is clearly a phenomenon that currently has no accepted System Dynamics form which can transform the micro-processes of combat interactions into a macro-level MOE meaningful to combat commanders.

If we look at the natural world as a massively parallel processing computer, with distributed control loops for each conscious actor and no guaranteed central control of behavior on any side of the modeled conflict, then the task of our computer simulation is to represent similar amounts of fluctuating coordination and chaotic activity over the extent of the battle space, based on flows of perceived information that is error-ridden and incomplete. Our computer code should produce the same complex distributions of possible outcomes as would be found if we could "replay" the natural world enough times to establish the risks and payoffs of different input conditions.
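
In Monte Carlo terms, "replaying" the world means running many stochastic replications and reporting the distribution of outcomes rather than a single point estimate. The engagement model below is a deliberately crude stand-in: all names, the payoff rule, and the parameters are invented for illustration.

    import random
    import statistics

    def run_once(seed, n_agents=50, p_informed=0.6):
        """One replication: the outcome hinges on how many C2 agents penetrate
        the 'fog' (receive usable information) -- a toy nonlinear payoff."""
        rng = random.Random(seed)
        informed = sum(rng.random() < p_informed for _ in range(n_agents))
        # Coordination pays off fully only past a critical mass of informed agents.
        return 1.0 if informed > 0.7 * n_agents else informed / n_agents

    outcomes = [run_once(seed) for seed in range(2000)]
    print("mean outcome:   ", round(statistics.mean(outcomes), 3))
    print("outcome stdev:  ", round(statistics.stdev(outcomes), 3))
    print("P(decisive win):", sum(o == 1.0 for o in outcomes) / len(outcomes))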

Equation-based models used in the domain of CHS are frequently the tools of charlatans

Against Value-at-Risk: Nassim Taleb Replies to Philippe Jorion, 1997.

“…in economics, and the social sciences, engineering has been the science of misplaced and misdirected concreteness. Perhaps old J.M. Keynes had the insight of the problem when he wrote: ‘To convert a model into a quantitative formula is to destroy its usefulness as an instrument of thought.’

….Marshall, Allais and Coase used the term charlatanism to describe the concealment of a poor understanding of economics with mathematical smoke. Philosophers of science used the designation charlatanism in the context of a theory that does not lend itself to falsification (Popper) or gradual corroboration (the Bayesians)."

Taleb, author of "Fooled by Randomness," is not shy about holding analysts to a standard of professional accountability, as he notes the damaging effect of those in the field of financial risk analysis who, either negligently or fraudulently, propagate mistaken concepts. Unfortunately, such people also exist, perhaps in even greater numbers, in the defense industrial sector. Bad software models still have a large market share/mindshare and present significant inertia against adopting the recommendations of several science advisory board studies and MORS workshops. The systems-analyst community, and especially those knowledgeable about complexity science, has the responsibility to spend more effort identifying to management the damage such flawed modeling is causing to the reputation of M&S technology.

Interactive models more powerful than algorithmic models

Peter Wegner, OOPSLA'95 Tutorial

“The irreducibility of object behavior to that of algorithms has radical consequences for both the theory and the practice of computing….

The negative result that interaction cannot be modeled by algorithms leads to positive principles of interactive modeling by interface constraints that support partial descriptions of interactive systems whose complete behavior is inherently unspecifiable. The unspecifiability of complete behavior for interactive systems is a computational analog of Goedel incompleteness for the integers….