Dimensions of Complexity other than “Complexity”

Mark W. Maier, Ph.D.

Distinguished Engineer

The Aerospace Corporation

Abstract

A predicate of this workshop is that something called “complexity” exists, and that complexity matters. It is presumed that complexity matters in the sense that systems that are relatively complex (in themselves, or to develop) require different principles or approaches than those that are relatively simple. This paper does not dispute that contention; indeed it accepts it, but argues that the vision is too narrow. Complexity, as it has typically been defined in the “Complex Systems” literature, is only one form of complexity that matters. A variety of other attributes matter as well, in the sense that they drive practices. This paper introduces a variety of potential complexity factors and situates the more common notions of the “Complex Systems” community among them.

Discussion

The central thesis of this paper is that seeking to discover a definition for, and processes adapted to, “complexity” in the sense of this community is a limiting and inappropriate activity. Systems differ on many dimensions. While the particular set of attributes most generally ascribed to “complex systems” today is probably a valid category, and important, restricting one’s attention in that way is limiting. It is more useful, and more general, to consider broadly what makes system development efforts (not just systems themselves) complex. It is most useful to understand how system development efforts can be categorized, in the sense that developments sharing a common category should share common best practices, and those best practices should be distinct from developments in other categories.

For the discussion to follow, we must carefully distinguish a number of concepts. We take up “system-of-interest,” “development effort,” and “system category.” For the purposes of this paper, the “system-of-interest” is whatever it is that we are interested in developing. It may be discrete and readily identifiable (e.g. an iPod), or highly distributed with fuzzy scope (e.g. the Internet). The system-of-interest is likewise distinguished from its surrounding supporting systems, although clearly associated with them, in the style of ISO 15288.

The “development effort” is whatever deliberate engineering and management effort is undertaken to bring the system-of-interest into existence. Again, it may be that the development effort is well defined and carried out by a single organization, or it may be of fuzzy scope and cover many loosely coordinated organizations. In what follows we are primarily interested in the complexity of the development effort, not of the system-of-interest. This is because engineering is associated with the development effort, not the system itself.

A “system category” is some collection of attributes associated with distinctive best practices in development efforts. Systems within a category share common best practices in development, and those best practices are distinguished from those associated with other system categories.

Table 1 provides a list of attributes and ranges of values running from “simple” to “complex”; a brief code sketch of the same structure follows the table.

Table 1: Factors logically associated with the “complexity” of system development efforts.
(Each row lists an attribute's values in order, from simplest on the left to most complex on the right.)

Sponsors: One, w/ $ | Several, w/ $ | One, w/o $ | Many, w/o $
Users: Same as sponsors | Aligned with sponsor | Distinct from sponsor | Unknown
Technology: Low | Medium | High | Super-high
Feasibility: Easy | Barely | No
Control: Centralized | Distributed | Virtual
Situation-Objectives: Tame | Discoverable | Ill-structured | Wicked
Quality: Measurable | Semi-measurable | One-shot and unstable
Program Scope: <$1 million | $10s of millions | $100s of millions | >$1 billion
Organizational Maturity: High | Inside low, outside high | First of kind
Technical Scope: Discrete product | Product + Delivery Enterprise | Products or Product-line + Delivery Enterprise | Assemblage of products and enterprises
Operational Adaptation: Stable | User Adaptive | Competitor Adaptive | Full Scope Adaptation
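As a concrete rendering of the table's structure, the following is a minimal sketch in Python; the dictionary layout and the normalized-position scoring are illustrative conventions assumed here, not part of the paper. It simply treats each attribute as an ordered scale running from simplest to most complex.

```python
# Minimal sketch (illustrative only): Table 1 rendered as ordered value scales.
# Each attribute maps to a tuple of values ordered from simplest (index 0) to
# most complex (last index). Names and orderings mirror Table 1; the scoring
# convention below is an assumed illustration, not the author's method.

ATTRIBUTE_SCALES = {
    "Sponsors": ("One, w/ $", "Several, w/ $", "One, w/o $", "Many, w/o $"),
    "Users": ("Same as sponsors", "Aligned with sponsor",
              "Distinct from sponsor", "Unknown"),
    "Technology": ("Low", "Medium", "High", "Super-high"),
    "Feasibility": ("Easy", "Barely", "No"),
    "Control": ("Centralized", "Distributed", "Virtual"),
    "Situation-Objectives": ("Tame", "Discoverable", "Ill-structured", "Wicked"),
    "Quality": ("Measurable", "Semi-measurable", "One-shot and unstable"),
    "Program Scope": ("<$1 million", "$10s of millions",
                      "$100s of millions", ">$1 billion"),
    "Organizational Maturity": ("High", "Inside low, outside high", "First of kind"),
    "Technical Scope": ("Discrete product", "Product + Delivery Enterprise",
                        "Products or Product-line + Delivery Enterprise",
                        "Assemblage of products and enterprises"),
    "Operational Adaptation": ("Stable", "User Adaptive", "Competitor Adaptive",
                               "Full Scope Adaptation"),
}

def normalized_position(attribute: str, value: str) -> float:
    """Return where a value sits on its scale: 0.0 = simplest, 1.0 = most complex."""
    scale = ATTRIBUTE_SCALES[attribute]
    return scale.index(value) / (len(scale) - 1)

# Example: a development effort described by one value per attribute.
effort = {"Sponsors": "Several, w/ $", "Technology": "High", "Control": "Distributed"}
for attr, val in effort.items():
    print(f"{attr:25s} {val:20s} position={normalized_position(attr, val):.2f}")
```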

Definitions

The following subsections define the key terms and concepts in Table 1.

Sponsors

The “Sponsors” are the people funding and directing the development of the system-of-interest. In the simplest case there is a single sponsor (either a literal individual or an organization with a single point of view), and the sponsor has all the resources required for developing the system. The next more complex case is where several sponsors exist, and they jointly (but not individually) have the resources necessary to develop the system. The next step in complexity is where a single sponsor exists, but that sponsor does not have enough resources to develop the system; the development effort must then include the search for and capture of resources. Finally, we have the case where a system has many sponsors who neither individually nor jointly have the resources to actually develop the system.

An additional complexity sometimes encountered is the diffusion of the sponsor across planning versus development efforts. For example, an organization might sponsor the development of an “architecture” for some complex assemblage of systems owned and operated by others. The sponsor of the architecture effort has only limited influence over the other component systems, but that influence might be decisive in particular ways (as is the influence of a zoning board). A development team working for the architecture project sponsor may face a quandary of conflicting objectives between the architecture sponsor (who is immediately paying) and the sponsors of the component systems (whose power may be greater, but who are not paying for the architecture effort).

Users

The “Users” are those who will eventually use the system and (most importantly) derive value from it. The simplest case is where the sponsor and users are the same. In this case the sponsor’s judgments as to value and tradeoffs can be considered decisive. A more complex case is where the sponsors and users are distinct, but they are considered to be aligned, as would be the case where they belong to the same organization, or where the sponsors are drawn directly from the user community. The situation grows more complex as the distance between sponsors and users grows. The most complex case is where the users are unknown at the time of system development.

Technology

The technology level captures the maturity of technology required to realize the system. In the simplest case all required technology is available as mature, production components. In the most complex case some essential components are available only as laboratory observations, and the complete maturation process must be undertaken as part of the system development (as an example, consider the Manhattan Project). Intermediate levels would be where all components are available, but some only as special production items (medium technology level); and where some components are available only in production prototype form (high technology level).

Feasibility

“Feasibility” refers to the difficulty in meeting the principal expectations of the sponsor and users in all attributes of system performance (including cost, schedule, and other programmatic factors). The “easy” case means there are many system configurations within the boundaries of expectation. The “barely” case means that only very carefully optimized system configurations meet basic expectations. The “No” case means that it is impossible to meet the basic expectations of sponsors and users. In this case it will be possible to build a system only if the sponsor and users are willing to substantially adjust their expectations as to the nature of the system.

Control

“Control” refers to how the system, once deployed, is managed or run in operations. In centralized control there is clear, specific, end-to-end responsibility for system operation. In distributed control the participants in operational control are all known, but no single individual or organization is invested with end-to-end responsibility (and authority) for system operation. “Virtual” means the system is operated by entities not known in advance, nor necessarily known to each other during system operation.

Situation-Objectives

Situation-Objectives refers to how success of the system is judged, or how the objectives or requirements are made known to the development team. In the “Tame” case all objectives are explicitly given to the development team when development planning begins; the objectives are expected to be stable, and are independent of the nature of the solution developed. In the “Discoverable” case the objectives are not immediately made available, but can be discovered by the actions of the development team. If the development team acts effectively, the discovered objectives are expected to be stable and independent of the nature of the solution. In an “Ill-structured” situation the objectives are expected to be influenced by the nature of the solution. The sponsor and/or users do not effectively understand how various solution options will affect their ability to reach their objectives, and their understanding of their own objectives is likely to change as they examine possible solutions. However, in the ill-structured case, examination of possible solutions (assuming it is done well) should resolve the sponsor’s or users’ uncertainty about objectives. In the “Wicked” case no amount of design examination will converge the objectives; we expect that sponsor and user evaluations will change substantially after they actually have the system-of-interest available to them, and that their objectives are likely to drift over time as further experience with the system-of-interest is gained.

Quality

The “Quality” attribute refers to the situation in which we evaluate how well the system-of-interest performs, and how well the development and production processes are realizing the objectives. In the “Measurable” case the team can directly measure the quality attributes of interest; quality then falls within the scope of statistical process control. In the “Semi-measurable” case quality attributes are well quantified, but cannot be directly measured. A representative situation is where failure probabilities are required to be much lower than the inverse of the system population (e.g. rocket launch failure probabilities less than 1 in 10,000 on production runs of 1 to 100). In this situation measurements can be made, but they are only indirectly representative of the desired quality attributes. The most complex quality situation is where the key performance measures occur in singular, non-repeatable situations, and those situations are subject to unstable change after system deployment. Systems intended to operate during general nuclear war are classic examples of this category: prior direct testing is impossible, indirect tests are known to leave out important system-wide effects, and the operational situation is continually varied by intelligent enemy adaptation.
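A small worked example, using the assumed numbers from the launch illustration above, shows why such a requirement cannot be demonstrated by direct measurement: even a perfect record over the entire production run bounds the failure probability orders of magnitude above the requirement.

```python
# Illustrative calculation (assumed numbers from the launch example above):
# with n failure-free flights, the one-sided 95% upper confidence bound on the
# per-flight failure probability p comes from (1 - p)**n = 0.05.

def upper_bound_zero_failures(n_flights: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on failure probability after n failure-free flights."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_flights)

requirement = 1e-4  # required failure probability: 1 in 10,000
for n in (10, 100):
    bound = upper_bound_zero_failures(n)
    print(f"{n:4d} failure-free flights -> p <= {bound:.3f} "
          f"(about {bound / requirement:.0f}x the requirement)")
```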

Program Scope

“Program Scope” refers to the size of the total effort. Money here is a surrogate for the level of human effort that must be organized and coordinated, varying from 5 to 10 man-years at the low end to 10,000 man-years or more at the high end. At the low end a single human can directly supervise (or at least observe) all of the work involved in developing the system. At the high end the individuals with end-to-end responsibility (assuming any such individuals exist) are at least four hierarchical levels removed from direct supervision of the actual work.
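The money-to-effort conversion can be made explicit with an assumed loaded labor rate; the rate used below is an illustrative assumption chosen so that the endpoints roughly reproduce the ranges quoted above, not a figure from the paper.

```python
# Rough arithmetic behind money as a surrogate for organized effort.
LOADED_RATE_PER_PERSON_YEAR = 100_000  # assumed fully loaded dollars per person-year

for program_cost in (1e6, 50e6, 500e6, 2e9):
    person_years = program_cost / LOADED_RATE_PER_PERSON_YEAR
    print(f"${program_cost:>13,.0f} -> roughly {person_years:,.0f} person-years")
```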

Organizational Maturity

“Organizational Maturity” refers to the maturity of the development organization with respect to developing systems similar to the system-of-interest. A “High” maturity organization has developed many similar systems, has qualified staff in excess of the development requirements, and has pre-existing processes known to be effective. A step up in complexity is where the developing organization is not very mature, but other organizations are; in this case the developing organization must (or should) learn from outside best practices while conducting the development. The most complex case is where no organization possesses the requisite expertise and processes: the developing organization must learn, and has no model it can learn from.

Technical Scope

“Technical Scope” refers to the nature of the system elements or components within the scope of the overall development effort. At the low end the scope is a single technical system that will be deployed into a pre-existing support infrastructure (manufacturing, delivery, maintenance, operation). Complexity grows as the scope expands to include the supporting systems (which may be much more complex than the system-of-interest), as the system-of-interest divides into many configurations or a product line, and eventually as the scope becomes an assemblage of products and enterprises that spans organizations.

Operational Adaptation

No system is used in operation exactly as it was intended. Users, and the environment, adapt once a system has been deployed. In general, the greater the adaptation in operation, the more complex the development problem becomes. In the simplest case no adaptation takes place. The development problem is more complex when the users adapt their own operations in response to the capabilities of the new system-of-interest. Even more complex is where competitors and/or other opposition adapt their operations (and choices of systems) in response to the system-of-interest (or in anticipation of it). Finally, the scope of adaptation may expand so that the users, the competition, and the environment itself are affected by, and adapt to, the system-of-interest, and the value of the system-of-interest is strongly affected by that adaptation.

Consequences

With the map in Table 1, we can suggest footprints of attribute values that correspond to system categories. The footprint in Table 2 can be referred to as “textbook engineering,” the situation normally encountered in engineering school. Many engineers have had little experience with other footprints.

Table 2: Textbook Engineering
(Attributes and value ranges as in Table 1, with the footprint characteristic of textbook engineering marked across them.)

Table 3 can be referred to as “classic systems engineering,” in that it defines what is commonly thought of as the center of practice in systems engineering as conventionally taught.

Table 3: Classic Systems Engineering.
(Attributes and value ranges as in Table 1, with the footprint characteristic of classic systems engineering marked across them.)

Table 4 is an interpretation of “complex systems,” in the sense of the author’s original (and now commonly accepted) definition.

Table 4: An Interpretation of “Complex Systems”.
(Attributes and value ranges as in Table 1, with the footprint characteristic of “complex systems” marked across them.)

The complex systems community would most likely agree that system developments with the footprint defined by the generally accepted definition form a valid system category. They would most likely agree as well that a wide variety of important and useful best practices can be identified for system developments in that footprint. However, it is far from clear that the complex systems footprint (Table 4) is the only relatively complex footprint, or even the most important one. For example, are not the attributes of ill-structured or wicked situations, high to super-high technology, semi-measurable or worse quality situations, and low organizational maturity dominant factors in many major system developments? More prosaically, but of probably equal importance, what about the problems of dealing with projects simply of very large size? Many such projects fail, often spectacularly, and their failures often have large impact because cost over-runs on very large programs tend to cascade disproportionately into the project portfolios of the organizations attempting to run them.
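To make the footprint idea operational, the following minimal sketch (again in Python) encodes a category as a set of acceptable values per attribute and checks a described development effort against it. The particular values in the example category are placeholders for illustration only, not the footprints of Tables 2 through 4.

```python
# Minimal sketch (illustrative only): a system category as a "footprint" --
# a set of acceptable values per attribute -- and a check of whether a given
# development effort falls inside it. The example category below uses
# placeholder value selections; it is not one of the paper's footprints.

ILLUSTRATIVE_CATEGORY = {
    "Sponsors": {"One, w/ $", "Several, w/ $"},
    "Control": {"Centralized"},
    "Situation-Objectives": {"Tame", "Discoverable"},
    "Quality": {"Measurable"},
}

def matches(category: dict, effort: dict) -> bool:
    """True if, for every attribute the category constrains, the effort's value
    lies within the category's acceptable set."""
    return all(effort.get(attr) in allowed for attr, allowed in category.items())

effort = {
    "Sponsors": "Several, w/ $",
    "Control": "Distributed",
    "Situation-Objectives": "Tame",
    "Quality": "Measurable",
}
print(matches(ILLUSTRATIVE_CATEGORY, effort))  # False: control is distributed
```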

Conclusions

Systems, and system developments, are not all the same. Different types of developments require different sets of practices. The distributed collaborations commonly described as “systems-of-systems” or “complex systems” certainly define one footprint with its own particular practices. However, equally clearly, there are many other footprints, distinct from distributed collaborations, that are also important and that also delineate distinct good practices, different from those of the distributed-collaboration footprint. This paper offers a variety of attributes, with ranges, that are potentially equally valid development discriminators.

Author’s Biography

Dr. Mark W. Maier is a Distinguished Engineer at The Aerospace Corporation, a California non-profit corporation that operates a Federally Funded Research and Development Center with oversight responsibility for the National Security Space Program. At Aerospace he founded the systems architecting training program and applies architecting methods to government and commercial clients, particularly in portfolios-of-systems and research and development problems. He received the BS and MS degrees from the California Institute of Technology and the Engineer and PhD degrees in Electrical Engineering from the University of Southern California. While at USC, he held a Hughes Aircraft Company Doctoral Fellowship, and was also employed at Hughes as a section head. Prior to coming to The Aerospace Corporation he was an Associate Professor of Electrical and Computer Engineering at the University of Alabama in Huntsville. Dr. Maier is co-author, with Dr. Eberhardt Rechtin, of The Art of Systems Architecting, Second Edition, CRC Press, the most widely used textbook on systems architecting, as well as of more than 50 papers on systems engineering, architecting, and sensor analysis.