
1st PRINCIPLES of Efficacious Adaptation

Bill McKelvey

UCLA Anderson School of Management

Originally 2004. Updated 2007

Explanations of why some structures achieve adaptive success while others fail range across fields from biology to social science. If the same principle applies all the way from microbes to organizations, it is assuredly scale-free. In this essay I identify seven fundamental explanatory 1st Principles[1] that meet this standard. These define the Causal Drivers of emergence. They explain why, in the econosphere, there are millions more emergent networks than multinational corporations.

I orient this theory of emergence around these 1st principles because, in most cases, they have guided leading-edge research in biology and organization studies for more than 50 years. Each of these principles has been framed as a generative force driving the creation of order in organisms and organizations: Adaptive Tension, Variation Rates, Requisite Variety, Near Decomposability, Causal Complexity, Mutual Causality, and Causal Rhythms. Like other management scholars who have shown how a specific set of generative forces may be at the origin of certain social processes (Abbott, 1992; Pentland, 1995; Van de Ven & Poole, 1995), I suggest that these seven generative forces are the foundation for emergence processes within and across organizations. Agents pursue the creation of viable emergent structures, capable of efficacious adaptation in changing environments marked by scarce resources and aggressive competitors, within the confluence of these forces.

1. Prigogine’s Dissipative Structures Theory

One of the origins of order creation is a disequilibrium between two adjacent systems or “fields,” where one field enjoys a high concentration of resources (e.g. information, knowledge, capital, market potential) in comparison with an adjacent field (Prigogine, 1955; Prigogine & Stengers, 1984, 1997). The disequilibrium sets up an “adaptive tension,” defined as a contextually imposed energy differential (McKelvey, 2001). Since tension seeks resolution, this energy differential will produce a creative response that increases the order within the system as a whole (Arikan, 2007). This force is the cornerstone of the European School (Nicolis & Prigogine, 1989); it is their prime driver of order-creation behavior. According to dissipative structures theory, every instance of adaptive tension is in some measure imposed from the environment, or from managerial perceptions of the environment.
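A minimal sketch makes the driver concrete (the flow rule and the numerical threshold here are invented purely for illustration; the underlying idea that new structure appears only past a critical differential is Prigogine's): two adjacent fields hold unequal resources, and only when the differential exceeds a critical value does the tension trigger an order-creating response rather than simple dissipation.

    def creative_response(rich_field, poor_field, critical_tension=5.0):
        # Adaptive tension = the contextually imposed differential
        # between the resource-rich and resource-poor fields.
        tension = rich_field - poor_field
        if tension <= critical_tension:
            return tension, "below critical value: simple dissipation"
        return tension, "above critical value: order-creating response"

    print(creative_response(12.0, 3.0))   # tension 9.0 -> structure emerges
    print(creative_response(6.0, 3.0))    # tension 3.0 -> no new structure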

Entrepreneurship, the management discipline devoted to order creation, draws on a variety of theories based on the idea of adaptive tension. A technical or process innovation (Schumpeter, 1942; Tushman & Anderson, 1986) can set up a disequilibrium between the entrepreneur and the current market; such innovations drive the emergence of new firms and new industries (Binks & Vale, 1990; Foster, 2000). Such disequilibria are at the heart of opportunity recognition (Shane, 2000; Shane & Venkataraman, 2000), through which an entrepreneur identifies an untapped market that can be capitalized on through the value-creating activities of a new venture. From a broader economic perspective, Hayek showed that entrepreneurial activity can be attributed to geographical differences in information and knowledge (Hayek, 1967); these adaptive tensions give rise to the self-organization of the market.

Anderson suggests adaptive tension is generated by organizational leaders:

Those with influence and/or authority turn the heat up…on an organization by recruiting new sources of energy (e.g. members, suppliers, partners, and customers), by motivating stakeholders, by shaking up the organization, and by providing new sets of challenges that cannot be mastered by hewing to existing procedures. (1999: 222)

A modern example appears as Jack Welch’s admonition/threat at GE, “Be #1 or 2 in your industry in market share or you will be fixed, sold, or closed” (Tichy & Sherman, 1994: 108; somewhat paraphrased). Collins (2001) argues that adaptive tension forces firms to “face the brutal facts” if they want to go from Good to Great, as he titles his book. Adaptive tension is generated through challenges such as this one, through increased degrees of interaction between agents, through opportunity recognition, through goals that are self-organized or externally imposed, and through institutional selection processes (McKelvey, 2002, 2004, 2008).

2. Ashby’s Law of Requisite Variety

Ashby’s (1956) path-breaking work in cybernetics identified a key Law of system complexity, namely that in order to remain viable, a system needs to generate the same degree of internal variety as the external variety it faces in the environment. Formally, “R’s capacity as a regulator cannot exceed R’s capacity as a channel of communication” (Ashby, 1956: 211). Essentially, external variety—including “disturbances” or uncertainty—can be managed or “destroyed” by matching it with a similar degree of internal variety: “Only variety can destroy variety” (p. 207). McKelvey and Boisot (2008) update Ashby’s Law to the Law of Requisite Complexity.
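A minimal computational rendering of the Law (a toy outcome table, not Ashby's own formalism; the rule outcome = (d - r) mod D is an assumption chosen for illustration): if a disturbance can take D distinct states and the regulator commands only R distinct responses, the best possible policy still leaves roughly D/R distinct outcomes. Only when internal variety matches external variety (R >= D) can the outcome be pinned to a single value.

    import math

    def best_case_outcome_variety(d_states, r_states):
        # The regulator sees disturbance d and picks its best available
        # response r; the outcome is (d - r) mod d_states.
        outcomes = set()
        for d in range(d_states):
            r = d % r_states                  # optimal policy for this table
            outcomes.add((d - r) % d_states)
        return len(outcomes)

    for D, R in [(12, 2), (12, 3), (12, 6), (12, 12)]:
        print(D, R, best_case_outcome_variety(D, R), math.ceil(D / R))
    # residual outcome variety equals ceil(D / R): only variety destroys variety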

The creation of internal variety has been described at multiple levels of management theory. The very study of cognitive schema rests on the premise that, in order to operate effectively in organizational or social settings, our internal constructions of the world (mental models) must be representative of the complexity we experience (Boulding, 1956; Gell-Mann, 2002). When faced with phenomena falling outside our schema we may expand our schema (increase our requisite complexity) to accommodate those otherwise unaccountable experiences (Bateson, 1978), or restrict our perception of reality (mitigate external variety) to rationalize or simply deny the conflicting information (Staw, Sandelands & Dutton, 1981). Boisot and McKelvey (2006) discuss ways of first reducing environmental complexity from possible to probable variety and then expanding internal complexity to cope with that probable external complexity. Gell-Mann (2002: 13) calls this “effective complexity.”

In leadership settings, management scholars suggest that facing the challenges of more complex organizations requires the adoption of a more complex style of thinking, for example by developing a more “complicated understanding” of organizational phenomena (Bartunek, Gordon & Weathersby, 1983). This view is supported by Weick and Roberts’s (1993) recognition that dealing with the extremely complex dynamics of landing planes on an aircraft carrier requires a mutually constructive and complicating process of “heedful interrelating.” This approach is reflected in research showing how complex organizational problems may sometimes be solved by greatly expanding the contextual information surrounding the problem (i.e. increasing requisite complexity), whether through situated learning (Lave, 1991; Orlikowski, 1996) or through communities of practice (Brown & Duguid, 1991), which benefit from the accumulated knowledge of many agents.

3. Fisher’s Mutation-Rate Theorem

Fisher’s (1930) work made a key link between variation and adaptation, a link that is now all but axiomatic in the biological and social sciences. His basic law stated: “The rate of evolution of a character at any time is proportional to its additive genetic variance at that time” (quoted in Depew & Weber, 1995: 251). In other words, adaptation progresses at the rate that usable variation becomes available.
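Fisher's point can be exhibited in a toy selection simulation (all parameters here, including the population size, the truncation-selection rule, and the mutation variance, are invented for illustration): two populations face the same selective target, and the one with more variance to select from closes the gap faster.

    import random
    random.seed(1)

    def mean_trait(mutation_variance, generations=25, n=200, target=10.0):
        # Keep the half of the population closest to the target each
        # generation; each offspring = parent value plus mutation noise.
        sd = mutation_variance ** 0.5
        pop = [random.gauss(0.0, sd) for _ in range(n)]
        for _ in range(generations):
            pop.sort(key=lambda z: abs(z - target))
            parents = pop[: n // 2]
            pop = [random.gauss(p, sd) for p in parents for _ in range(2)]
        return sum(pop) / n

    print(round(mean_trait(0.05), 2))   # little variance: adaptation crawls
    print(round(mean_trait(1.0), 2))    # ample variance: mean nears the target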

Although originally a theory underlying natural selection of biological entities, Fisher’s theorem has essentially become the cornerstone of modern strategy theories, particularly research on innovation and knowledge creation in high-velocity environments, where knowledge creation provides the key to ongoing “variations” within products and product lines (Eisenhardt, 1989; Eisenhardt & Tabrizi, 1995). As product life cycles have shortened and hypercompetition has intensified (D’Aveni, 1994), the speed of knowledge creation and application has become a central attribute of competitive advantage (Leonard-Barton, 1995). Prusak (1996: 6) says:

“The only thing that gives an organization a competitive edge—the only thing that is sustainable—is what it knows, how it uses what it knows, and how fast it can know something new!”

The speed-of-adaptation idea is featured by Fine (1998) in his book, Clockspeed, and by Jennings and Haughton (2000) in their book, It’s not the BIG that eat the SMALL…it’s the FAST that eat the SLOW.

Complexity studies have confirmed that the classic “organic” organizing style is just too slow to keep pace with changes in high-velocity environments (Brown & Eisenhardt, 1997), and does not in and of itself generate the conditions for ongoing variations within firms (Stacey, Griffin, & Shaw, 2000; McKelvey, 2008). McKelvey (1997) follows Fisher in emphasizing intra-firm rates of change; since the change rate is relative to a changing environment, the optimal change rate is curvilinear, i.e. an inverted U (Anderson, 1999; Brown & Eisenhardt, 1997). Movements toward an optimal rate of change relative to a changing environment will support order creation, whereas any move away from optimality at any level will inhibit emergence (Brown & Eisenhardt, 1997; Stacey, Griffin, & Shaw, 2000; Davis & Eisenhardt, 2004). If, by chance, the change rate is not optimal, the odds of the units remaining viable long enough to progress to the next level of emergent structure are diminished.
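The inverted U can be sketched with a toy fitness function (the functional form and the constants are assumptions chosen only to exhibit the curve, not a model from the cited studies): tracking error falls as a firm changes faster, but change itself consumes resources, so net fitness peaks at an intermediate change rate.

    def net_fitness(change_rate, env_rate=0.5, change_cost=0.6):
        # Faster change tracks the moving environment better but
        # costs more; the two pressures trade off.
        tracking_error = env_rate / change_rate
        return -(tracking_error + change_cost * change_rate)

    for c in (0.1, 0.5, 0.9, 1.3, 2.0):
        print(c, round(net_fitness(c), 2))
    # fitness rises, peaks near c = (env_rate / change_cost) ** 0.5, then falls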

4. Simon’s Principle of Near Decomposability

Simon (1962) identified a fourth fundamental driver of order emergence in his study of adaptive mechanisms occurring in biological and in social systems. Near decomposability is an internal design capability through which complex architectures or processes are “decomposed” into semi-autonomous hierarchical subsystems. Simon illustrates why such “nearly” autonomous—that is, “nearly decomposable”—complexity-destruction components, and the differentiation dynamics they set in motion, have adaptive advantage. His canonical example compares two watchmakers making timepieces of 1000 parts each. In the approach used by Tempus, any interruption in the process of assembling all 1000 pieces would cause the watch to fall apart, requiring him to reassemble it from scratch. In contrast, Hora organized the task into subassemblies of about ten elements each; these subassemblies were put together in further sets of ten, and ten of the latter assemblies constituted the whole watch.

Hence, when Hora had to put down a partly assembled watch in order to answer the phone, he lost only a small part of his work, and he assembled his watches in only a fraction of the man-hours it took Tempus. (1962: 470)
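Simon's arithmetic is easy to reproduce (the one-in-a-hundred interruption probability is his illustrative figure; the expected-starts comparison below is a back-of-envelope simplification of his fuller calculation):

    p = 0.01                        # chance an interruption hits while a part is added
    q_tempus = (1 - p) ** 1000      # Tempus must place all 1000 parts uninterrupted
    q_hora = (1 - p) ** 10          # each 10-part assembly survives with this chance

    print(round(1 / q_tempus))      # ~23,164 fresh starts per finished watch
    print(round(111 / q_hora, 1))   # Hora's 111 assemblies (100 + 10 + 1): ~122.7 tries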

The phenomenon of near decomposability is ubiquitous; examples of it can be found at virtually every level of human endeavor. Cognitively, the ability to break a problem into component parts is at the heart of “analysis,” a foundation of reductionism (Nagel, 1961). Similarly, one of the basics of time management is to organize endeavors into sets of component tasks; project managers have developed elaborate mechanisms that support such hierarchical planning (Wideman, 1985).

New firms develop in the same manner; new-venture creation is studied in terms of elemental behaviors or tasks that systemically integrate to drive organizational emergence (Carter et al., 1996). The early development of companies has long been framed in terms of increasing degrees and levels of structure and control (Churchill & Lewis, 1983; Hanks et al., 1994). A similar model permeates early models of organizational design (Thompson, 1967; Blau, 1970), and the need to continuously increase internal levels of differentiation and specialization has been a driving force in the evolution of new organizational forms (Lawrence & Lorsch, 1967; Miles et al., 1999). Competitive strategists’ focus on related multidivisional firms followed: nearly decomposable business units in different industries, related only with respect to one or two elements such as common technologies or supply chains, offer adaptive and competitive advantage (Chandler, 1962, 1977; Williamson, 1975; Rumelt, 1991).

5. Causal Intricacy: Lindblom’s Science of Muddling Through

Lindblom (1959) introduces the processes of parallel interaction, mutual adjustment, and coordination that characterize social units facing complex, uncertain choice and action situations. In these contexts he asserts: “…The method of successive limited comparisons…will be superior to any other decision-making method available for complex problems” (Lindblom, 1959: 84, 88). His “muddling through” process depends on the creation of a “…division of labor [in which] every important interest or value has its watchdog.” Though writing before Simon (1962) and Buchler (1966), Lindblom introduces the foundational notion behind Buchler’s “interactional complexity,” which arises once near decomposability is achieved. Groups of various kinds may be interconnected; each group has an agenda; each agenda provides some causal push on the forward movement and successive alteration of policies and organizational action. Lindblom notes that the means-ends agendas of the various groups are simultaneously pursued. In his work we have an early, if not the first, recognition of multiple causal influences in emergent process.[2]

Building on Lindblom, Cohen, March and Olsen (1972) observe that emergent, differentiated, semi-autonomous subunits generate the need to integrate “organized anarchy.” The latter leads to added hierarchical levels and emergent integration processes among them (Galbraith, 1973). The task here is to solve key problems that can limit order creation and emergence. The principal issue, epitomized in the phrase “organized anarchy,” is one of top-down directing vs. bottom-up emergent organizing—a causal duality. McKenzie and van Winkelen (2004) discuss six organizational dualities in which focusing on one pole builds up tension calling for focus on the opposite pole. If these duality tensions are not resolved, a unit loses its adaptive capability. The secondary issue involves the ongoing task of balancing individual agency vs. the collective good. If the individual unit is too strong (perhaps even defecting from the collective), the collective loses its ability to adapt to, and draw energy from, its environment. On the other hand, if the collective (i.e., the enclosing unit) is too strong, destroying the diversity of the constituent units, the unit fails to generate sufficient variation to remain adaptive.

Accomplishing this task expeditiously becomes more difficult as the adaptive structures become more complex. This added complexity lowers the probability of reaching multilevel structures. As structures become more complex, the number of dualities increases; Pettigrew and Fenton (2000) mention as many as ten. Thomas et al. (2005) discuss causal intricacy in detail. While the other drivers apply equally at all levels, this one clearly becomes more prominent as hierarchical levels increase and subunits multiply. A variety of computational models put Lindblom’s basic insight to experimental test (e.g., see chapters in Warglien & Masuch, 1996).
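In that spirit, a minimal mutual-adjustment sketch (the groups, preferences, and step size are all invented for illustration, not drawn from those models): each watchdog group nudges a shared policy a small step toward its own preferred position, and the policy settles through successive limited comparisons with no central planner dictating the outcome.

    def muddle_through(preferences, step=0.1, rounds=100):
        # Each interest group pulls the shared policy slightly toward
        # its own agenda; no single group or planner sets the result.
        policy = 0.0
        for _ in range(rounds):
            for goal in preferences:
                policy += step * (goal - policy)
        return policy

    # three watchdog groups with divergent agendas
    print(round(muddle_through([2.0, 5.0, 11.0]), 2))   # settles near 6.3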

6. Maruyama’s Mutual Causal Processes (the beginning of positive feedback in general systems theory)

This principle is perhaps the one most implicitly utilized in studies of non-linear dynamics. Maruyama’s formalization of mutual causality reads as follows: “…Mutual causal relationships that amplify an insignificant or accidental initial kick, build up deviation and diverge from the initial condition” (Maruyama, 1963: 164). In parallel, Lorenz (1963) initiated chaos theory and later gave rise to the “butterfly effect” phrase with the title of his 1972 paper: “Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?” Individually, these relationships result in vicious circles or positive feedback loops (Arthur, 1988, 1990; Holland, 1995). Collectively, when multiple entities interact in a dynamic environment, these mutually causal processes are at the origin of emergent order creation in biology (Goerner, 1994) and economics (Ormerod, 1998).
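Maruyama's deviation-amplifying loop is simple to exhibit in code (the two variables and their gains are invented for illustration): each variable feeds the other's growth, so an accidental initial kick of 0.001 builds up rather than damping out.

    def deviation_amplifying(kick=0.001, gain_a=1.1, gain_b=1.1, steps=12):
        # Two mutually causal variables: each round, each grows in
        # proportion to the other, amplifying the initial deviation.
        a, b = kick, 0.0
        trajectory = []
        for _ in range(steps):
            a, b = a + gain_a * b, b + gain_b * a
            trajectory.append(round(a, 4))
        return trajectory

    print(deviation_amplifying())   # the tiny kick diverges from its initial condition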