THE COMPLEX EVOLUTION OF DUNCAN K. FOLEY AS A COMPLEXITY ECONOMIST

J. Barkley Rosser, Jr.

James Madison University

August, 2011

Introduction

Duncan K. Foley has always dealt with the larger and deeper questions of economics since the day he walked into Herbert Scarf’s class on mathematical economics at Yale University in the 1960s, with Scarf becoming his major professor. Scarf (1960, 1973) was the leading figure in the study of how to compute general equilibria and the problems of uniqueness and stability of same, problems that Foley (2010a) has pursued to the present day in the form of the related issues of convergence to general equilibria from non-equilibrium states. The pursuit of these issues would lead him to the problem of microfoundations of macroeconomics, which he saw as involving money and studied with the late Miguel Sidrauski (Foley and Sidrauski, 1971). Considering the problem of money would lead him to pursue the study of Marxian economics as possibly resolving this problem (Foley, 1982), a pursuit that would lead to his experiencing professional problems in the 1970s. But he has continued to pursue the broader issue of the role of money in the economy, even while feeling some frustration along the way at trying to fill this “one of the big lacunae of economic theory.”[1]

While these topics as discussed so far are not obviously parts of what is called complexity economics, the pursuit of these matters would indeed lead him to pursue various forms of complexity economics. The initial link would be his reviving the model of Goodwin (1951) in a paper on the role of liquidity in business cycles, a paper concerned with the endogenous appearance of limit cycles, although these are at most on the edge of complexity models (Foley, 1986). This laid the groundwork for him to pursue ideas and approaches he had been previously interested in regarding statistical mechanics and also the role of the computer in economic decisionmaking and a more algorithmic and engineering approach to economics.[2] Meeting the late Peter Albin in the late 1980s would lead to collaboration with him (Albin and Foley, 1992; Albin with Foley, 1998) in which he pursued a more computational approach to complexity, albeit with some discussions of dynamic issues as well. During the same period he also took up the problem of applying statistical mechanics to general equilibrium theory (Foley, 1994), following on the work of Föllmer (1974), with this leading to a more specifically econophysics approach that linked more fully to considerations of the law of entropy (Foley and Smith, 2008) as he became associated with the Santa Fe Institute and moved more clearly into the complexity approach to modeling economics.

The rest of this essay honoring Duncan will focus on how his work in this area of complexity economics has related to ongoing controversies regarding the nature of complexity, and particularly which approach is most useful for economics. In particular, there has been an ongoing debate between those who focus more on computational complexity (Velupillai, 2009), in contrast with those who concentrate on dynamic complexity (Rosser, 2009). It can be broadly said that while Foley has tended to pursue and advocate more the computational complexity approach based on his work with Albin, he has also studied and articulated views about the dynamic approach and has suggested possible ways that they can be reconciled.

A Brief Overview of Competing Approaches to Complexity

What constitutes complexity is a topic that has been beaten more or less to death in the existing literature, so we shall keep our discussion short here. Over a decade ago, Seth Lloyd of MIT famously gathered 45 definitions of complexity (Horgan, 1997, Chap. 11), and even his list was not comprehensive. This list can be viewed as comprising meta-complexity, the full set of definitions of complexity. Many of these 45 can be aggregated into sub-categories, and the sub-category that arguably contains more of these definitions than any other is computational complexity, or variants thereof. Many of these definitions are variations on the theme of the minimum length of a computer program required to solve a problem, but there are indeed many variations on this. Furthermore, a hardline group argues that the only truly computationally complex systems are those that are undecidable due to halting problems in their programs (Blum, Cucker, Shub, and Smale, 1998). In any case, an argument made by those advocating the superiority of the computational approach to complexity is that it may be more precisely measurable, assuming one can agree on what is the appropriate measure, with well-defined hierarchies involved as well, although the hierarchy favored by Foley is not necessarily the same as that emphasized by some others.

Among economists almost certainly the main rival is dynamic complexity, defined by Rosser (1999), following Day (1994), as being systems that do not endogenously move to a point, a limit cycle, or a smooth implosion or smooth explosion.[3] Such systems inevitably involve either nonlinear dynamics or coupled linear systems that can be reduced to nonlinear systems (Goodwin, 1947). There are sub-categories of this form of complexity, with the “4 C’s” described by Horgan (op. cit.) being the main ones: cybernetics, catastrophe theory, chaos theory, and heterogeneous agent complexity. The latter has increasingly become what many mean by the term “complexity” in economics, with Arthur, Durlauf, and Lane (1997) laying out crucial characteristics of this approach, favored at the Santa Fe Institute, and often involving computer simulations. Given Foley’s interest in models using computer simulations of the sorts used at the Santa Fe Institute, such as cellular automata, it is not surprising that he is interested in these forms of complexity, although he often emphasizes the computability side of their use more than the dynamics side. As laid out by Arthur, Durlauf, and Lane (1997), this approach is very much an antithesis of a general equilibrium perspective, with agents interacting locally with each other without ever necessarily being in any overall or general equilibrium and not ever necessarily arriving at one either. Many econophysics models arguably follow this pattern, although it can be debated whether or not Foley’s own forays into econophysics fully fit this model.

Foley on Computational Complexity

Foley’s most important direct work on questions related to computational complexity arose from his collaborations with the late Peter Albin. Most significantly, when Albin fell ill while working on a book covering his approach to this, Foley stepped in and edited a volume consisting mostly of previously published papers by Albin (Albin with Foley, 1998). As part of that, he wrote a substantial introductory chapter (pp. 3-72) in which he laid out his own perspective on such matters as nonlinear dynamical systems, the nature of complexity, various approaches to computational complexity, and other related issues. As the introductory chapter to this book, mostly by Albin, it followed Albin’s views very closely, although it would seem that at least at that time Foley’s views were largely in sync with those of Albin. This becomes important in that Albin was arguably the first economist to study computational complexity as it relates to economic systems (Albin, 1982, reprinted as Chap. 2 of Albin with Foley, 1998).

In particular, Albin emphasized computability problems associated with the halting problem, and the limits (or “barriers and bounds”) this imposed on full rationality by economic agents. Drawing on the literature deriving from Gödel’s Incompleteness Theorem as connected to computer science by Alan Turing (1936-37), he emphasized especially the role of self-referencing in bringing about these failures to halt, or infinite do-loops in programs. Such self-referencing can lead to such problems as the Cretan Liar paradox, “Is a Cretan who says ‘All Cretans are liars’ telling the truth?” Such questions can lead to an endless going back and forth between saying “yes” and “no,” thus becoming undecidable as the program fails to halt at a solution. Albin argued that economies are full of such self-referencing that implies such non-decidabilities from a computability perspective, a swarm of unstoppable infinite regresses, and Foley agreed.[4]
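The logic behind such failures to halt can be illustrated with the standard diagonalization sketch from the computability literature (an illustrative example only, not a construction drawn from Albin or Foley): if a universal halting checker existed, a self-referencing program could call that checker on itself and do the opposite of whatever it predicts, yielding a contradiction.

```python
# Illustrative sketch of the halting-problem diagonalization (textbook argument,
# not code from Albin or Foley). Suppose halts(f, x) could always decide
# whether the computation f(x) eventually stops.

def halts(f, x):
    """Hypothetical oracle: True if f(x) halts, False otherwise.
    No total, always-correct version of this function can exist."""
    raise NotImplementedError("undecidable in general")

def contrary(f):
    # Self-reference: do the opposite of whatever the oracle predicts.
    if halts(f, f):
        while True:          # loop forever if the oracle says "halts"
            pass
    return "halted"          # halt if the oracle says "loops forever"

# Applying contrary to itself gives the contradiction: halts(contrary, contrary)
# can be neither True nor False, so no such oracle can be programmed.
```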

Albin was strongly influenced by the ideas of Stephen Wolfram (1986), who in turn has been strongly influenced by those of Noam Chomsky (1959). In his discussion of complexity in this introductory chapter, Foley follows through on these lines, discussing in succession computational complexity, linguistic complexity, and machine complexity. His initial discussion of computational complexity makes it clear that he is going to embed it within the four-level hierarchy developed by Chomsky and show how undecidability is a form of computational complexity that undermines full rationality. This is followed by a discussion of how computational complexity and dynamic complexity relate, which in turn is followed by a more detailed discussion of cellular automata and the forms of complexity in them.

Central to this entire discussion is the four-level hierarchy of languages, initially due to Chomsky (1959). He defines a formal language as consisting of finite sets of an alphabet of T terminal symbols (words), intermediate variables V, and a distinguished variable S that serves to initiate productions in the language. These take the form P → Q, where P is a string composed of one or more variables with zero or more terminals and Q is a string composed of any combination of variables and terminals.

The lowest of these is the category of regular languages, generated by grammars that take the form P → T or P → TQ, where P and Q are variables and T is a string of terminal symbols that make up the regular language. The next level up is that of the context-free languages, which include the regular languages. The grammars generating these take the form P → Q, where P is a string of variables and Q a string composed of terminal symbols and variables, without contextual restrictions. Above this is the level of context-sensitive languages, generated by grammars whose productions P → Q have the length of Q at least as long as the length of P, this group likewise containing the one lower down. For these, if Q is a non-empty string and P1PP2 → P1QP2, then Q and P can be substituted for each other in the context of P1 and P2, but not necessarily in other contexts. The highest level of these grammars generates the unrestricted languages, which only need to follow the most general rules described above and include the most complex formal languages. Each of the higher levels contains within it the level below it in an embedded or nested form.
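As a concrete illustration of these production rules (a toy example of my own, not one from Albin with Foley, 1998), the sketch below generates strings from a regular grammar of the form P → T or P → TQ and from a context-free grammar of the form P → Q; the context-free grammar produces the language a^n b^n, which no regular grammar can generate.

```python
import random

# Toy grammars illustrating the production forms discussed above
# (hypothetical examples, not drawn from Albin with Foley, 1998).

# Regular grammar (productions of the form P -> T or P -> TQ):
# S -> aS | b, generating strings of the form a...ab.
REGULAR = {"S": [("a", "S"), ("b", None)]}

# Context-free grammar (productions P -> Q with no contextual restriction):
# S -> aSb | (empty), generating a^n b^n.
CONTEXT_FREE = {"S": [["a", "S", "b"], []]}

def derive_regular(grammar, start="S"):
    out, sym = [], start
    while sym is not None:
        terminal, sym = random.choice(grammar[sym])   # apply one production
        out.append(terminal)
    return "".join(out)

def derive_context_free(grammar, symbol="S"):
    if symbol not in grammar:                         # terminal symbol
        return symbol
    rhs = random.choice(grammar[symbol])              # pick a production
    return "".join(derive_context_free(grammar, s) for s in rhs)

print(derive_regular(REGULAR))            # e.g. "aaab"
print(derive_context_free(CONTEXT_FREE))  # e.g. "aabb"
```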

This hierarchy is then seen to translate directly to a four-level hierarchy of machine complexity. At the lowest level is the finite automaton that has a finite set of states and reads simple inputs off a tape to generate simple outputs, with a pocket calculator an example. This corresponds to the regular languages. By adding an unbounded pushdown stack on which the automaton can store and retrieve symbols on a first-in and last-out basis, one can recognize context-free languages. Having two bounded pushdowns moves one to the context-sensitive level. Having two unbounded pushdowns generates the level of unrestricted languages and becomes equivalent to an abstract Turing machine. Foley interprets all of these as representing informational and computational problems for economic decisionmakers.[5]
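A minimal sketch of the first step up this machine hierarchy (my own illustration, with hypothetical function names): a finite automaton with no memory beyond its current state can recognize the regular language a*b*, while recognizing a^n b^n requires the storage of a pushdown stack.

```python
def accepts_a_star_b_star(s):
    """Finite automaton: two states, no memory beyond the current state."""
    state = "A"
    for ch in s:
        if state == "A" and ch == "a":
            continue
        elif ch == "b":
            state = "B"
        else:                  # an "a" after a "b", or an unknown symbol
            return False
    return True

def accepts_an_bn(s):
    """One unbounded pushdown stack suffices for the context-free a^n b^n."""
    stack = []
    seen_b = False
    for ch in s:
        if ch == "a" and not seen_b:
            stack.append(ch)   # push each leading "a"
        elif ch == "b":
            seen_b = True
            if not stack:
                return False
            stack.pop()        # match each "b" against a stored "a"
        else:
            return False
    return not stack           # accept only if every "a" was matched

print(accepts_a_star_b_star("aaabbb"), accepts_an_bn("aaabbb"))  # True True
print(accepts_a_star_b_star("aabbb"), accepts_an_bn("aabbb"))    # True False
```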

Foley then pulls off the neat hat trick of applying all this to the analysis of dynamic complexity, which he sees as also having such a hierarchy, although in this he follows Wolfram. The dynamic equivalent of regular languages and finite automata are simple linear dynamical systems that converge on point attractors. The next stage up, equivalent to context-free languages and one-pushdown automata, consists of nonlinear systems that endogenously generate periodic cycles as their attractors.

Context-sensitive languages and the two-bounded-pushdown automata are equivalent to nonlinear dynamical systems that can go to a chaotic attractor. The break between this level and the previous one is equivalent to the line between the non-dynamically complex and the dynamically complex systems according to the Day-Rosser definition. Foley notes that at this level there can be long-period relations such as a parenthesis that opens and is closed much later. He identifies this with such economic phenomena as a loan being made that must then be paid back at some later time. He also notes that for this level, computing costs may rise so sharply that it may become impractical to actually solve problems, even if they are decidable in principle. This is somewhat equivalent to the break between P and NP problems more generally in the computational complexity literature, although it remains unproven that these really are distinct classes.
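The lower rungs of this dynamic hierarchy can be illustrated with the familiar logistic map (my own example, not one drawn from Albin or Foley): the same nonlinear difference equation settles on a point attractor, a periodic cycle, or a chaotic orbit depending on its parameter.

```python
def logistic_orbit(r, x0=0.4, transient=500, keep=8):
    """Iterate x_{t+1} = r * x_t * (1 - x_t), discard a transient, return the tail."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

# Point attractor (analogue of the lowest, "regular" level of the hierarchy).
print(logistic_orbit(2.8))   # settles on a single value
# Periodic cycle (analogue of the context-free level).
print(logistic_orbit(3.2))   # alternates between two values
# Chaotic attractor (analogue of the context-sensitive level).
print(logistic_orbit(3.9))   # irregular, non-repeating values
```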

Finally, the equivalent of the unrestricted languages and the two-unbounded-pushdown automata, which are themselves equivalent to Turing machines, may be undecidable. Monotonicities holding at the other levels break down at this level.

Foley then discusses Wolfram’s more specific adaptation of these categories to cellular automata. Type 1 evolve to uniform states. Type 2 evolve to periodic patterns from arbitrary initial conditions. Type 3 evolve to irregular patterns. And Type 4 generate a much broader range of possibilities, including non-monotonic ones in which a simple outcome may be the result of an enormously complicated set of generated structures. Foley links this to “edge of chaos” ideas of self-organizing systems and sees this level as the one where new structures can emerge. In his discussion of this, Albin ties it to the original von Neumann (1966) formulation for cellular automata that can self-reproduce, with von Neumann describing the jump from one level to another as being marked by complexity thresholds.
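A minimal sketch of the kind of one-dimensional, two-state cellular automata Wolfram classified (my own illustration, not the specific models used by Albin and Foley): rule 30 produces the irregular patterns characteristic of Type 3, while rule 110 is the standard example of a Type 4 rule.

```python
def step(cells, rule):
    """One update of a one-dimensional, two-state cellular automaton.
    The 8-bit 'rule' number encodes the new state for each 3-cell neighborhood."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=63, steps=20):
    cells = [0] * width
    cells[width // 2] = 1                 # single seed cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

run(30)    # rule 30: irregular, Type 3-like pattern
run(110)   # rule 110: the classic Type 4 rule, capable of universal computation
```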

This mapping is a stunning way of integrating the supposedly competing approaches to complexity, and this observer finds it impressive. However, this observer also questions whether it fully captures all the dynamic phenomena one observes in dynamically complex systems. In particular there is the matter of emergence, argued by many to be at the very heart of dynamic complexity in systems ranging from evolution to urban development (Rosser, 2011). Two comebacks to this suggest themselves. One is that emergence may be captured by that highest level of Chomsky-Wolfram complexity. The other is to say that the supposed inability of computational systems to deal with it is a sign that such emergence is really not a proper complexity concept, with computational systems that provide genuine novelty (Moore, 1990) being the more useful concept.[6]

The Dynamic Complexity of Foley’s Econophysics

In the wake of his work with Peter Albin, Duncan Foley was also moving in another direction into the world of complexity economics, that of reconsidering the role of statistical mechanics as a foundation for general equilibrium theory, or more accurately, an alternative version of that theory. Mirowski (1989) has documented that the seminal developer of statistical mechanics theory, J. Willard Gibbs (1902),[7] was a profound influence on efforts in the US to mathematize neoclassical economic theory, both through Irving Fisher and later through Paul Samuelson (1947), who always looked to Gibbs as a source and inspiration thanks to the direct influence of Gibbs’s student, Edwin Bidwell Wilson, at Harvard. However, after this time Gibbs’s ideas ceased to influence many economists directly. In any case, Samuelson was more influenced by other parts of Gibbs’s work than that on statistical mechanics.

The idea that an equilibrium might not be just a vector of prices (an idea certainly consistent with Gibbsian vector analysis) but rather a probability distribution of prices was what Foley would pursue.[8] His important paper of 1994 in the Journal of Economic Theory would do just that and would fit in with the emerging confluence of economics and physics ideas then taking shape at the Santa Fe Institute, with which he would become affiliated, and which would coalesce into econophysics shortly thereafter, with H. Eugene Stanley coining the term in 1995, a year after Foley’s article appeared (Rosser, 2008).
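A minimal sketch of the statistical-equilibrium idea (a stylized illustration of my own, not Foley’s 1994 model itself): the equilibrium is not a single market-clearing price but the probability distribution over possible transaction prices that maximizes entropy subject to a feasibility constraint, here a fixed mean price, which yields a Gibbs-type exponential distribution.

```python
import math

def gibbs_distribution(prices, mean_price):
    """Maximum-entropy distribution over discrete prices subject to a mean
    constraint: probabilities proportional to exp(-beta * price), with beta
    chosen by bisection so the expected price matches the constraint."""
    def dist(beta):
        weights = [math.exp(-beta * x) for x in prices]
        total = sum(weights)
        return [w / total for w in weights]

    def mean(beta):
        return sum(p * x for p, x in zip(dist(beta), prices))

    lo, hi = -50.0, 50.0                 # bracket for the multiplier beta
    for _ in range(200):                 # simple bisection, no SciPy required
        mid = (lo + hi) / 2
        if mean(mid) > mean_price:       # mean too high -> need larger beta
            lo = mid
        else:
            hi = mid
    return dist((lo + hi) / 2)

prices = [1, 2, 3, 4, 5]
probs = gibbs_distribution(prices, mean_price=2.0)
print([round(p, 3) for p in probs])      # exponentially declining weights
print("entropy:", round(-sum(p * math.log(p) for p in probs), 3))
```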

While Foley has avoided these debates, econophysics has become quite controversial. Several economists generally sympathetic to it have criticized it fairly recently (Gallegati, Keen, Lux, and Ormerod, 2006), with some physicists replying sharply (Ball, 2006; McCauley, 2006). The critics argue on four points: 1) that many econophysicists make exaggerated claims of originality due to not knowing relevant economics literature, 2) that they often fail to use a sufficiently rigorous statistical methodology or techniques, 3) that they make exaggerated claims regarding the universality of some of their empirical findings, and 4) that they often lack any theoretical foundation or explanation for what they study. Interestingly, Foley’s work on this topic may help out on this last point. It is a bit ironic that he is an economist using a physics-based approach that may provide a theoretical foundation for certain portions of current econophysics.

Central to Foley’s approach was the idea that an equilibrium state should be associated with a maximization of entropy, a deeply Gibbsian idea, as Gibbs was also a founder of chemical thermodynamics. This was an aspect that Samuelson did not approve of so much. He made this clear at a symposium honoring Gibbs (Samuelson, 1990, p. 263):