Forgetting in Self-Organising Systems

Bernard Scott

Learning Environments and Technology Unit

University of the Highlands and Islands Project

Lews Castle College

Stornoway

Isle of Lewis

HS1 2SD

Tel 01851 702331 Fax 01851 770001

Scott, B. (1999). “Forgetting in self-organising systems”, in The Evolution of Complexity, Vol. 8 of Einstein Meets Magritte, VUB, Brussels, 1995, F. Heylighen, J. Bollen and A. Riegler (eds), Kluwer, Dordrecht, pp. 157-167.

Abstract

Personally and societally, we adapt and learn. We also forget. In this paper I give an overview of models of some of these processes in different domains and at different levels of resolution. I distinguish between first and second order interpretations of such models: on the one hand, as observer-distinguished mechanisms; on the other, as the hermeneutics of observers' states of knowing. I go on to consider the implications of this modelling for the survival of individuals, societies and subcultures within societies. I conclude by reflexively considering cybernetics itself, asking how best the truths of cybernetics may be conserved in the context of continuing change and generational renewal.

The paper reflects the outcome of a series of conversations between the author and his colleagues, Tony Hirst and Simon Shurville, in which we set out to share our understandings of learning and evolution: Hirst, as abstract models and simulations; Scott, as processes in educational systems; and Shurville, as processes involved in design and creativity.

Keywords: forgetting, self-organising system, learning, adaptation, cybernetics

Introduction.

"Cybernetics .. will only attain its true stature if it recognises itself as the science that reaches out for that which is hidden." Gotthard Gunther.

"Cybernetics transforms language into an exchange of news." Martin Heidegger.

In discussions of the properties of self-organising systems, it is common practice to stress the extent to which such systems learn, adapt and evolve ([1] provides an excellent introduction to the topic). However, as we consider in this paper, it is also the case that forgetting occurs. There are obvious practical consequences. For example, how do we design educational experiences so that effective learning, with retention, takes place? This is a particularly relevant topic as new technologies (CD-ROM, the internet) are increasingly being used to deliver educational experiences. A learner may browse, may explore and interact, but what guarantees can be put in place to ensure the experience is worthwhile? Will anything useful be retained or will the outcome be like that of reading a lightweight novel? Forgetting is a problem in a range of personal domains. There are continuing complaints that we and our neighbours fail to learn from experience. This theme has been raised most poignantly in 1995 with the 50th anniversary of VE Day. "We shall remember them!" is an exhortation and statement of intent, not, unfortunately, a statement of fact.

In the first part of this paper I look at why forgetting occurs in self-organising systems. To do this coherently requires reviewing the properties of such systems, as cybernetic abstractions, and giving them interpretations for particular domains. In the second part of the paper, I consider second order, hermeneutic interpretations of the concept "forgetting in self-organising systems", with illustrations of how concerns about personal and societal forgetting have been articulated by observers in different communities at different times. Finally, I consider the implications for cybernetics itself.

Perhaps I should be more explicit about my concerns: I am writing as someone committed to the cybernetic enterprise as conceived by Wiener and others in the 1940's: the transdisciplinary study of "circular causal and feedback mechanisms in biological and social systems" [2]. As I note in the body of the paper, I believe that much that was good in the original vision for cybernetics has been lost, distorted or forgotten. I am explicitly seeking to promote a renewal of interest, particularly in cybernetics as a transdiscipline concerned with unity and coherence. I believe there has been major progress in the biological and physical sciences and some in the social sciences, and I would wish to see these achievements consolidated. I am also concerned to see progress made towards mutual understanding between the sciences and the arts and humanities. I have reflected this in this paper by placing emphasis on the hermeneutics of observer-to-observer communication.

Self-Organisation.

In a classic paper, Heinz von Foerster [3] characterises a self-organising system as one in which, as measured from the observer's frame of reference, the rate of increase of redundancy is always positive. An immediate implication of this is that, as time passes, the observer is obliged to update and enlarge his description of the system and its possible states and behaviours. In so doing, he may also eventually be obliged to modify his frame of reference, adding new categories or dimensions. In other words, the system has grown or evolved. In so doing, it has become more adapted to or informed of its environment [4, 5]. These changes may also be characterised as a kind of learning, although some cyberneticians prefer to apply that predicate to language-oriented systems [6]. We shall return to this question later. For the moment we are concerned with an observer's representation of a taciturn, first order system [7]. Adaptive changes should also be distinguished from habituation, which Ashby defines as shifts of equilibrium in response to disturbances that do not require an observer to change his description of the system.
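
For the reader who prefers a formal statement, von Foerster's criterion may be paraphrased as follows (using Shannon's measures; the notation here is mine, not a quotation from [3]). Let H be the entropy of the system as the observer currently describes it and H_max the maximum entropy attainable within his frame of reference. Redundancy is then

    R = 1 - H/H_max

and the system is self-organising whenever dR/dt > 0. The condition can be met by the system becoming more ordered (H decreasing while H_max is held fixed), by the space of possibilities itself growing (H_max increasing while H is held fixed), or by some combination of the two; the latter case is precisely the circumstance in which the observer is obliged to enlarge his description.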

Two important questions follow from these definitions. There is a logical problem: "If the system is changing, what makes it the same system (other than just by edict of the observer)?" Ashby and other cyberneticians of his generation were well aware of this problem of system identity and acknowledged that the concept "self-organising system" is contradictory. However, the term continues to be employed as a useful shorthand. Ashby, himself, used the concept of essential variables in order to develop his formal, axiomatic cybernetics of systems that evolve as stable unitary entities. Essential variables are those that have to be maintained within prescribed limits for the system to persist but which the observer may not be able to (indeed, usually cannot) specify exhaustively, in detail. For this notion, of there being something intrinsic to the system that cannot be changed (beyond certain limits) without the system losing its integrity, Ashby also coined the phrase "informationally closed".

"Cybernetics and general system theory are both primarily concerned with systems that are open to energy but are closed to information and control" [4].

Perhaps Ashby's greatest single insight and contribution to abstract cybernetics was to recognise that for any such system inhabiting a universe subject to any lawful constraint, the system would inevitably evolve so as to become informed of that constraint. This idea continues to fascinate. Nietzsche, on discovering Darwin, declared that mankind is at the "great noontide" where we are able to say how we came to be what we are. For the reductionist, materialist cosmologist Frank Tipler, this is our promise of immortality [8].

Ashby developed his ideas within a "collegiate" of fellow cybernetic thinkers (Pask, Beer, Gunther, Wiener, McCulloch, Loefgren and Maturana, to name a few), with whom he corresponded, met at conferences or worked alongside at the Biological Computer Laboratory, University of Illinois, founded and directed by Heinz von Foerster. Towards the end of the 1960's, there was a convergence of thinking concerning the problems of self-reference and the modelling of complex, dynamical systems, not least those systems that are capable of explicit self-reference and self-description, amongst whom are those communities of language-oriented systems known as human observers. From the perspective of one observer fairly close to some of the key events [9], several things seemed to happen almost at once.

Loefgren [10] legitimised within formal set theory the notion that explanations (descriptions) may explain (describe) themselves. He also contributed a fairly simple Turing machine model for the evolution of such systems. Maturana developed these ideas using the term autopoiesis to refer to "the organisation of the living".

I am aware that such a terse account of events hardly does justice to the richness of the ideas and work done, as notions of self-reference were brought into scientific discourse. I wish to emphasise that there was a conversational coming-together and learning-together, in which a new cybernetic epistemology was elaborated, an epistemology of the observer, in which, as Gunther [12] notes, there is "no prior commitment to a particular ontology".

To summarise so far: self-organisation, as defined by von Foerster, implies an evolution of system complexity. This in turn makes the notion of system identity problematic. The question of identity is resolved if the integrity of the organisation of the system is taken as betokening its survival, in which case the system may be said to be self-producing (autopoietic), that is, the system itself defines itself, rather than being observer defined. Ipso facto, the system is a cognitive system, an observer in its own right. An observer who interacts with the system as an "it" is regarding it as a first order, taciturn system. An observer who interacts with the system as if it were an observer (like himself) engages with it as a second order, language-oriented system (as a "thou" to his "I").

Adaptation, evolution and learning.

Systems that are self-organising in von Foerster's terms are observed to evolve: they adapt to different environments, they acquire new repertoires of behaviour. How does this take place? The basic process has been highlighted in the work of Holland on evolutionary algorithms [19]: there is an "internal to the system" mechanism for generating variety. In other words, learning is an evolutionary process, in which selection acts on a population of possible "behaviours" or "cognitive structures" or "neuronal groups". Although this idea is not original to Holland (see [20] for an early paper on this theme) and is currently being developed by many workers (Kauffman, Edelman, Freeman), it is perhaps worth recalling that in the earlier part of this century it was still a common belief that it was the input of sensory stimulation that led organisms to be active. Holland himself was greatly inspired by the writings of Hebb [21].
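
A toy illustration may help to fix the idea. The sketch below (in Python; the bit-string encoding, the population sizes and the fitness function are all invented for the purpose and are not Holland's own formulation) shows the bare mechanism: variety is generated internally, by mutation and recombination, and selection acting on a population of candidate "behaviours" does the rest.

    import random

    GENOME_LEN = 20      # length of a candidate "behaviour"
    POP_SIZE = 30
    GENERATIONS = 40
    MUT_RATE = 0.02      # probability of flipping each bit

    def fitness(genome):
        # Toy environmental constraint: favour behaviours with more 1s.
        return sum(genome)

    def mutate(genome):
        return [1 - g if random.random() < MUT_RATE else g for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    def evolve():
        population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                      for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            # Selection: fitter behaviours are more likely to reproduce.
            population.sort(key=fitness, reverse=True)
            parents = population[:POP_SIZE // 2]
            population = [mutate(crossover(random.choice(parents),
                                           random.choice(parents)))
                          for _ in range(POP_SIZE)]
        return max(population, key=fitness)

    random.seed(0)
    best = evolve()
    print("best behaviour:", "".join(map(str, best)), "fitness:", fitness(best))

Nothing instructs the population what to become; the environmental constraint, embodied here in the fitness function, merely selects among the variety the system itself generates.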

Memory and storage.

Von Foerster [13] has made elegant pleas that we should be aware of the semantic confusions that may arise when "memory" is confused with "storage", as it is in the jargon of computing. By memory, von Foerster means the cognitive processes of an observer, in which the processes of perceiving, knowing and remembering are, logically, inseparable as processes. However, cognitive processes are embodied and, insofar as learning, perceiving and remembering are taking place, it is common practice to refer to "memories" as being stored somewhere. Confusion arises when we find ourselves saying that "memories" are stored in "memory"! Nonetheless, storage does require fabric, just as cognition requires embodiment. Some of that storage may be in parts of brains, more or less distributed, but it is also evident that much is stored in the environment (it has become fashionable to talk about "situated cognition"). The "marks" that support and inform cognitive processes may be anywhere (situated and distributed cognition).