
Cognitive Pluralism

Part II: Cognitive Pluralism and the Architecture of Understanding

Draft Material (August 2014)

Please do not cite without permission.

This material is posted online in hopes of professional feedback. I welcome your suggestions, emailed to the address below. (Ideally, it would be most helpful if you added marginal comments using the Comments feature in Word and sent me copies of the files with your comments.)

Steven Horst

Professor of Philosophy, Wesleyan University

A Few Notes on this Draft

This is draft material for a book, tentatively entitled Cognitive Pluralism. I am posting Word documents with drafts of each of the four parts of the book, and will post them separately as they come into a suitably clean form to make available for comments:

Part I: Unities and Disunities of Mind and Understanding

Part II: Cognitive Pluralism: A Model-Based View of Understanding

Part III: Cognitive Pluralism and the Disunity of Understanding

Part IV: Cognitive Pluralism and Metaphysics

In some places, the draft material will have references not yet filled in or arranged into a bibliography. There will also be places where I have left marginal notes to myself regarding things that may need additions or editorial decisions, or that will need to be adjusted in light of changes made to other chapters. I have marked such places for my own later editing in yellow highlight or with marginal comments. I apologize for any frustration these may cause the reader.

I am grateful for any and all comments readers may have. As I pull together, organize, and rewrite material originally drafted over almost a decade, I have tried to bring some unity of style to the work. I have decided to try to make it accessible to a broad audience, introducing terminology where possible, while attempting to avoid the twin perils of overviews that are longer than they need to be and jargon understandable only to those within a specialized discipline. Inevitably, I fear, no such choice will prove perfectly optimal, and there will be some disunities of style, depth, and technicality. I hope that comments from readers of different backgrounds will provide guidance on how the work can be brought closer to that optimum, so I am particularly interested in feedback on which sections were unclear (or unnecessary).

It may prove both easiest for the reader and most helpful to me if commentary is done by using Word’s comment feature and sending me back the file with your comments. If you do this, however, please put your last name in the file name so that I will be able to keep track of whose comments are whose.

Steven Horst

Part II: Cognitive Pluralism and the Architecture of Understanding

Chapters in this Part:

4. Cognitive Pluralism

  1. The Plausibility of a Pluralistic Cognitive Architecture
  2. Models
  3. Mental Models
  4. Models and Intuition
  5. Relations Between Mental Models
  6. A Speculative Phylogeny of Human Cognition
  7. Issues About the Nature and Status of Models (omitted in this draft)

[4]

Cognitive Pluralism

In Part I, we saw evidence that understanding is organized into units that are domain-specific, relatively self-contained, and equipped with distinctive representational systems. This feature is found in nativistic and highly encapsulated perceptual Fodor-modules; in non-perceptual cognitive systems (such as Core Knowledge systems) that are canalized early in development and might be products of natural selection; in the organization of acquired understanding into semantic networks and frames; and in scientific models, our most exacting and regimented form of understanding. Moreover, at each level, there are several types of “plurality” to be found. First, there are a number of distinct domain-specific models. Second, we often utilize several of them in tandem to understand a single situation. And third, there is reason to doubt that these can be “unified” in the strong senses of (a) being reduced to a common denominator, or even (b) rendered formally consistent with one another.

I wish to suggest that we view this representational pluralism as a deep and fundamental design principle of the cognitive architecture of humans and other animals. In this chapter, I shall begin to elaborate this thesis under the name of Cognitive Pluralism. This elaboration will proceed in the following stages. First, I shall attempt to spell out more exactly what it means to say that cognition has a pluralistic architecture, and how this general principle is realized differently in weakly nativistic modules and learned models. Second, in [the next chapter], I shall make a case that such an architecture is to be expected on evolutionary grounds, and that it confers distinct epistemic virtues upon organisms that possess it. Third, I shall argue that intrinsic features of modeling aspects of the world can themselves present barriers to strong unifications of knowledge, whether in the sciences or more generally. In Parts III and IV, I shall then turn to the implications of Cognitive Pluralism for epistemology and metaphysics, and to its ability to cast light upon philosophical puzzles, such as the tension between the semantic features we actually find in human concepts and the features required of predicates in standard logics.

What is Cognitive Pluralism?

The basic thesis of Cognitive Pluralism is that the mind employs multiple special-purpose models of parts, aspects, and features of the world, rather than (1) a single, consistent, and integrated model of everything or (2) a long inventory of more specific and independent individual beliefs. If a “worldview” is construed as a comprehensive and consistent model of the world, then we possess nothing that answers to the description of a worldview. But our understanding is, at the same time, far more systematic than a mere list of beliefs, even one that happens to cohere into a consistent set.

There are really two different kinds of claims that it is important to distinguish. First, there is a de facto Cognitive Pluralist claim about what we actually find in the minds of particular human beings or other animals – for example, that your ways of understanding the world, or mine, lack such unity. Second, there is a stronger Cognitive Pluralist claim about what is possible for minds like ours: namely, that we cannot achieve the kind of integrated, comprehensive, and consistent understanding of the world envisaged, for example, by Spinoza. I shall, in the course of the book, present arguments for both of these theses, but they should be understood as advanced in very different tones. The weaker, empirical claim is put forward as a theory, an attempt to re-orient our view of cognition. It can probably be sharpened and adjusted in many ways I have not anticipated, each of which can be empirically tested; but as a high-order theoretical claim, the real test of it is how well it brings to light fundamental features of cognition. The stronger thesis, about what kind of understanding we can achieve, is speculative and diagnostic. There is no way to directly explore what forms of thinking human minds are, in principle, capable of, and so claims about what cannot be done are impossible to verify and sometimes embarrassingly easy to refute. What I shall argue, more exactly, is this: that the kind of pluralistic cognitive architecture I shall describe can take forms that present principled barriers to the integration of understanding from different domains. I shall remain officially noncommittal on the question of whether the human mind suffers from these particular limitations, while presenting the possibility of principled limitations as an open problem for further research and discussion.

The type of “plurality” that is at the heart of Cognitive Pluralism is primarily a representational plurality, and the type of “unity” that is denied is a representational unity. Cognitive Pluralism is the thesis that our ways of understanding the world are all partial, idealized, and cast in individual representational systems, and perhaps cannot be reconstructed into a single representational system that is at once comprehensive and consistent. These different representational systems are attuned to particular phenomena in the world, and weakly optimized for pragmatic goals in interacting with them. Some of these representational systems are weakly nativistic and take species-typical forms. Others are acquired through trial and error, social learning, and the special processes involved in learning technical theories like those found in mathematics and the sciences; and which ones are acquired may vary widely between individuals, and over the course of a lifetime in a single individual. Through them, we “triangulate” a common reality without the construction of a comprehensive and consistent worldview.

Cognitive Pluralism need not be committed to denying other, non-representational types of cognitive unity, such as personal identity, the transcendental unity of apperception, the unity of individual intentional states or perceptual gestalts, or the ability to combine insights originating in separate models through logic and language. Nor need the Cognitive Pluralist be hostile to the project of unification as a regulative ideal. Seeking such unifications of understanding as can be found is compatible with the belief that there may be principled limits to how far such a project can succeed, and indeed presumes that our current understanding is not unified in the desired fashion. Nor, at this point, do I wish to draw any implications for epistemology and metaphysics. Those will be addressed in later parts of the book, but my concern now is with Cognitive Pluralism as a claim about cognitive architecture.

Modules and Models

I propose that a pluralistic cognitive architecture can be found at a number of biological, evolutionary, and cognitive levels. It is found as a design principle in many animal species, generally in the form of developmentally-canalized modules. This modular architecture arguably becomes more weakly nativistic (i.e., increasingly open to developmental factors and learning) in more sophisticated animals – i.e., those with a more complex neural structure – but is conserved even in human beings. In humans, and perhaps in some other species, pluralistic architecture takes a new turn in the ability to acquire knowledge of the world, and of how to interact with it, in the form of domain-specific learned models. Scientific models are a special case of the latter, whose special features lie in their regimentation and in their minimization (though not elimination) of features peculiar to the cognizer. In humans, this pluralistic architecture is supplemented by special capacities for logical and linguistic thought, which permit concepts and insights originating in modular and model-based understanding to be combined in a domain-general representational medium. However, this domain-general medium does not thereby become a universal and domain-general super-model. In humans, moreover, a great number of learned models are socially shared or even socially distributed, and are transmitted through language and social learning.

I shall use the expression ‘mental model’ as a generic term for the domain-sized units of understanding we have observed in modules, Core Knowledge systems, folk theories, scientific theories, and Minskian frames. Settling on terminology for domain-sized units of understanding was in some measure a matter of choice, as there were several obvious candidates, none of them perfect, and each with pre-existing usages: ‘model’, ‘schema’, ‘frame’, ‘framework’, ‘theory’. In the end, I settled upon ‘model’ largely because it seemed to cause fewer problems and misunderstandings than the others. Psychologists balked at ‘schema’ because of confusions with a well-known “schema theory” in their discipline. [] Minsky’s notion of a frame or framework is very close to what I mean by a mental model, but it is likewise associated with a particular theory, and one whose associations with Strong AI (the assumption that the mind is literally a computer and that mental processes are computational) I do not wish to endorse. I do not actually think that Minsky’s frame theory requires an endorsement of Strong AI; however, given that it originated in this context, in the end I decided not to use his terminology, to avoid potential misunderstandings. The term ‘theory’ I prefer to reserve for a specialized form of cognition found paradigmatically in the sciences. And so I was left with ‘model’ by a process of elimination. The term does have many uses, including an importantly different use in cognitive science explored by Philip Johnson-Laird (1983) and a philosophical use in logic, and I shall attempt to situate my notion of a “model” in relation to these in [Chapter xx].

I shall treat the notions ‘module’ and ‘model’ as overlapping categories, typified on different grounds. If a cognitive system is a ‘module’, it must have a strongly or weakly nativistic (i.e., canalized) etiology and a species-typical relation between functionality and neural localization (including distributed localization [Anderson]). By contrast, I shall speak of a cognitive system as a “model” just in case it affords the possibility of representations of particular states of affairs within a state-space of situations in the world, in the organism, or at the interface between organism and world. “Models” in this sense can be nativistic or products of learning, and thus some modules may be models as well. However, further examination of some modules may lead to the conclusion that they drive adaptive response without representing features of the world, of the organism, or of the interface between the two. It is thus an empirical and theoretical question whether all modules are also models. (I apologize for the difficulties of parsing that may be caused by the phonetic similarities of the words ‘model’ and ‘module’. Unfortunately, each seems the most apt word to use for its own subject-matter.)

We may, however, contrast modules, including those that are also models, with learned models, whether the latter are acquired through trial-and-error learning or through social transmission. It is possible that this will prove to be a continuum rather than a clean partition, as weakly nativistic structures often require training through experience. Whether there will prove to be a continuum of ways that learning is implicated in the formation of mental models, or whether there will be a natural line of demarcation, is in large measure an empirical question.

Models and Representation

To model something is necessarily to model it in a particular way, employing a particular representational system. For example, classical mechanics modeled space under gravitation using a Euclidean metric, whereas relativistic mechanics models space under gravitation using a Lorentzian metric. One map models the surface of the Earth using a Mercator projection with lines representing roads, while another uses a polar projection with lines representing altitudes of landforms. Fechner modeled psychophysical data using a logarithmic scale; Stevens used a power function.

A model is characterized by:

  • The types of objects, properties, relations, and states of affairs it models
  • The set of operations or transformations among these it models
  • The space of possible representations of states of affairs that is generated by its representational system

For example, a model of the game of chess must have ways of representing different types of pieces, the space of positions on the board, the moves possible for each piece, captures, and the overall game state. A model of space under gravitation must contain representations of objects (bodies), their relevant properties (mass, position, momentum), and laws governing their gravitational dynamics.
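For readers who find a schematic helpful, these three characteristics can be displayed in a minimal sketch in code. What follows is a hypothetical illustration, not an implementation proposed in this draft: the names State, rook_moves, and STATE_SPACE are invented for the example, and the chess fragment is drastically simplified to a lone rook on an otherwise empty board.

```python
# A minimal, hypothetical sketch of the three characteristics of a model:
#   (1) the types of objects, properties, and relations it models,
#   (2) the operations or transformations among these,
#   (3) the space of possible representations its system generates.

from dataclasses import dataclass
from typing import Iterable


@dataclass(frozen=True)
class State:
    """What is representable: the rook's square, as file and rank (0-7)."""
    file: int
    rank: int


def rook_moves(s: State) -> Iterable[State]:
    """The licensed transformations: a rook moves along its rank or file."""
    for f in range(8):
        if f != s.file:
            yield State(f, s.rank)
    for r in range(8):
        if r != s.rank:
            yield State(s.file, r)


# The space of possible representations generated by the system: 64 states.
STATE_SPACE = [State(f, r) for f in range(8) for r in range(8)]

if __name__ == "__main__":
    corner = State(0, 0)
    print(len(STATE_SPACE))               # 64 representable states of affairs
    print(len(list(rook_moves(corner))))  # 14 transformations from a corner
```

The point of the toy is only that a model’s representational system simultaneously fixes what can be represented, how representations may be transformed, and the total space of representations available – the three characteristics enumerated above.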

A model is apt to the extent that its representational system tracks the salient properties of the phenomena that it is its function to model. Aptness need not be an all-or-nothing matter. Classical gravitation models are quite apt for modeling the influence of gravity in most of the cases human beings encounter, but not for relativistic cases. Indeed, aptness may have multiple components. It may involve both fit with a range of data and suitability for things like computation. Classical models approach relativistic models asymptotically with respect to aptness of fit in low-mass, low-velocity situations, and may exceed them with respect to computational simplicity. Aptness is a pragmatic matter, and there are multiple dimensions of pragmatic virtue.
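The sense in which the approach is asymptotic can be illustrated with a standard expansion from mechanics, given here as background rather than as material original to this draft. Expanding the relativistic energy of a body of mass $m$ in powers of $v/c$,

$$E = \frac{mc^2}{\sqrt{1 - v^2/c^2}} = mc^2 + \tfrac{1}{2}mv^2 + \tfrac{3}{8}\,m\,\frac{v^4}{c^2} + \cdots,$$

the classical kinetic-energy term $\tfrac{1}{2}mv^2$ is recovered as $v/c \to 0$, with corrections suppressed by factors of $(v/c)^2$. Fit degrades gracefully rather than failing outright, while the classical formula remains far simpler to compute with.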

Representation

The notion of ‘representation’ is one that has a checkered history in philosophy. Indeed, in my first book (Horst 1996), I took computationalists’ theories of the mind to task for their use of a notion of “representation”. Readers of that book may thus find it surprising that I should make use of a notion of “representation” here. My concern there, however, was that a particular familiar notion of representation was being put to an illegitimate use. Computationalists rely on a notion of representation grounded in the paradigm of symbols. Paradigm examples of symbols, like those in written or spoken language, have both syntactic and semantic properties. Computationalists posit that “meaning” may be attributed univocally to symbols in a natural language, mental states, and hypothetical symbols in a language of thought. They then suggest that the meanings of mental states can be explained by the supposed “meanings” of symbols in the language of thought, on the grounds that we already know (on the basis of symbols in a public language) that symbols are the sorts of things that can have meanings.

But the sense in which mental states are meaningful cannot be explained by appeal to the type of “meaning” attributable to public-language symbols, as the latter needs to be cashed out in terms of (1) the conventions of a public language and/or (2) the intentions of their authors and/or (3) the actual interpretations of hearers. To say that an inscription “means-X” just is to say something about its conventional interpretation, the intentions of its author or speaker, or the interpretation given by its reader or hearer. But this notion of “symbolic meaning” is not suitable to underwrite the meaningfulness of mental states. To posit this sort of “meaning” for symbols in a language of thought, and then to use this to explain the meaningfulness of mental states, is to fall into circularity and regress, as each mental state would require explanation in terms of symbolic meaning, and each symbol would require explanation in terms of a prior mental state and/or convention. (For a longer version of this argument, see Horst 1996, especially Chapters 4 and 5.)

While this criticism undercuts a quick and easy way of explaining the meanings of mental states, and a too-close assimilation of whatever underlies mental states to public-language symbols, it was never meant to imply that there is no important use of the notion of “representation” in the cognitive sciences. However, it is necessary to try to make explicit the notions of “representation” and “representational system” that are relevant to the current enterprise.