
CHAPTER 2

Concept Pragmatism: Declined and Fell

Part 1: Definitions-in-use and the circularity arguments.

If RTM is true, then concepts are:

- constituents of beliefs

- units of semantic evaluation

- a locus of causal interactions among mental representations

- formulas in Mentalese

But what on earth does all that mean? And, supposing that there is something that it means, of what sort of things could it all be true?

In the beginning, psychologists, linguists, philosophers, and the man on the Clapham omnibus all agreed that (many) concepts have (or are) definitions; that, as I’ll sometimes say, many concepts are `constituted by’ their defining inferences.[1] Thus, the concept BACHELOR is constituted by the defining inferences `if something is a bachelor, then it is an unmarried man’ and `if something is an unmarried man, then it’s a bachelor’; if you replace `bachelor’ with a word that expresses any concept other than BACHELOR, at least one of these inferences fails.

But then, until quite recently, theorizing in the RTM tradition achieved a widespread consensus to the contrary: The thing about definitions is that, by and large, there aren’t any. More generally: whatever a concept is, it’s pretty clearly not a set of conceptually necessary and sufficient conditions for being in the extension of the concept. You can’t, for example, say what the concept DOG is by filling in the blank in `something is a dog iff it is …..’; not, at least, if you want what results to be a definition of `dog’; try it yourself and see.[2] So, cognitive scientists said to one another things like this: `There is much that we still don’t know about concepts (further research is required), but at least we know that concepts aren’t definitions. Do let me tell you about stereotypes (or prototypes, or exemplars, or whatever is currently the rage).’ Thus the prevailing consensus until quite recently.

It was, however, at best imperfect. There remained a cadre of unconvinced linguists (in the `lexical semantics’ tradition) who held some or other version of the thesis that there is a `semantic level’ of grammatical representation at which definable words are represented by their definitions; at which, for example, `kill’ is represented as `cause to die’. On this view, the concept KILL is a complex out of the (relatively) primitive concepts CAUSE and DIE (and, correspondingly, the meaning of the word `kill’ is a complex out of the meanings of the words `cause’ and `die’). I don’t propose to discuss lexical semantics; I’ve done so elsewhere[3] (see `Concepts’, chs xxx), and enough is enough. But there is a recent vogue (primarily in the philosophical literature) for reviving the definition construct in an altered version in which `definitions-in-use’ play the role of definitions-tout-court. This definition-in-use story deserves close scrutiny if only because it raises some Very Large Questions about the individuation of concepts, about what it is to have one of them, and about relations between natural language semantics and logic. I propose to consider it in some detail, starting with a word or two about definitions-tout-court just to remind you of the background.

Definitions and the containment theory.

There were differences of opinion among friends of definitions about how many words can be defined, but it was common ground that not all of them can be. There must be a basis of `primitives’: undefined terms in which the definitions of the others are couched. In consequence, a theorist who is committed to the claim that definition is an important notion in semantics must somehow explain how this basis is to be chosen. That turns out not to be a small matter.

The Anglophone tradition, in both philosophy and psychology, largely accepted the empiricist thesis that all concepts can be (indeed, must be) defined in terms of a primitive basis of sensory concepts like (e.g.) RED or HOT or (maybe) ROUND. This empiricist semantics was in turn supposed to be grounded in an epistemology according to which all knowledge is experiential in the long run. I think, however, that even the friends of definitions now pretty generally agree that the empiricist project failed;[4] it was a cautionary example of what happens when you try to read your semantics off your epistemology. In fact, our concepts invariably overrun their experiential basis; trees and rocks aren’t, after all, reducible to tree-experiences or rock-experiences. That should hardly seem surprising; experiences are mind-dependent (no minds, no experiences), but trees and rocks are not. You can climb trees and throw rocks, but you can’t climb or throw sensations. Empiricism couldn’t but founder on such ontological truisms.

In contrast with the empiricist tradition, current discussions of the primitive basis for conceptual (/lexical) reductions often favor the idea that it consists not just of sensory concepts but also of some very abstract `metaphysical’ concepts like CAUSE, AGENT, ACTION, FACT, EVENT, etc. (see, e.g., Carey, 19xx). As far as I know, however, there are no serious proposals for cashing the `etc.’[5] For one thing, it’s very unclear what makes some concepts metaphysical and others not. For another thing, neither sensory concepts nor the putative metaphysical ones generally behave in the ways that primitive concepts presumably ought to. For example, I suppose that (all else equal) a primitive concept ought to be more accessible in `performance’ tasks than the concepts that it is used to define. By this criterion, however, the psychological data do not support the thesis that the primitives are either sensory or very abstract, or a combination of the two. To the contrary, it appears that the most accessible concepts, both in perceptual classification and in ontogeny, are `middle level’ ones (see Rosch, xxx). In fact, it appears that children generally learn sensory concepts relatively late, whether the criterion of learning is ability to sort or ability to name. Likewise, a dog is, I suppose, a kind of animal; but subjects are faster at classifying a dog as a dog than at classifying the same dog as an animal. A killing is, I suppose, a kind of event; but there is surely less consensus about what’s an event than about what’s a killing. Subjects who agree about how many passengers the plane crash killed may thus have no clear intuitions about how many events the crash consisted in (one for each wing that fell off? one for each engine that failed? one for each passenger who died? one for each passenger who didn’t die? and so forth). And God only knows at what stage of cognitive development the concept EVENT becomes available; I’m not at all sure that I’ve even got one.[6]

Finally, and again no small matter, inferences that definitions license are supposed to have a special kind of modal force. If `X = A+B’ is true by definition, then `Xs are A’ and `Xs are B’ are supposed to be conceptually (/linguistically) necessary; they’re supposed to be `analytic’, as philosophers say. It was a virtue of the definitional account of concepts that it seemed to explain why this is so: Scanting details, the basic idea was that analytic (as opposed to nomological or logical) necessities are engendered by containment relations between complex concepts and their constituents. `Bachelors are unmarried’ is conceptually necessary because the concept UNMARRIED is literally contained in the concept BACHELOR.
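
For readers who like to see such things written down, the containment idea can be given a formal dress. What follows is a minimal sketch in Lean; it is my rendering, not anything the definitional tradition itself offers, and the domain `D’ and the predicates are stipulated purely for illustration. The point is just that if BACHELOR is literally a complex with UNMARRIED and MAN as constituents, the `analytic’ inferences are projections of that structure and nothing more.

    -- Minimal sketch (an assumed rendering): model concepts as predicates
    -- over a domain D, and let BACHELOR literally contain its defining
    -- constituents as the two halves of a conjunctive structure.
    variable {D : Type} (Unmarried Man : D → Prop)

    abbrev Bachelor (x : D) : Prop := Unmarried x ∧ Man x

    -- The 'analytic' inferences fall out of constituency alone:
    example (x : D) (h : Bachelor Unmarried Man x) : Unmarried x := h.1
    example (x : D) (h : Bachelor Unmarried Man x) : Man x := h.2

On the containment picture, that is the whole story: nothing over and above the structure of the complex concept is needed to license the defining inferences.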

Drawing a connection between analyticity and compositionality and constituency was, I think, profoundly right-headed. For, notice, the containment theory works pretty well for analyticities that involve phrases: `brown cow → brown’ is intuitively analytic; plausibly, that’s because `brown’ is a constituent of `brown cow’. But, prima facie anyhow, the containment story often seems not so plausible when it’s applied within lexical items. Thus, to stick with the canonical example, it’s supposed that the concept KILL is the concept CAUSE TO DIE; i.e. that KILL is a structured object of which the concept DIE is literally a constituent; likewise, that the concept DOG is a structured object of which the concept ANIMAL is literally a constituent, and so forth. But then what is one to make of the concept RED? On the one hand, the inference `red → colored’ would certainly seem to be as plausible a candidate for analyticity as most. On the other hand, there isn’t any X such that `colored & X → red’ (except, of course, `red’ itself). This is presumably an extreme case of the `X-problem’ that was mentioned in fn. 3.

So the containment theory of analyticity confronts a dilemma: On the one hand, it really is very plausible that `brown cow → brown’ is analytic because `brown’ is a constituent of `brown cow’. On the other hand, the recurrence of the X-problem suggests that if `red → colored’ (or, mutatis mutandis, `kill → die’, or `dog → animal’) is analytic, that isn’t because `colored’ is a constituent of the meaning of `red’. It looks like we will have to deny either that analyticity derives from containment or that `red → colored’ is analytic.

If those are indeed the options, the second seems the better of the two. It isn’t, after all, implausible that `red → colored’ is a fact about what redness is, not about what `red’ means; likewise, that dogs are animals is a fact about dogs, rather than a fact about `dog’. The moral of the persistence of the X-problem may be that God knew what he was doing when he made the lexicon. `Brown cow’ looks to be a complex symbol of which `brown’ is a part. Maybe that’s because `brown cow’ is a complex symbol of which `brown’ is a part. `Red’ doesn’t look like it’s a complex symbol of which `colored’ is a part. Maybe that’s because `red’ isn’t a complex symbol, a fortiori not one of which `colored’ is a part. Appearances aren’t always deceptive; perhaps the reason there appears not to be internal semantic structure in lexical items is that there isn’t any.

In any case, it’s common ground that if `red’, `kill’, `brown’, etc. have constituent structure, they do so only at the (putative) semantic level. The relevant generalization seems to be that (BACHELOR to the contrary notwithstanding) analyticity seems unproblematic only where constituent structure seems unproblematic. Conversely, the fact that it’s not obvious that `cow → animal’ is analytic is part and parcel of the fact that it’s not obvious that ANIMAL is a constituent of COW. If what you had in mind is that constituency should come to the rescue of analyticity, thereby making the world safe for conceptual necessity, nothing much has happened so far that you could find encouraging.[7]

So much for the proposal to reduce linguistic (/conceptual) necessity to containment-at-the-semantic-level. Since it was the core of traditional definitional semantics, and given the troubles it ran into, you might have thought that we’d seen the last of definitional semantics. But no; definitions are back in philosophical fashion.[8] Whatever comes around comes around again. The motivation for the revival isn’t far to seek: Many philosophers think that there are proprietary philosophical truths, a mark of which is that they are knowable a priori. Well, such philosophers are prone to reason: if there are a priori truths, there ought to be some story about what it is that makes them a priori; and, if there are definitions, there is indeed a story to tell: A priori truths derive from the analysis of concepts (/from the definitions of terms). Accordingly, philosophers can discover a priori truths by a process of lexical/conceptual analysis. Since there would seem to be no other plausible account of a prioricity on offer,[9] we need definitions on pain of technological unemployment among analytic philosophers. So there had better be definitions. It is possible, in certain moods, to find this line of argument very convincing.

Definitions-in-use

We’ve just been noticing the traditionally close relation between saying that there are definitions and saying that analytic inferences, a priori inferences, conceptually necessary inferences and the rest arise from relations between complex concepts and their constituents. It is, however, possible to separate these two theses; in particular, to hold that there are many definitions but concede that there is less constituency at the semantic level than the containment account of analyticity had supposed.[10] Enter definitions-in-use. Definitions-in-use are like definitions-tout-court in that both make certain inferences constitutive of the concepts that enter into them. But definitions-in-use are unlike definitions-tout-court in that the former don’t suppose that defining inferences typically arise from relations between complex concepts and their constituents.

The canonical definition-in-use of `and’ will serve to give the feel of the thing. The suggestion is that the semantics of `and’ ought to explain (for example) why inferences of the form `P&Q → P’ are a priori valid. What’s required in order to do so is that there are (presumably analytic) inferential rules that serve to introduce ANDs into some Mentalese expressions and eliminate them from others.[11] The idea is that a formulation of these rules would in effect provide a conceptual analysis of AND by reference to its use in inference. Traditional definitional theories propose to explain conceptual necessity and a prioricity indirectly, by postulating a semantic level at which relations of conceptual containment are made explicit. By contrast, definition-in-use theories propose to explain both by a direct appeal to the notion of a defining inference. If so, then a definition-in-use of AND and the like would show that inferential role semantics (IRS) can yield, at a minimum, a plausible account of the logical constants. Definition-in-use accounts of the logical constants are thus regularly offered as a parade case for IRS at large.

The standard formulation of the rule of `and’-introduction is:

    P
    Q
    _______
    P and Q

The standard formulation of the rule of `and’-elimination is:

    P and Q        P and Q
    _______        _______
       P              Q
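
The sense in which these two rules might be thought to exhaust the content of AND can be displayed formally. Here is a minimal sketch in Lean; it is my illustration, not something the definition-in-use literature itself provides. In Lean, conjunction behaves exactly as the introduction and elimination rules just stated say it should:

    -- 'and'-introduction: from P and from Q, infer P ∧ Q.
    example (P Q : Prop) (hp : P) (hq : Q) : P ∧ Q := And.intro hp hq

    -- 'and'-elimination: from P ∧ Q, infer P; and from P ∧ Q, infer Q.
    example (P Q : Prop) (h : P ∧ Q) : P := h.left
    example (P Q : Prop) (h : P ∧ Q) : Q := h.right

Notice, for what follows, that even stating the rules requires using `and’ (or `∧’); the circularity objection below turns on just this point.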

There are, in the recent philosophical literature, strikingly many passages that go more or less like this: `You want to know how meaning works? Let me tell you about `and’; the rest is just more of the same.’ Since, moreover, the usual suggestion is that the inference rules that define `and’ are what you learn when you learn what `and’ means, definition-in-use purports to exhaust not just the semantics of the logical concepts, but also their `possession conditions’. It is, in fact, a typical claim of inferential role semanticists that an account of the individuation of a concept should do double duty as an account of what it is to grasp the concept, and that examples like AND suggest that IRS can do that. All in all, not a bad day’s work, one might think.

How plausible is it that definition-in-use will do for semantics what Clapham definitions weren’t able to? Not very, it seems to me. There are two main sorts of objections, each of which strikes me as adequately fatal. The first purports to show that the definition-in-use story doesn’t work even for the logical concepts; the second purports to show that, even if it did work for the logical concepts, it doesn’t generalize to anything much else. I think both these objections are well taken. I propose to discuss the first of them now and the second in Part 2 of this chapter.

The circularity objection

As I mentioned above, it is a widely advertised advantage of definitions-in-use, indeed of use theories of meaning in general, that they comport with a plausible account of concept possession. The relevant point is straightforward: Please have another look at the rules of `and’-introduction and `and’-elimination. Notice that, in the formulation of each, `and’ occurs; not only as the term defined, but also as part of the definition. There is thus a prima facie case that the claim that knowing its definition-in-use reconstructs knowing what `and’ means is simply circular: if there’s a problem about what it is to understand `and’, there’s the same problem about what it is to understand its definition-in-use. The same point is glaringly clear if the definition-in-use is offered as an answer to `What is learned when `and’ is learned?’ What’s learned can’t possibly be a rule in which `and’ occurs since, if it were, nobody could learn `and’ unless he already knew what `and’ means. Prima facie, that is not a desirable outcome.[12] All that seems sufficiently self-evident; and the implication seems to be that there are problems about co-opting definitions-in-use as theories about what it is to understand a word (mutatis mutandis, to grasp the concept that a word expresses).

There is, however, a standard scratch for this itch. Philosophers who accept the idea that having (/learning) a word (/concept) is knowing its definition-in-use just about invariably also assume that the kind of knowing that’s pertinent is `knowing how’ rather than `knowing that’. The idea is that we should save the identification of definitions-in-use with possession conditions by refusing to identify reasoning with a concept with following the rules that its definition-in-use lays down. Contrary to the spirit of RTM, one doesn’t hope to reconstruct knowing what `and’ means in terms of mentally representing its definition (or, indeed, of mentally representing anything else). In the sense of `know’ that’s relevant to specifying the possession conditions of concepts, knowing what `and’ means consists simply in being disposed to make (/accept) such `and’-involving inferences as are licensed by its definition. It isn’t further required, for example, that one consult the definition in drawing the inferences. The claim that `knows how’ precedes `knows that’ in order of analysis is, of course, the Pragmatist thesis par excellence. So we have at last arrived at the title of this chapter.