HLW-chap 3 rev050131.doc 5000 words 17pp
How Language Works
Chapter 3: Rules and Meta-Rules
*need lead-in quote on rules
1. Tokening rules
forms of misrepresentation
2. Second Order Rules
3. Examples of how second order rules work [expand]
*4. Innateness - Chomsky, Fodor and Pinker
5. Skepticism about Rules – Kripkenstein
6. Reply to skepticism about rules [unfinished]
7. Summary / recap
[8 = missing pieces]
How Language Works: norms to naturally mean
From spelling to pronunciation to grammar to conversational style, language is governed on all sides by rules. In particular, the tokening of sentences is governed by many rules and norms. The rules that are crucial for conventional meaning are what I will call “semantic tokening rules”. These rules link language to the world – they constrain tokenings of indicative sentences, which is where the linguistic rubber meets the road.
We saw in the last chapter that natural signs carry information about the world because they correlate with states of the world. We also noted that some animals produce signs that carry meaning. The signs carry meaning because their production is constrained by causal mechanisms so that they correlate with states of the world. Humans are capable of a uniquely powerful conventional sign system, language. Linguistic signs are governed by tokening rules that enable a correlation between the tokening, the world, and the structure of the token.
Conventional meaning rules for indicative sentences have the form: “Token S only if M”, where S denotes a sentence and M denotes a state of the world. Thus an English sentence such as “There’s a beer in the fridge” is governed by a tokening rule: “Token ‘There’s a beer in the fridge’ only if there is a beer in the fridge.” This way of expressing the rule looks redundant because both the rule and the enclosed sentence that the rule governs are in English. We can avoid this appearance if we use a metalanguage other than the object language to state the rules (e.g. use English to specify the tokening rules for German sentences). Another way might be to assign quasi-Gödel numbers – Quodel numbers – to English sentences. E.g., as a first pass, let each letter of the alphabet be assigned a number 1-26, let space be 27 and period 28, and assign two-digit numbers to a few other standard English punctuation devices, including commas, parentheses, etc. Suppose the total is 40 symbols. Then each sentence can be designated by a unique base-40 Quodel number. “The cat is on the mat.” has 22 characters, including the final period, and so would have a 22-digit base-40 numeral. Then the tokening rule would be “Token sentence 20.08.05.27.03.01.20.27.09.19.27.15.14.27.20.08.05.27.13.01.20.28 only if the cat is on the mat.” This way of putting the rule loses the appearance of redundancy – it links tokens of a specific sentence type to a condition in the world. Adherence to the tokening rule makes it possible for the sentence to carry information – to mean something.
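A minimal sketch in Python of the encoding just described may help (the function name is invented, and only letters, space, and period – codes 1 through 28 of the 40 – are handled):

    def quodel(sentence):
        # Map each character to its assumed code: a-z = 1-26, space = 27, period = 28.
        # The remaining codes up to 40 (commas, parentheses, etc.) are omitted here.
        codes = []
        for ch in sentence.lower():
            if 'a' <= ch <= 'z':
                codes.append(ord(ch) - ord('a') + 1)
            elif ch == ' ':
                codes.append(27)
            elif ch == '.':
                codes.append(28)
            else:
                raise ValueError('no code assigned to %r in this sketch' % ch)
        # Each code is one digit of the base-40 numeral, written as a two-digit pair.
        return '.'.join('%02d' % c for c in codes)

    print(quodel('The cat is on the mat.'))
    # prints: 20.08.05.27.03.01.20.27.09.19.27.15.14.27.20.08.05.27.13.01.20.28

The printed numeral matches the one used in the tokening rule above.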
The link between conventional meaning and natural meaning, carrying information, is intimate. When a person P adheres to the semantic tokening convention for sentence S, the result will be as if S were a natural sign of M, and the tokening of S will adhere to the natural meaning principle: that P tokens S will mean that M. This is no accident. Tokens of indicative sentences are supposed to indicate a state of affairs. Let us say that such an artificial sign S governed by the rule that it be tokened only if state of affairs M obtains represents the state of affairs M, the state it is supposed to indicate or naturally mean.
A sentence S can represent M whether or not a tokening of S actually means (in the sense of natural meaning) that M. The natural meaning principle requires that if a tokening of S means that M, then M. The conventional rules of the language are meant to constrain the production of tokens of sentences to certain circumstances, circumstances that differ as the sentences differ. But like all rules, these conventional rules can be broken. Users of language can mis-represent – they can token a sentence that is supposed to mean that M – one that represents, or conventionally means, that M – when M is not the case. The semantic rules are normative, and confer significance by the way they would constrain tokenings, whether or not the constraint is honored.
There are two common ways in which speakers mis-represent. They can unwittingly violate the tokening rule – they make a mistake. I say “there’s a beer in the fridge” when I have forgotten that we drank the last one yesterday, or because at my last glance in the fridge I mistook root beer for beer. Such false positives are typical of animal calls as well – my dog may bark “at nothing”; a kite or a shadow may elicit the overhead predator call from prairie dogs. The second way in which rules are violated is intentional – lying. I token “there’s a beer in the fridge” knowing there is none, in order to deceive. This deception is possible, of course, only because of our shared knowledge of the semantic tokening rule, and our unshared knowledge that I am violating it.
Finally, there are other cases of tokening rule violation, such as play-acting, quoting, sarcasm, etc., where the rules are suspended by mutual agreement. Again, all these important violations depend on the existence of the conventional semantic tokening rules for their effect. Like all play, word play can occur only against a background of serious uses.
SECOND ORDER RULES
Each sentence is governed by a tokening rule. But there are unlimited numbers of sentences. This presents an epistemic bioengineering problem. How can a finite system learn to understand and produce a potential infinity of signs, each governed by distinct tokening rules? (Generally distinct rules – there can be synonymous sentences, as we shall discuss.)
The ingenious evolutionary solution is to use rules that can be generated on the fly, and a system that allows tokens to wear their rules more or less on their syntactic sleeves. At the heart of language is a link between the structure of a sentence and the semantic tokening rule that in turn forges the link between that very sentence and the world. The systematic link between tokening rules and the sentences they govern is given by rules for the construction of the tokening rules. A rule for making rules is a second order rule; thus second order rules lie at the heart of language. Knowledge of these second order rules allows a language user to produce a potential infinity of sentences from a base list of a few thousand words. The same knowledge allows language consumers to know, from the structure of a sentence, the information the sentence is supposed to convey – that is, the constraint that should govern the tokening of a sentence with that structure. This is the key to how language works.
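The idea can be made concrete with a toy sketch in Python (the function name is invented, and the homophonic statement of the condition is an assumption for illustration). The function is a second order rule: given any indicative sentence, it generates the first order rule that governs tokenings of that sentence.

    def make_tokening_rule(sentence):
        # Second order rule: for any indicative sentence S, produce the
        # first order rule "Token S only if M". Here the condition M is
        # stated homophonically, by reusing the sentence itself.
        condition = sentence.rstrip('.')
        return 'Token "%s" only if %s.' % (sentence, condition[0].lower() + condition[1:])

    print(make_tokening_rule("There's a beer in the fridge."))
    # prints: Token "There's a beer in the fridge." only if there's a beer in the fridge.

One finite function yields a distinct first order rule for each of the unlimited number of sentences – a finite source for a potential infinity of rules.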
Stephen Land notes in his study of the evolution of thinking about language in the 18th century [From Signs to Propositions: The Concept of Form in Eighteenth-Century Semantic Theory, London: Longman Group Ltd., 1974] that early Enlightenment thinkers were preoccupied with individual words, which were conceived as standing for ideas, which in turn were often conceived as mental images. This may be natural, but as we have seen it led them seriously astray from the true nature of language. The link between language and the world, the tokening rules, does not lie at the level of the individual word. Complete sentences are the bearers of information, and so they are what are governed by semantic rules. But these rules are the product of other rules – language is much more complex than a set of labels for mental images. The hard-won victory in the 20th century – Wittgenstein, Kripke, Putnam, and others – is that what is going on in the speaker’s head does not determine the correctness of tokened sentences.
The most familiar second order rules are procedural – for example, the rules that govern legislatures, bodies which in turn produce the first order rules constituting the legal codes that govern ordinary individual behavior. Contract law allows individuals to create new rules (contracts) that bind the actions of specified persons, including fictional persons, that is, corporations. Second order rules are a feature of some games, such as role-playing games, in which the rules that govern players’ possible moves are determined by higher order rules that modify those possibilities based on the history of play. A web page’s HTML is a set of rules about how that page should appear in a browser, and for pages generated on the fly there are higher order rules – the generating scripts or templates – that produce those display rules. Second order rules also play a role in computer programming, as in artificial intelligence, where programs learn, or generate hypotheses that guide future behavior. It would not be surprising if what is required for artificial intelligence were required by natural intelligence as well.
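The web-page case can be put in miniature (a sketch; the function and names are invented). The HTML string is a first order display rule; the Python function that generates it on the fly is the higher order rule:

    def page_rule_for(user):
        # Higher order rule: generates, on the fly, the first order display
        # rules (HTML) that govern how this particular page should appear.
        return '<html><body><h1>Hello, %s!</h1></body></html>' % user

    print(page_rule_for('Hans'))
    # prints: <html><body><h1>Hello, Hans!</h1></body></html>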
The rules that govern constitutional conventions are third order. These rules determine the procedure by which the second order rules will be determined. The resulting Constitution is second order, in that its clauses create rule-making powers and structure the legislative and other rule-making bodies. Some second order legal rules specify not just the procedure for making legal rules, but set constraints on the contents of first order rules – for example, delimited rights, such as, in the case of the United States, freedom from ex post facto laws, or a rule that Congress may make no law (= rule) abridging freedom of religion.
The rules that govern language are similar to these Constitutional content constraints, but go much further in specifying the form of the rule. They are not just rules that permit and constrain rule-making generally (“Congress shall make no law abridging the freedom of speech….”); their reason for being is to provide a much more specific constraint on tokening. As I have argued, it is these constraints that give linguistic tokenings their content or meaning.
Let us look at examples of some rules for German, with English as the metalanguage:
Token "Es regnet" only if it is raining.
Token "Schnee ist Weis" only if snow is white.
Token “Bier ist Weis” only if beer is white.
Token "Hans ist Tot" only if the referent of "Hans" is dead.
The latter three have something in common. They are products of a general second order rule for German sentences:
Each substitution instance of the following is a tokening rule: "Token 'X ist Y' only if the referent of 'X' has the property denoted by 'Y'"
We can provide an extensional alternative form of the rule:
All instances of the following are semantic tokening rules: "Token 'X ist Y' only if the extension of the instance of 'X' lies within the extension of the instance of 'Y'".
[It appears that this particular rule will apply to identities – "Hans ist Kurt" – as well as to more general predications; however, identities need additional rules to ensure that they are tokened only when the familiar laws of identity obtain – especially symmetry and the indiscernibility of identicals. The extensional rule above provides only reflexivity and transitivity.]
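To see the extensional second order rule at work, here is a toy sketch in Python (the extensions assigned below are invented for illustration). The checker licenses a tokening of 'X ist Y' just in case the extension of the instance of 'X' lies within the extension of the instance of 'Y':

    # Toy extensions, assumed for illustration only.
    extension = {
        'Schnee': {'snow'},
        'Bier':   {'beer'},
        'Hans':   {'hans'},
        'weiß':   {'snow', 'milk', 'chalk'},
        'tot':    set(),
    }

    def may_token(x, y):
        # First order rule generated by the second order rule:
        # token 'X ist Y' only if ext(X) is a subset of ext(Y).
        return extension[x] <= extension[y]

    print(may_token('Schnee', 'weiß'))  # True: the tokening is permitted
    print(may_token('Bier', 'weiß'))    # False: the tokening would misrepresent
    print(may_token('Hans', 'tot'))     # False: Hans, happily, is not dead

The same subset test, applied to two names with singleton extensions, licenses an identity such as "Hans ist Kurt" only when the names share their referent.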
Obvious additional tokening rules will cover a subset of natural language equivalent in representational power to first order predicate logic with identity. Insofar as the semantics of modality and of deontic and epistemic logics are understood, second order tokening rules can be formulated for these additions as well. Extending the rules to include event quantification – and with it tense, and possibly the important class of predicate-modifying adverbs – will provide a very powerful core of a language that can represent a host of states of the world, including the world’s situation among possible worlds.
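For instance, two plausible second order rules for quantified German sentences, offered in the same style as the rules above (the choice of constructions is illustrative): All instances of the following are semantic tokening rules: "Token 'Alle X sind Y' only if the extension of the instance of 'X' lies wholly within the extension of the instance of 'Y'"; and likewise: "Token 'Ein X ist Y' only if the extensions of the instances of 'X' and 'Y' share at least one member". These mirror the familiar truth conditions for universal and existential quantification.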
To abide by the first order tokening rules produced by the second order rules, a language user must then have the capacity to determine extensions for terms in his or her language. In general, perception endows us with the capacity to determine extensions. Perception is non-linguistic, and presumably often involves modeling the world in a medium other than natural language. But it would be a genetic fallacy to conflate how we come to be able to use rules with what gives them their content. The content of signs lies in the character of the normative constraints that link the signs to the world, not in what must take place in the heads of speakers in order for them to comply with those constraints. This is true of instruments generally – even something as simple as my digital thermometer has internal precursors of the digital display - but the digits represent temperatures, not electrical states of the device. The correlation between display and external state is the basis for gauging the accuracy and proper performance of the thermometer. So it is with language, as we shall see when we turn later to consider the nature of truth.
Is an explicit representation of the second order extensional rule needed for a system to act in accord with such a metarule? I doubt it, but much depends on what constitutes an explicit representation of a rule. It is clear that a language-using system must be structured so that its tokenings comply with the constraints imposed by the rules generated by this second order rule. Language users (bees, children, lay persons generally) need not be able to make explicit the constraints that govern their linguistic behavior. It also appears that they need not even have an explicit internal linguistic representation of the constraining rules, accessible or not. To take a parallel case, in a mechanical adding machine the rules for addition are (arguably) not explicitly represented, but the system is built in such a way that it adheres to them. A system that can come to produce explicit representations of the rules that govern its behavior, in language as in other activities, must acquire additional powers of reflective understanding, but (fortunately) these do not appear to be essential to using a natural language.
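The adding machine point can be made vivid with a sketch in Python (a toy, not a claim about any actual machine): an adder wired out of bit-level operations. Nothing in it states or consults a rule of arithmetic, yet its behavior complies with the addition rules.

    def full_adder(a, b, carry):
        # Hardwired gate logic: XOR and AND/OR. No rule of addition is
        # represented anywhere; the parts are simply wired this way.
        s = a ^ b ^ carry
        carry_out = (a & b) | (carry & (a ^ b))
        return s, carry_out

    def add(x, y, width=8):
        # Ripple-carry over the bits of two non-negative integers.
        carry, total = 0, 0
        for i in range(width):
            bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            total |= bit << i
        return total

    print(add(19, 23))  # prints 42: conduct in accord with a rule, without a representation of it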