G61.1310 Introductory Syntax Lectures - Mark R. Baltin
Lecture #1 - Preliminaries
This course is about syntax: the principles by which words combine to form sentences. The study of syntax tries to answer two main questions:
(i) what are the principles of syntax for a particular language?
(ii) what are the principles of syntax for any human language?
The study of syntax is a branch of the field of linguistics, which has as its main goal a characterization of human language. As such, linguistics can be distinguished from the field of semiotics, which studies the properties of symbolic systems in general. For example, the system of traffic lights in the U.S. is a sort of symbolic system. There are essentially three symbols: red light, yellow light, and green light. We call these three states of a traffic light symbols because each condition symbolizes a different meaning: a red light signals that the approaching traveler is to stop, and not cross the intersection; a yellow light indicates that the approaching traveler is to stop before reaching the intersection because the light is about to turn red; and a green light indicates that the approaching traveler is free to cross the intersection.
We can say that the system of traffic lights has a grammar, which can be defined as a specification of the possible expressions in the symbolic system, together with a pairing of expressions with meanings. In this case, the grammar of traffic lights has three expressions, and three pairings with meanings.
In this case, the grammar of traffic lights is extremely simple. It can be presented as follows:
(1) [[Green]] -----> Go
    [[Yellow]] -----> Stop if before the intersection
    [[Red]] -----> Stop
The pairing in (1) is a specific type of relation in mathematics, known as a function. A function is a pairing of elements from two sets, such that each element in the first set is paired with no more than one element in the second set. If the first set has elements that are not paired with any element in the second set, the function is said to be a partial function. If every element in the first set is paired, the function is said to be a total function. Let us assume that the function that pairs the expressions of a natural language, such as English, Chinese, Welsh, Papago, etc., with their meanings is a total function.
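To make the function terminology concrete, here is a minimal sketch in Python (my own illustration; the names are invented, not part of any standard notation) that models the pairing in (1) as a function from expressions to meanings, and shows how the pairing would become partial if an expression were left unpaired:

# The grammar of traffic-lightese, modeled as a function that pairs
# each expression (a light state) with exactly one meaning, as in (1).
MEANINGS = {
    "green": "go",
    "yellow": "stop if before the intersection",
    "red": "stop",
}

def interpret(expression):
    # Over the three-expression set {green, yellow, red}, this is a
    # total function: every expression is paired with a meaning.  If
    # the set of expressions grew (say, a hypothetical "blue" light)
    # without MEANINGS being extended, the function would be partial:
    # "blue" would be paired with no meaning at all.
    if expression not in MEANINGS:
        raise KeyError(f"{expression!r} is paired with no meaning")
    return MEANINGS[expression]

print(interpret("green"))   # -> go
print(interpret("red"))     # -> stop

Note that no expression is ever paired with two meanings; that is what makes the pairing a function rather than an arbitrary relation.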
Question: What would it mean for the function that pairs the expressions of, e.g., English with their meanings to be a partial function?
I. Can the Grammar of English Be Described as Easily as the Grammar of Traffic Lights?
The grammar of traffic lights has some noteworthy features that are useful to bear in mind when thinking about human language. For one thing, you can count the number of sentences that the grammar of traffic lights allows: there are three. For another, we cannot really say that the grammar of traffic lights has a syntax. It has a list of "words" (red, green, and yellow), but each of these words comprises a complete expression in "traffic-lightese", and none of the expressions can be combined with any other expression.
To introduce some jargon, we would say that (i) traffic-lightese is finite, and that (ii) the sentences of traffic-lightese are bounded in length. When we say that a language is finite, we mean that there is a fixed number of expressions in the language. When we say that the sentences of the language are bounded in length, we mean that we can specify an upper limit on how long the sentences of the language can be.
And to complete the circle, you can see what we mean by the term language. A language is simply the set of sentences that a grammar generates.
To learn traffic-lightese (necessary, possibly, although it's unclear if you live in New York City), you simply had to memorize the "sentences" of traffic-lightese and learn the function given in (1) that pairs each sentence with its meaning. Could you memorize the individual sentences of English the way that you memorized the sentences of traffic-lightese?
Consider the following lines, adapted from Lewis Carroll's poem "Jabberwocky":
(2) The blithy toves did gyre and gimble.
(3) The blithy toves karulized elatically.
Sequences of elements in a language are called strings. The reaction of native speakers of English to the strings in (2) and (3) is rather interesting. The strings are recognized as being English-like, so that these strings are felt to be sentences of English, even though the words in these "sentences" have never been encountered before. We have a feeling that "toves" is a plural (i.e., a form denoting more than one) of "tove", which is what we learned in school as a noun (we will soon learn what the basis of notions like noun and verb is). Furthermore, even though we have never encountered the "words" "gyre", "gimble", and "karulized", we perceive them to be verbs, and we perceive "karulized" to be the past tense of "karulize". Finally, we recognize "elatically" as an adverb.
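One way to picture part of what we just did is as a set of morphological cues: the shape of a word alone suggests a category. The following Python sketch (the suffix rules are my own rough approximations, not a serious morphology of English) guesses a category for an unfamiliar word from its ending; position relative to familiar words such as "did" and "and" does further work, as the deformation experiment below shows:

# Guessing a grammatical category from word shape alone.  These
# suffix rules are illustrative approximations, not real English
# morphology, and they ignore the word's position in the sentence.
def guess_category(word):
    if word.endswith("ly"):     # elatically
        return "adverb"
    if word.endswith("ed"):     # karulized
        return "verb (past tense)"
    if word.endswith("s"):      # toves
        return "noun (plural)"
    return "unknown"

for w in ["toves", "karulized", "elatically", "gyre"]:
    print(w, "->", guess_category(w))
# toves -> noun (plural)
# karulized -> verb (past tense)
# elatically -> adverb
# gyre -> unknown

The last line is the interesting one: shape alone tells us nothing about "gyre"; we perceive it as a verb only because of where it sits in the string.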
The way in which we deal with strings such as (2) and (3) illustrates an important difference between English and other languages that are termed natural languages, on the one hand, and traffic-lightese, on the other. That difference has been termed linguistic creativity: the ability to produce and understand strings of a language that have never been previously encountered. Traffic-lightese is a language with a fixed number of sentences, what is termed a finite language. Natural languages such as English, on the other hand, are infinite languages, in the sense that there is an infinite number of sentences in each natural language. What is the source of this infinity?
Well, for one thing, the words of English seem to be grouped into classes, so that we can recognize new words coming in as members of these classes. Unlike traffic-lightese, in which there is a fixed number of words (three, to be precise), natural languages have an unlimited number of words that simply have to be fitted into word-classes. The traditional grammar term for a word-class is part-of-speech; we will term a word-class a grammatical category. We will soon be examining the basis for the notion of a grammatical category, and contrasting two views of grammatical categories: the notional view, in which each grammatical category has a particular meaning, and the distributional view, in which each grammatical category has a unique distribution. The Jabberwocky example bears on the comparison of these two views. Let's see why.
The reason that the Jabberwocky example is so striking is that the words are nonsense words. We’ve never encountered them before, so we can’t possibly know what they mean. Nevertheless, we feel that (2) and (3) are English sentences with unfamiliar words. The basis for our feeling is that the words are in the right places for words of the appropriate word-classes (let’s call them grammatical categories from now on). To see this more carefully, let’s systematically deform, for example, (2), and see if, at each stage of the deformation, we still have the feeling that the string of words is an English sentence.
Let's start by removing the [-s] from the example in (2), and see if the [-s]'s removal changes our perceptions of the status of the string:
(2)’ The blithy tove did gyre and gimble.
(2)' does seem to be English: it's talking about a single tove, who performed a compound action in the past of gyring and gimbling. Now, let us remove "did":
?(2)’’ The blithy tove gyre and gimble.
This has a somewhat shakier status as an English sentence, and the sense that I have gotten in the past, when I've performed this experiment in classes, is that speakers of English are split. Some people find this sentence to be non-English, while others find it to be English if tove is taken to be an irregular plural of some sort, like children or cattle. Removing the word "the" makes the string still harder to recognize as English:
?(2)''' Blithy tove gyre and gimble.
Finally, removing "and" causes the sequence of words to be felt by all speakers as being simply a string of words, with the character of a list:
(2)'''' *Blithy tove gyre gimble.
An asterisk before a set of words is taken by convention to mean that the sequence of words is an ungrammatical sentence.
Let us stop for a minute and think about how we dealt with this example. We couldn't have known the words. Rather, we took some words that we knew (and parts of words, such as the [-s]), and figured out details about the unfamiliar words from how they were positioned with respect to the familiar parts of English. In this sense, the distributional account of what grammatical categories are seems to fare better than the notional account. We had to figure out what kind of structure to assign the string based on the sequencing of its parts, looking at the unfamiliar parts and seeing where they were relative to the familiar ones.
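We can mimic the deformation experiment mechanically. The following toy recognizer (my own deliberately oversimplified construction, not a real grammar of English) knows only the closed-class words "the", "did", and "and"; every other word counts as an open-class item whose category is fixed purely by its position in a template. It reproduces, roughly, the judgments reported above:

# A toy distributional recognizer for the deformation experiment.
# Closed-class words must match literally; OPEN slots accept any
# unfamiliar open-class word, whatever it means.
CLOSED = {"the", "did", "and"}
OPEN = None

GOOD = [
    ("the", OPEN, OPEN, "did", OPEN, "and", OPEN),  # (2), (2)'
]
MARGINAL = [
    ("the", OPEN, OPEN, OPEN, "and", OPEN),         # (2)''
    (OPEN, OPEN, OPEN, "and", OPEN),                # (2)'''
]

def matches(tokens, template):
    if len(tokens) != len(template):
        return False
    return all(
        tok == slot if slot is not OPEN else tok not in CLOSED
        for tok, slot in zip(tokens, template)
    )

def judge(sentence):
    tokens = sentence.lower().rstrip(".").split()
    if any(matches(tokens, t) for t in GOOD):
        return "OK"
    if any(matches(tokens, t) for t in MARGINAL):
        return "?"
    return "*"

for s in ["The blithy toves did gyre and gimble",
          "The blithy tove gyre and gimble",
          "Blithy tove gyre and gimble",
          "Blithy tove gyre gimble"]:
    print(judge(s), s)
# OK The blithy toves did gyre and gimble
# ? The blithy tove gyre and gimble
# ? Blithy tove gyre and gimble
# * Blithy tove gyre gimble

The point of the sketch is just this: the recognizer never consults the meanings of the open-class words, yet it sorts the strings roughly the way speakers do. That is the distributional view in miniature.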
It is important to see what we’ve just done. We’ve taken two a priori plausible views of what a grammatical category is, and we’ve tested them by seeing what predictions they each make about phenomena in the part of the world that we’re investigating (i.e., sentences).
In any event, we've seen one reason for the "open-endedness" of a language such as English, as opposed to traffic-lightese, and that is the fact that natural languages (human languages, for our purposes) have a syntax: a set of rules for arranging elements into more complex units of language (i.e., words into sentences). Traffic-lightese does not have these principles: every word is a complete sentence, and there are no principles for stringing words together to form more complex sentences.
As we’ll see very shortly, there are two ways in which natural languages are infinite, meaning that there’s an infinity of sentences in the language. We have seen the first way, in which sentences are said to be made up of members of grammatical categories, and new words can enter the language to instantiate these grammatical categories.
A second way in which natural languages are infinite is that there is no specifiable bound on the length of a sentence in a natural language, as opposed to traffic-lightese, in which you can specify the length of each sentence (because each sentence is composed of one word, and there are no procedures in traffic-lightese for combining words). To see this, consider the following:
(4) a. The teacher left.
b. The teacher’s mother left.
c. The teacher’s mother’s friend left.
d. The teacher’s mother’s friend’s sister left.
e. The teacher’s mother’s friend’s sister’s boss left.
f. The teacher’s mother’s friend’s sister’s boss’s mother left.
I could have kept going with this type of example, and the sentence would have gotten continually longer. In English, as indeed in all natural languages, the grammar must contain methods or devices to create sentences of any length. Obviously, for a sentence to be a sentence of a language, it must stop at some point, but the grammar of English must allow that point to be of any conceivable length. In technical parlance, the grammar of English must generate an infinite language.
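The recursive possessive pattern in (4) is easy to state as a procedure: a noun phrase may be rebuilt as that same noun phrase plus ['s] plus another noun, without limit. Here is a Python sketch of that rule (my own toy rendering; the noun list is arbitrary):

import itertools

# The rule NP -> NP 's N can reapply any number of times, so there is
# no longest sentence of the form in (4): the language is infinite.
NOUNS = ["mother", "friend", "sister", "boss"]

def possessive_sentence(depth):
    np = "the teacher"
    # apply the possessive rule `depth` times, cycling through NOUNS
    for noun in itertools.islice(itertools.cycle(NOUNS), depth):
        np = np + "'s " + noun
    return np.capitalize() + " left."

for d in range(4):
    print(possessive_sentence(d))
# The teacher left.
# The teacher's mother left.
# The teacher's mother's friend left.
# The teacher's mother's friend's sister left.

Since depth can be any non-negative integer, the procedure yields a sentence longer than any bound you name, which is exactly what it means for the grammar to generate an infinite language.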
II. Competence and Performance
At this point, we must step back for a minute and consider what we are trying to account for. We have been trying to account for what it means to know English (just as an example; we could have picked any language to try to account for). However, in order to get our data for English, we have relied on the intuitions of speakers of English: how speakers of English feel about the strings that are presented to them. It seems, though, that we cannot go directly from our intuitions about English to inferences about whether particular strings are in the language. The reason is that the properties of particular strings may be due to factors that are not, properly speaking, part of the language at all. For example, suppose we had continued to elaborate (4)(f) by continuing to add ['s] plus a noun, as in (4)(f)':
(4)(f)' The teacher's mother's friend's sister's boss's mother's cousin's sister's doctor's father's neighbor's daughter's friend's teacher's cousin's niece's accountant left.
This string would be felt to be unacceptable, but not, it is usually thought, because of our knowledge of English. To understand a string such as (4)(f)', and to make sense of it, we have to integrate what we know about the individual words with a structure for the whole sentence, and a run-on sentence such as this taxes our ability to remember everything that has come before when we get to the end of the sentence. There are a number of studies of memory, and one thing that we know about human memory is that it is limited (there's a classic paper by the psychologist George Miller, entitled "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information", Psychological Review (1956), that proposes a specific bound on short-term memory across a wide variety of perceptual domains).
In any event, if what is wrong with (4)(f)' is due to a memory problem in understanding the whole sentence, this problem would not be a problem with the English of the string, but rather a consequence of the fact that people put their knowledge of English to use by employing the rest of their mental resources. In other words, our knowledge of English is embedded in the rest of our capacities, such as memory, limitations on articulation (the fact that our vocal tract can do some things but not others), etc.
Chomsky, in Aspects of the Theory of Syntax (MIT Press, 1965), made a distinction between what he calls competence and performance. Competence is our knowledge of language, and performance is the set of mechanisms by which our knowledge of language is put to use. As linguists, particularly as syntacticians, we are interested in specifying competence in a particular language, rather than performance. However, since the raw data that we start with, when we try to determine what constitutes knowledge of, e.g., English, is people's intuitions about the language, we don't know what status to ascribe to those intuitions. When somebody says that a string "sounds funny", is it because of a property of English (competence), or because of some factor other than language (performance)?