Stephen van Vlack

Sookmyung Women’s University

Division of English Language and Literature

Language Acquisition

Fall 2016

Week 10 - Answers

Lightbown & Spada (2013), Chapter 2

1. How does second language learning differ from first regarding the conditions?

Thinking about the conditions of second language learning, there are two main concerns: external or environmental conditions and internal or cognitive conditions. These cannot be fully separated, of course, since one tends to affect the other. Second language learning occurs in a range of environments, from naturalistic language learning environments to formal, classroom-based ones, and often a second language learner will be exposed to both. Another aspect of the external conditions relates to age. Age can be thought of as a kind of environmental condition, since people of different ages occupy different places in society, and this provides not only different places and times for learning but also different outcomes. Internal or cognitive conditions center on two main ideas: cognitive abilities and prior knowledge.

As Lightbown & Spada (2013) point out, these different conditions determine, in a more specific way, the amount, place, and manner in which the language is learned, and for second language learning this is typically seen as being quite different from first language learning. So both the general conditions and the specific conditions that result from them are seen as being quite different. Interestingly, the more the conditions for SLA match those of first language learning, the better the outcome tends to be.

2. How do researchers study the development of a second language?

Because second language researchers started by looking at older learners, they were drawn almost exclusively to the development of language itself and not to things like cognitive development, which was deemed irrelevant to SLA. This initial exclusive attention to linguistic forms has played a large role in the way second language researchers view the development of a second language. The first real SLA theory was the Contrastive Analysis Hypothesis (Lado, 1964). This theory dealt with the condition of prior knowledge and looked at the way prior knowledge, in the form of a first language, affected second language development. From the start, then, measurements of development in the second language were based on the grammatical forms that second language learners were able to employ. The theory did not look only at grammatical patterns but also at sound patterns, with an emphasis on phonemic inventory and vocabulary development, and it really centered on the types of errors that second language users were making. From this, a long tradition developed of looking at the errors a person made as a way of determining their progress. Although Lightbown & Spada (2013) do not mention this much, the error analysis perspective was certainly a major driving force in SLA research for a long time (James, 1998). The most enduring development to come out of error analysis is the interlanguage hypothesis (Selinker, 1972).

The other big movement in SLA research was the study of developmental sequences, as derived from the learning of grammatical forms. This line of work came directly out of first language research and was used by researchers to try to show the regularity of the SLA process. The furthest development of this idea is the theory of processability (Pienemann, 1999).

3. How does vocabulary develop in a second language?

Lightbown & Spada (2013) don’t really explain carefully or clearly how vocabulary is supposed to develop in a second language. This could be because vocabulary is such a complex entity that it is very hard to cover entirely. There are also many different ways of looking at words themselves, not to mention the role they play in language.

Some terms we need to be familiar with:

Word – any single unit of form that can be linked to some sort of meaning and/or engages in specific usage patterns, i.e., shows a grammatical function. (e.g., that, fish, incense, indoctrinate)

Lexeme – a multi-word form that functions like a single word. (e.g., get up, fish monger, on your bike, a bird in the hand is worth two in the bush)

Word family – a conglomeration of words that share a base form and that are related to each other morphologically, i.e., both inflectionally and derivationally. A collection of morphologically related words that share a similar or related meaning. (e.g., dog (N.), dogs (N. Plur.), dogged (Adj.), dog (V.), dogs (V. 3rdPers.), dogged (V. PAST.), dogging (Ger.), doggedly (Adv.), doggish (Adj.), doggone (V.), doggoned (Adj.), doggy (N.))

Lemma – a collection of words that share a base form, connected through inflections only, and have a single meaning. (e.g., run, runs, ran, running)

Counting words in a language

This brings us to the question of how many words the English language has, how many words people actually use, and how many words people need to learn in order to perform in the language. The numbers can be confusing, chiefly because different researchers count in different ways: some count words, some word families, and others lemmas. In the larger corpora you can choose to search lemmatised or non-lemmatised lists. Increasingly, however, counting by word families is becoming the norm. This makes sense when we are thinking about the words people know. Since the words in a word family are all semantically related, it is relatively easy for a speaker to use many of the different words in a family on the basis of knowing the base form. The number Schmitt (2000) gives us for the English language (derived from Goulden, Nation & Read, 1990) is 54,000 word families. This is a pretty big number when we consider how many individual words are in some of the larger word families. Regarding the number of word families speakers of English actually know (as measured receptively), Schmitt (2000) says university students need about 20,000. The basic formula is that a person learning their first language typically learns 1,000 word families a year up until their 20th birthday. The process continues after that but slows down considerably. This is because, looking purely lexically, not all words are the same.
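The difference between the counting units can be made concrete with a toy sketch. In the snippet below, the lemma and word-family mappings are hand-made assumptions for illustration only (a real count would use a lemmatised corpus list); the point is just that the same ten tokens yield three different "vocabulary sizes" depending on the unit.

```python
# Toy illustration: the same ten tokens give different vocabulary-size
# counts depending on whether we count word forms, lemmas, or families.
# The mappings below are hand-made for the example, not corpus data.

tokens = ["run", "runs", "ran", "running", "runner",
          "dog", "dogs", "dogged", "doggedly", "dog"]

# A lemma groups inflected forms only, so the derived noun "runner"
# and the adverb "doggedly" stay separate from their base words.
lemma_of = {"run": "run", "runs": "run", "ran": "run", "running": "run",
            "runner": "runner",
            "dog": "dog", "dogs": "dog",
            "dogged": "dogged", "doggedly": "doggedly"}

# A word family groups inflections AND derivations around a base form,
# so everything collapses onto just "run" and "dog".
family_of = {"run": "run", "runs": "run", "ran": "run", "running": "run",
             "runner": "run",
             "dog": "dog", "dogs": "dog",
             "dogged": "dog", "doggedly": "dog"}

distinct_forms = set(tokens)
distinct_lemmas = {lemma_of[t] for t in tokens}
distinct_families = {family_of[t] for t in tokens}

print(len(distinct_forms))     # 9 distinct word forms
print(len(distinct_lemmas))    # 5 lemmas
print(len(distinct_families))  # 2 word families
```

The gap between the three counts is exactly why figures like 54,000 (word families) and raw dictionary headword counts are not comparable.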

Frequency and words

We talked last week about word frequency in relation to the Vocabulary Control Movement. Looking beyond the idea that the most frequent words might be the most useful to learn simply because they occur more often, we can state that words vary dramatically in their profiles based on frequency. Simply put, the frequency at which a word is used is related to the very nature of that word. Thus, the most frequent 1,000 or 2,000 words are used in a very different way than words in the 10,000 or 20,000 frequency range. The use of a word relates to our knowledge of it. Really, we know very little about words in the 20,000 range: most of this knowledge is receptive, and the active knowledge we have of these low frequency words is quite limited. It is like this because the words themselves are quite limited in their behavior. Words at the high end of the frequency continuum behave very differently. In many ways they can be described as wild words because they show a very high level of variability. They are used a lot, in many different guises, and they are constantly changing. As a result, speakers of a language need to know a lot about these types of words.

Knowing a word

This, then, leads us to the point where we need to look carefully at word knowledge. Words, in general, are complex, and there are many different aspects to them that a language speaker needs to know. Following Nation (1990), eight different attributes of words are seen as being part of knowing a word:

The meaning

The written (graphemic) form

The spoken (phonemic) form

The grammatical behavior (subcategorization information)

Collocational patterning

The register

The associations

Frequency of use

(Translation equivalents – really, this is considered part of a word’s associations)

These different aspects of a word are seen as important in knowing how to use a word – that is, productive knowledge. To really know how to use a word well, we need the full range of word knowledge shown above. Only some of these aspects, however, are necessary for reception. For this reason reception is seen as being easier and as coming first. That is, people generally learn to recognize a word before they can use it even a little bit.
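To make the list above concrete, one can sketch a single lexical entry carrying Nation's (1990) eight aspects as a small data structure. The entry below for break is a hand-made illustration (the particular collocations, associations, and register notes are assumptions, not corpus data), and it treats reception as needing only a subset of the fields, in line with the point above.

```python
# A toy lexical entry following Nation's (1990) eight aspects of word
# knowledge. All values are illustrative assumptions, not corpus data.

entry = {
    "form_written": "break",
    "form_spoken": "/breik/",
    "meaning": "to separate into pieces, often suddenly",
    "grammar": "verb; transitive or intransitive; irregular (broke, broken)",
    "collocations": ["break a record", "break the news", "break even"],
    "register": "neutral; common in both speech and writing",
    "associations": ["smash", "crack", "repair"],
    "frequency": "very high (within the first 1,000 words of English)",
}

# Receptive knowledge is often said to require only part of the entry:
receptive_minimum = {k: entry[k]
                     for k in ("form_written", "form_spoken", "meaning")}

print(len(entry))               # 8 aspects for full productive knowledge
print(sorted(receptive_minimum))
```

Seen this way, incremental vocabulary learning amounts to gradually filling in and revising fields of such entries over a lifetime, rather than adding a word once and being done with it.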

As a result of the complexity of word knowledge (a direct result of word behavior) and the fact that some aspects are easier to learn than others, Schmitt (2000) describes word learning as necessarily incremental. This is an important realization for us as vocabulary learners and quite possibly as language teachers. Vocabulary learning is a life-long endeavor, and this means not just adding new words to our list but learning new things about the words we already know in some way. Especially the high frequency words, which we use and encounter so much and which as a result also vary so much, need to be learned and relearned. Much of this incremental learning is implicit, meaning we are not really aware that we have learned it. It is also not a linear process (moving from less to more complex or abstract), as described in Schmitt (2000: 4-5).

All these different aspects are part of one large and (hopefully) interconnected set of neural networks. Looked at inside the brain, part of the incremental learning process is making connections between the different aspects of word knowledge, and then augmenting and strengthening those connections.

Word meaning

There are many different ways in which words acquire their meaning(s) and many different aspects to word meaning. There is no one way that a given word gets its meaning. As we have seen so far in this class, different words behave differently and therefore need to be seen differently. Following the idea of variability in lexis, it is also true that different words acquire their meaning differently. Pushing this further, it should be clear that word meaning is far from the fixed kind of thing our teachers taught us. In fact, word meaning is a rather slippery fish. Words have no actual meaning outside the context of a specific example of language use. Decontextualized words only have a potential for meaning. It is only when a form is put in a linguistic and situational context that the intended meaning of a word (a lexeme, really) is made clear.

Traditionally, philosophers have talked about two types of meaning: referential meaning and sense. Modern linguistics has given us a further type: semantic features. But there are other aspects of word meaning that are often overlooked.

Here is a list of the elements of word meaning.

  1. Referential meaning
  2. Sense relations
  3. Semantic relations
  4. Semantic features
  5. Word associations
  6. Schema theory
  7. Register and valorization

Words and their referents (referential meaning)

Referential meaning seems very straightforward. It basically says that part of a word’s meaning comes from the link between the word form and its referent in the world. So, the word head is linked to any actual head in the world; in effect, all heads provide a possible referent for the word head. This is where it quickly becomes tricky. We know there are billions of human heads that are really quite similar, but what about all the animal heads that are not quite so similar to our own (a fish head, say) but are still called heads? Even plants and inanimate objects like a stone may have ‘heads’. Further afield, a problem can come to a head, and a family typically has a head as well, but those are different types of ‘head’ and have nothing to do with the referential meaning of head. From this we can see that what seems very simple and straightforward at first quickly unravels into something mind-numbingly complex and seemingly unsystematic.

One way of dealing with the giant mess that is referential meaning is to look at lexemes (words) not as individual units but as categories. So, a word is not just one word; it is a whole collection of words that are bound together into a category. For example, the word head is really a category called HEAD, and this category is composed of all the words that share at least some of the (conceptual) features of head. Words that share the same conceptual base are all part of the same category. Concepts can be thought of as encompassing the generalized meanings of words. They are more fixed and are generally represented by the kinds of definitions we find in dictionaries. So, the definition you find for a word in the dictionary is not its meaning so much as its concept. The two are not the same: meaning emerges out of the more generalized concept.

There are, of course, varying ways of looking at concepts. As discussed above, we can see a concept as a category. The Aristotelian definition sees a category as the fixed conglomeration of all its varying component parts, where these parts are all directly related to the concept. A more modern view, presented by Lakoff (1986), is that there is not necessarily a clear relation between the parts of a category. He sees categories as radial, that is, as having indirectly connected elements that are always in a state of flux. Either way, it should be clear that the concept is NOT the meaning of a word.

One other way researchers have come up with to try to explain how categories and concepts work in the real world is prototype theory (Rosch, 1973). For each lexeme, and even more so for each category, there is a central or more basic example of the concept, and all other related examples (referents) are tied to and extend out from that central or prototypical referent. The entire concept is created from a collection of properties of the referents that the concept contains (Aristotle’s definition). Following this, the boundaries between the concepts that underlie lexemes are not neat and clean, but rather fuzzy. Take the lexeme WHALE, for example. A whale is a mammal, but it is far from the prototypical ideal of MAMMAL, which is a cute, furry, land-based animal like a puppy. WHALE is necessarily included in the set of mammals, but at the very edge, and definitely the edge that extends out toward the lexeme FISH. Here the boundaries become fuzzy. If you don’t understand this, go to your computer and open up any kind of graphics or photo program. In it there should be a color chart. Look at the chart and try to figure out the point where one color begins and another ends. It’s next to impossible. Then try to find the prototypes of the colors. That should be easy. Have fun with it.

Referential meaning, as we have seen, is tricky and much more complicated than it would appear. It works better with concrete words, especially nouns, and with subordinate rather than superordinate words.

Sense Relations

As mentioned in class, sense relations are connections formed on the basis of cultural associations. This is also known as connotational meaning. Based on their patterns of use in the real world (embedded in a particular culture), words are seen as being linked to certain types of use. For example, the word moronic has a distinctly negative connotation that is quite different from that of its synonym intellectually-challenged; they are used to describe different things in different contexts. In essence, sense relations are based on how we happen to use words, and for more concrete objects this also relates to the spaces they happen to occupy in the world as well as to shared, real-world-based attributes. So, the word piano may be linked to the word black, since pianos are often black. The word boy could be linked to nasty while girl could be linked to sweet, based on one’s experiences or thoughts.

Semantic Relations

Semantic relations are often described through the use of the entailment condition. Through the careful use of the concept of entailment, researchers are able to show and relate a wide variety of different relationships between words, and it is from these relationships that part of a word’s meaning comes. From this we can derive such lexical relationships as hyponymy, meronymy, cognitive synonymy, antonymy, and complementaries. The idea that allows us to do this is the belief that these relationships between words seem to be realized through connections within the brain. All proficient speakers of a language are inherently aware of these relationships and use this knowledge to help them figure out which word to use in which situation. Word use is context-driven.

Entailment is a semantic relationship between two words or sentences such that if the first is true, the second must be true (e.g., if it is true that something is a ‘couch’, then it is also true that it is a ‘piece of furniture’; thus couch entails furniture) (Language Files, The Ohio State University, 1994, p. 457).

The relationship of hyponymy relates superordinates and subordinates; for example, ‘house’ is a hyponym of ‘dwelling’. Here ‘house’ is subordinate to ‘dwelling’. In all languages such words are arranged into patterns of hyponymy called a taxonomy. The shape of taxonomies necessarily changes across language and cultural boundaries. We would, therefore, not expect all speakers of the same language to have exactly the same taxonomic structure in their minds.

Meronymy is a relationship between parts and wholes, such as a door being part of a house.

Then, of course, there is cognitive synonymy. The authors chose to call this cognitive because there is no such thing as a perfect synonym in any language. Even if the meaning seems to be the same and one word entails the other, the use of such words is different and restricted.

Antonyms are words related by entailment with the negative NOT, and the relationship is one-directional. ‘X is stupid’ entails ‘X is not smart’ works quite well, but ‘X is not stupid’ entails ‘X is smart’ does not work: not everyone who is not stupid is smart. Words for which the relationship is two-directional (where the negative can go on either side of the equation) are called complementaries.
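The one-directional nature of entailment in a taxonomy can be sketched as a tiny lookup structure. The hand-made hypernym links below are assumptions chosen to mirror the couch/furniture and whale/mammal examples, not a real lexical database; the point is that walking the taxonomy upward succeeds while the reverse direction fails.

```python
# A minimal sketch of hyponymy as one-directional entailment. Each word
# points to its immediate superordinate (hypernym); the links are toy
# assumptions for illustration, not entries from a lexical resource.

hypernym_of = {
    "couch": "furniture",
    "house": "dwelling",
    "dwelling": "building",
    "whale": "mammal",
    "mammal": "animal",
}

def entails(word, candidate):
    """True if `word` is a hyponym of `candidate` (walks up the taxonomy)."""
    while word in hypernym_of:
        word = hypernym_of[word]
        if word == candidate:
            return True
    return False

print(entails("couch", "furniture"))  # True: every couch is furniture
print(entails("furniture", "couch"))  # False: entailment is one-directional
print(entails("whale", "animal"))     # True, indirectly via "mammal"
```

A complementarity check would need a different, symmetric structure, since there the negation can sit on either side of the equation.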