Richard Hudson, 1990, English Word Grammar (Blackwell)

Chapter 4

LINGUISTIC AND NON-LINGUISTIC CONCEPTS

4.1 GENERAL

When language is studied as a mental phenomenon, as it is in WG (Word Grammar), the question arises of how it is related to other kinds of mental phenomena. What is the relation between what we know about words, and what we know about people, places, the weather, social behaviour (other than speaking), and so on? It goes without saying that there are ways of passing between the two kinds of knowledge, when understanding and producing speech; so there must be some kind of connection between what we know, for instance, about the word CAT and whatever encyclopedic facts we know about cats. If there were no such connection, it would not be possible to infer that a thing that is referred to by CAT is likely to have whiskers, to like milk, and so on. This much is agreed. Beyond this there is considerable disagreement, regarding both the nature of these connections and the general properties of language compared with other types of knowledge.

On the one hand, there are those who are convinced that language is unique, both in its structure and in the way we process it. This view is particularly closely associated with Fodor (1983), who believes that language constitutes an 'input module' which converts the speech that we hear into a representation in terms of an abstract conceptual system (the 'language of thought') to which quite different types of mental processes can be applied in order to derive information. The linguistic system itself is not formulated in terms of the 'language of thought', nor is it processed by the same inference machinery. A similar view, though regarding knowledge rather than processing, has for long been espoused by Chomsky, and is widely accepted by his supporters. The introductory remarks to Newmeyer (1988:4) are typical:

Much of the evidence for [the popularity of generative grammar] must be credited to the massive evidence that has accumulated in the past decade for the idea of an autonomous linguistic competence, that is, for the existence of a grammatical system whose primitive terms and principles are not artifacts of a system that encompasses both human language and other human faculties or abilities.

These ideas of Chomsky's and Fodor's represent a particularly clear version of a view which is in fact typical of twentieth-century structural linguistics. I think it would be fair to say that most linguists have emphasized the distinctness of the linguistic system, and have consequently assumed that the system has boundaries, including one boundary, at the 'signal' side, between phonetics and phonology, and another at the 'message' side, between semantics and pragmatics (or perhaps more helpfully, between linguistic and encyclopedic information). I think it is also true that linguists who have assumed the existence of these boundaries have still not been able to produce satisfactory criteria for locating them, so they remain a source of problems rather than of explanations.

The two boundaries just mentioned are not the only ones that ought to exist if language is distinct, but which are problematic; the same is true, for example, when we consider knowledge about matters of style or other sociolinguistic matters (e.g. the difference between TRY and ATTEMPT, or between PRETTY and BONNY). Are the facts about these differences part of competence or not? Some linguists are quite sure that they are not (e.g. Smith and Wilson 1979: 37), but this is only because they can see that such knowledge is like other kinds of social knowledge, in contrast with linguistic competence which they assume to be unique. Similar problems arise in the uncertain border area between syntax and discourse (e.g. are the two clauses of I'll tell you one thing: I won't do that again in a hurry! put together by rules of grammar or by pragmatic principles that govern discourse?).

The alternative to the dominant idea of structuralism is the view that language is a particular case of more general types of knowledge, which shares the properties of other types while also having some distinct properties of its own. The most obvious of these properties is that knowledge of language is knowledge about words, but this is just a matter of definition: if some bit of knowledge was not about words, we should not call it 'knowledge of language'. However, it is a very interesting research question whether there are any properties which correlate with this one, without following logically from it. It is certain that in some respects knowledge of language is unique, in that it seems to draw on a vocabulary of analytical categories that do not apply outside language: categories like 'preposition', 'direct object', or 'affix'. However, these differences do not support a general claim that language is a unique system, in the sense of the view described above, because they coexist with a great number of similarities between linguistic and other kinds of knowledge.

The similarities between language and nonlanguage have been emphasized in a number of linguistic traditions. One of the best known is Pike's attempt (1967) to produce a 'Unified Theory of the Structure of Human Behavior'. It is true that this theory was about behaviour rather than about thought, so it did not include any subtheories about how experience is processed, but at least the structures proposed could be taken as models for human thought about human behaviour. More explicitly mental are the theories which I referred to in chapter 1 under the slogan 'cognitivism', all of which emphasize similarities rather than differences between linguistic and nonlinguistic knowledge. As I explained there, WG is part of that tradition.

The reason why the choice between these two approaches is important is that it affects the kind of explanation which is available for facts about language. If language is a particular case of more general mental phenomena, then language is as it is because thought in general is as it is; so the best explanation for some general fact F about language is that F is a particular case of some more general fact still which is true of thought in general. In the other tradition such explanations are obviously excluded in principle, which leaves only two possible kinds of explanation. One is the functional kind: F is true of language because of the special circumstances under which language is used and the functions it has to fulfil. I have no quarrel with functional explanations, but at least some such explanations are sufficiently general to apply to systems other than language. The other kind of explanation is in terms of arbitrary genetic programming; but since the genes are an inscrutable 'black box' this is no explanation at all.

How can we decide between these two families of theories? No doubt the choice is made by some individuals on an a priori basis, but it should be possible to investigate it empirically. As far as language structure is concerned, what similarities are there between the structures found in language and those found outside language? And for language processing similar questions can be asked.

The trouble is, of course, that one first has to develop, or choose, a theory of language, and the answer will vary dramatically according to which theory one takes as the basis of the comparison: one theory makes language look very different from anything one can imagine outside language, while another makes it look very similar. Theories can be distinguished empirically and it would be comforting to believe that the facts of language, on their own, will eventually eliminate all but one linguistic theory, independently of the comparison between linguistic and nonlinguistic knowledge. This may in fact be true, but meanwhile we are left with a plethora of theoretical alternatives and no easy empirical means for choosing among them. To make matters worse, theories are different because they rest on different premises, and one of the most important of these differences is precisely the question we want to investigate: how much similarity is assumed between language and nonlanguage. It is clear, then, that the particular theory of language that we choose is crucial if we want to explore the relations between language and nonlanguage.

The most promising research strategy seems to be one which aims directly at throwing light on the relations between linguistic and nonlinguistic knowledge. The question is whether the two kinds of knowledge are in fact similar, so we need to find out whether it is possible to construct a single overarching theory capable of accommodating both. What this means is that the linguistic theory should be developed with the explicit purpose of maximizing the similarities to nonlinguistic knowledge, while also respecting the known facts about language. The result could be a theory in which the similarities to nonlinguistic knowledge are nil, or negligible; if so, then one possible conclusion is that the research has lacked imagination, but if this can be excluded, the important conclusion is that linguistic knowledge really is unique. A more positive outcome would obviously prove the existence of important similarities between the two kinds of knowledge.

This research plan may seem obvious, but it contrasts sharply with the strategy underlying most of linguistics. As mentioned earlier, most linguists take it for granted that language is unique, so they feel entitled to develop theories of language structure which would be hard to apply to anything other than language, classic examples being transformational grammar and the X-bar theory of syntax. This is a great pity, because the work is often of very high quality, when judged in its own terms. However good it may be, such work actually tells us nothing whatsoever about the relations between linguistic and nonlinguistic knowledge, because all it demonstrates is that it is possible to produce a theory of language which makes them look different. It does not prove that it is impossible to produce one according to which they are similar. Admittedly, no research ever could prove this once and for all because it is always possible that some as yet unknown theory could be developed; but the failure of serious and extensive research along the lines I have suggested would be enough to convince most of us that language really was unique.

It is for these reasons that WG maximizes the similarities between linguistic and nonlinguistic knowledge. What has emerged so far seems to support the 'cognitivist' view that language really is similar to nonlanguage, but it is possible that so far unnoticed fundamental differences will become apparent. If so, they will certainly be reported as discoveries of great importance and interest. Meanwhile I think we can confidently conclude that at least some parts of language are definitely not unique.

4.2 PROCESSING

One of the most obvious and striking similarities between language and nonlanguage is that knowledge is exploited in both cases by means of default inheritance. The principles of default inheritance outlined in chapter 3 can be illustrated equally easily in relation to linguistic and to non-linguistic data.

Let us start with a linguistic example from chapter 3, relating to some hypothetical sentence in which the first two words, w1 and w2, are respectively Smudge and either purrs or purred.

[1] type of referent of subject of PURR = cat.

[2] w2 isa PURR.

[3] subject of w2 = w1.

[4] w1 isa SMUDGE.

[5] referent of SMUDGE = Smudge.

From these five facts we can infer, by 'derivation', at least one other fact:

[6] type of Smudge = cat.

Now consider the following nonlinguistic facts, expressed in the same format but this time referring to two concepts c1 and c2.

[7] nationality of composer of prelude of 'Fidelio' = German.

[8] c2 isa 'Fidelio'.

[9] prelude of c2 = c1.

[10] c1 isa 'Leonora 3'.

[11] composer of 'Leonora 3' = Beethoven.

We can now derive fact [12], by precisely the same principles as we applied in arriving at [6].

[12] nationality of Beethoven = German.
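The derivations of [6] and [12] can be mimicked in a short sketch. This is my own illustration, not part of the WG formalism: the data structures and the treatment of composite attributes like 'type of referent of subject' as single labels are simplifying assumptions. The point it shows is only that one inheritance mechanism serves both the linguistic and the nonlinguistic facts.

```python
# Toy knowledge store: 'isa' links plus attribute facts over named concepts.
isa = {}       # instance -> its type, e.g. w2 isa PURR
facts = {}     # (attribute, concept) -> value

def get(attribute, concept):
    """Default inheritance: a fact stored directly on the concept wins;
    failing that, the fact is inherited, transitively, via 'isa'."""
    c = concept
    while c is not None:
        if (attribute, c) in facts:
            return facts[(attribute, c)]
        c = isa.get(c)
    return None

# Facts [1]-[5] about the sentence beginning 'Smudge purrs/purred'.
facts[('type of referent of subject', 'PURR')] = 'cat'   # [1]
isa['w2'] = 'PURR'                                       # [2]
facts[('subject', 'w2')] = 'w1'                          # [3]
isa['w1'] = 'SMUDGE'                                     # [4]
facts[('referent', 'SMUDGE')] = 'Smudge'                 # [5]

# Deriving [6]: w2 inherits [1] from PURR; its subject is w1, whose
# referent is inherited from SMUDGE; so the type of Smudge is 'cat'.
subject = get('subject', 'w2')                           # 'w1'
referent = get('referent', subject)                      # 'Smudge'
kind = get('type of referent of subject', 'w2')          # 'cat', via PURR
print(f'type of {referent} = {kind}')                    # type of Smudge = cat

# The nonlinguistic facts [7]-[11] run through exactly the same machinery.
facts[('nationality of composer of prelude', 'Fidelio')] = 'German'  # [7]
isa['c2'] = 'Fidelio'                                                # [8]
facts[('prelude', 'c2')] = 'c1'                                      # [9]
isa['c1'] = 'Leonora 3'                                              # [10]
facts[('composer', 'Leonora 3')] = 'Beethoven'                       # [11]
composer = get('composer', get('prelude', 'c2'))                     # 'Beethoven'
nationality = get('nationality of composer of prelude', 'c2')        # 'German'
print(f'nationality of {composer} = {nationality}')                  # fact [12]
```

Nothing in `get` knows, or needs to know, whether the concepts it is chaining through are words or operas; that indifference is exactly the similarity the text is pointing to.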

The relevance of default inheritance to both kinds of knowledge is probably sufficiently obvious to need no further comment. Another point of similarity in processing lies in the way in which new experience is linked to existing knowledge, in such a way as to allow inheritance to apply. In both cases the linkage seems to be done by the same Best Fit Principle that we discussed in section 3.6.

Though it is a major mystery precisely how the Best Fit Principle works, it seems clear that the same principles apply to linguistic and to nonlinguistic experience. Take, for example, the string of sounds which could equally well be interpreted as a realization of either of the sentences in

(1) a Let senders pay.

b Let's end his pay.

The total string of sounds is the piece of experience to be understood, but we clearly understand it cumulatively, as we receive it, rather than waiting till 'the end' (whatever that might mean). We may assume, for simplicity, that the uttered sounds are completely regular, i.e. that their pronunciations match perfectly the normal pronunciations of the stored words, but even so mental activity must go on before the understander arrives at a single interpretation. At least two possible segmentations have to be considered in tandem until one of them can be eliminated, and it seems unlikely that this can take place until several words with some content have been found. It is hard to reconcile this kind of problem with the Fodorian view of language as an input module in which 'decoding is only a necessary (and totally automatic) preliminary' to 'inferring what your interlocutor intended to communicate' (Smith 1989: 9).
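The cumulative, parallel bookkeeping that such a string demands can be sketched in a few lines. Again this is my own illustration: the one-symbol-per-sound transcription 'letsendizpei' and the toy lexicon (with 'lets' standing for let's, 'sendiz' for senders, 'iz' for weak-form his, and 'pei' for pay) are invented for the example, and a real hearer would of course also rank hypotheses rather than merely enumerate them.

```python
# Keep every segmentation hypothesis alive as the sound string arrives,
# one sound at a time, in the spirit of the Best Fit discussion above.

LEXICON = {'let', 'lets', 'sendiz', 'end', 'iz', 'pei'}

def could_grow(fragment):
    # A hypothesis survives only while its unfinished word is still
    # a prefix of some lexical entry.
    return any(entry.startswith(fragment) for entry in LEXICON)

def hear(sounds):
    """Process the string cumulatively, carrying every
    (completed words, unfinished word) hypothesis in parallel."""
    hypotheses = [([], '')]
    for sound in sounds:
        survivors = []
        for words, partial in hypotheses:
            grown = partial + sound
            if grown in LEXICON:          # a word can end here...
                survivors.append((words + [grown], ''))
            if could_grow(grown):         # ...and/or keep on growing
                survivors.append((words, grown))
        hypotheses = survivors
    # Only hypotheses with no word left unfinished are complete analyses.
    return [words for words, partial in hypotheses if partial == '']

print(hear('letsendizpei'))
# Both segmentations survive to the end, because the string really is
# ambiguous: [['let', 'sendiz', 'pei'], ['lets', 'end', 'iz', 'pei']]
```

Notice that after 'lets' the two analyses part company but neither can yet be discarded; only the later grammatical and pragmatic context, not the decoding itself, can choose between them, which is what makes a 'totally automatic' decoding stage hard to credit.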