
Word counts:

·  Abstract: 57

·  Main text: 972

·  References: 344

·  Entire text (total + addresses etc.): 1426

Structural priming supports grammatical networks

Richard Hudson

·  Institution: UCL

·  Institutional address (never used): University College London, Gower Street, London WC1E 6BT

·  Home address: 57 Park Avenue North, London N8 7RS

·  Home phone: 020 8340 1253

·  www.dickhudson.com

Abstract

As B&P argue, structural priming has important implications for the theory of language structure, but these implications go further than B&P themselves suggest. Priming implies a network architecture, so the grammar must be a network, and so must sentence structure. Instead of phrase structure, the most promising model for syntactic structure is enriched dependency structure, as in Word Grammar.

Main text

Branigan and Pickering (B&P) rightly argue that we theoretical linguists should pay attention to the massive evidence for structural priming which they review, but their argument actually suggests an even more radical direction for linguistic theory. In a nutshell, structural priming shows that grammars are networks; and if that’s true, then linguists should be developing network-based models of grammar and of sentence structure.

Take Bock’s classic experiment with which B&P open their case, in which one passive sentence primes another – e.g. an experimental subject is more likely to produce a passive sentence describing lightning hitting a church tower after reading The referee was punched by one of the fans than in a neutral control situation. How does this influence work? For B&P “the neural underpinnings of priming are not well understood”, but the standard explanation for priming (Reisberg, 2007, pp. 257–280) sees it as the effect of activation in a neural network spilling over from the intended target to its network neighbours, thereby making the latter more accessible. In lexical priming, for example, reading nurse primes this word’s network neighbours, so that doctor becomes easier to retrieve than it would otherwise be. But this explanation only makes sense if knowledge is stored as a network of interconnected nodes; so the relevant units must be connected in a network, and if the units concerned are grammatical categories such as active and passive, these too must be part of a network.
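For concreteness, the spreading-activation account can be sketched in a few lines of code. Everything in the sketch – the toy network, the node names, the spill fraction – is my own illustrative assumption, not part of B&P’s account or of Reisberg’s:

```python
# A toy sketch of spreading activation. The network contents and the
# spill fraction are illustrative assumptions, not a model anyone has
# proposed; the point is only the mechanism: activating one node leaves
# its network neighbours partially active.

links = {
    "nurse": ["doctor", "hospital"],
    "doctor": ["nurse", "hospital"],
    "hospital": ["nurse", "doctor"],
}

activation = {}

def prime(target, amount=1.0, spill=0.5):
    """Activate a target node and let a fraction of that activation
    spill over to its immediate network neighbours."""
    activation[target] = activation.get(target, 0.0) + amount
    for neighbour in links.get(target, []):
        activation[neighbour] = activation.get(neighbour, 0.0) + amount * spill

prime("nurse")                   # reading 'nurse'...
print(activation.get("doctor"))  # 0.5: 'doctor' is now easier to retrieve
```

Replace nurse and doctor with nodes such as active and passive and exactly the same mechanism yields structural priming – but only if those grammatical categories are themselves nodes in the network.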

This argument is familiar from the literature on connectionist models of processing and learning (Dell, Chang, & Griffin, 1999; Elman et al., 1996), but linguistic theories are pitched at a higher level of abstraction than the neurons that carry activation, so the two streams of research have hardly met. For B&P, as for most linguists, language consists of abstract units such as words, phrases, categories and relations; so if these are part of a network, this must be a symbolic network. On the other hand, the activation responsible for priming in this network is a property of neural networks, so it is reasonable to assume that language is a symbolic network supported by a neural network. In other words, language belongs to the mind, while activation belongs to the brain.

The network view of language is widely accepted in modern theories of the lexicon (Allan, 2006), with its multiple types of relation (meaning, realization, spelling, word class and so on) and its many-to-many mappings. Structural priming shows that networks are just as relevant to syntax: a sentence’s structure combines patterns such as voice, tense and transitivity into a network, and each pattern is sufficiently active to prime other examples of the same pattern. These patterns are the ‘constraints’ of any constraint-based theory of syntax, including B&P’s preferred linguistic model, Parallel Architecture. In short, a sentence’s grammatical structure must be a rich network of interacting and active nodes.
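The same point can be made concrete with a toy symbolic network whose links are typed. The entries and relation names below are my own illustrative assumptions, not a claim about any particular grammar or lexicon:

```python
# A toy symbolic network with typed links, encoded as (unit, relation,
# value) triples. The entries are illustrative assumptions; what matters
# is that one shared node (here PASSIVE) links many units, so activating
# it while processing one passive clause primes every other.

network = [
    ("punched",     "word-class", "verb"),
    ("punched",     "meaning",    "HIT"),
    ("was punched", "voice",      "PASSIVE"),
    ("was hit",     "voice",      "PASSIVE"),  # many-to-one mapping
]

def units_with(relation, value):
    """All units linked to a given value by a given relation type."""
    return [unit for unit, rel, val in network if rel == relation and val == value]

# Both clauses meet at the single PASSIVE node, the node whose residual
# activation would carry the structural-priming effect.
print(units_with("voice", "PASSIVE"))  # ['was punched', 'was hit']
```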

But where does this leave phrase structure, which is taken for granted in virtually every modern theory of syntax (and, disappointingly, by B&P themselves)? Phrase structure is an extremely impoverished theory of the human mind, which recognises only one possible mental relation: the part-whole relation between smaller and larger units. According to phrase structure, direct relations between individual words are not possible. In the sentence Linguistic theories should work, for instance, the only possible relations are those shown in a tree such as the one above the words in Figure 1: the word linguistic can be related to the phrase linguistic theories, but not to theories. Moreover, if phrase structure is right, phrases cannot intersect; so if linguistic theories is part of the phrase linguistic theories should work, it cannot also be part of linguistic theories work. But as we all know, both of these assumptions are problematic: words do relate directly to one another (e.g. for agreement and government), and complex relations such as raising (from work to should) do exist.

Figure 1. Phrase structure compared with network structure
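For readers who prefer code to diagrams, a phrase-structure tree like the one above the words in Figure 1 can be rendered as pure part-whole containment. The exact bracketing below is my own assumption, since only the figure shows the intended tree:

```python
# Phrase structure as pure part-whole containment. The bracketing is an
# illustrative assumption; the point is that the ONLY relation available
# is node -> immediate parts, so 'Linguistic' relates to the NP but never
# directly to 'theories', and no word can sit in two intersecting phrases.

tree = ("S", [
    ("NP", ["Linguistic", "theories"]),
    ("VP", ["should", ("VP", ["work"])]),
])

def parts(node):
    """The single relation a phrase-structure tree encodes."""
    return node[1] if isinstance(node, tuple) else []

print(parts(tree))  # the two parts of S -- and that is all the tree can say
```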

But suppose syntactic structure is actually a network, not a tree. In that case, words can relate directly to one another, and multiple links are also possible. One such analysis is shown by the labelled arrows below the words in the figure for Linguistic theories should work. The labelled dependencies from theories to linguistic and from should to theories are typical of the very ancient tradition of dependency analysis (Percival, 1990) and of more recent work in theoretical and descriptive linguistics (Tesnière, 1959, 2015; Sgall, Hajičová, & Panevová, 1986; Mel’čuk, 2009) as well as computational linguistics (Kübler, McDonald, & Nivre, 2009) and psycholinguistics (Futrell, Mahowald, & Gibson, 2015; Gildea & Temperley, 2010; Jiang & Liu, 2015; Ninio, 2006). All this work builds on the simple idea that our minds are free to recognise relations between words – an idea espoused some time ago by one of B&P (Pickering & Barry, 1991).
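Such an analysis reduces to a flat list of labelled word-to-word arcs, with no phrasal nodes in between. The encoding below is my own sketch, and the arc labels are assumptions where the figure is not quoted directly:

```python
# The dependencies below the words in Figure 1, encoded as (head, label,
# dependent) arcs. The encoding and the 'modifier' label are illustrative
# assumptions; 'subject' and 'complement' follow the text. Every relation
# links one word directly to another.

dependencies = [
    ("theories", "modifier",   "Linguistic"),
    ("should",   "subject",    "theories"),
    ("should",   "complement", "work"),
]

def dependents_of(head):
    """All (label, dependent) pairs headed by a given word."""
    return [(label, dep) for h, label, dep in dependencies if h == head]

print(dependents_of("should"))  # [('subject', 'theories'), ('complement', 'work')]
```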

But the network notion takes us further than this, to the idea that such relations need not be formally equivalent to a tree. In the example, theories is the subject not only of should, but also of work – a pattern that goes well beyond the formal limits of trees. This example illustrates the enriched dependency structure of one particular modern theory of grammar, Word Grammar (Duran-Eppler, 2011; Gisborne, 2010; Hudson, 2007, 2010). In this theory, syntactic structure is so rich that it can even recognise mutual dependency in cases such as Who came?, where who depends (as subject) on came and came depends (as complement) on who. Mutual dependency is absolutely impossible in any tree-based theory, but of course it is commonplace in ordinary cognition (e.g. in social structures).
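Formally, dropping the tree constraint is a small change to such a representation. The two checks below are my own sketch of how the resulting enriched structures differ from trees; the arc labels follow the text:

```python
# Enriched dependencies: the same arc format as before, but with no tree
# constraint, so one word may have several heads and two words may depend
# on each other. The test functions are illustrative assumptions about
# how one might verify that these structures are not trees.

enriched = [
    ("should", "subject",    "theories"),
    ("work",   "subject",    "theories"),  # 'theories' has two heads
    ("should", "complement", "work"),
]

who_came = [
    ("came", "subject",    "who"),
    ("who",  "complement", "came"),        # mutual dependency
]

def multiple_heads(arcs):
    """True if some word depends on more than one head (impossible in a tree)."""
    heads = {}
    for head, _, dep in arcs:
        heads.setdefault(dep, set()).add(head)
    return any(len(hs) > 1 for hs in heads.values())

def mutual_dependency(arcs):
    """True if two words each depend on the other (a cycle, impossible in a tree)."""
    pairs = {(head, dep) for head, _, dep in arcs}
    return any((dep, head) in pairs for head, dep in pairs)

print(multiple_heads(enriched))     # True: subject of both 'should' and 'work'
print(mutual_dependency(who_came))  # True: 'who' and 'came' head each other
```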

In conclusion, structural priming shows not only that a grammar is a network but also that enriched dependency structure is more plausible than phrase structure as a model of mental syntax.

References

Allan, K. (2006). Lexicon: structure. In K. Brown (Ed.), Encyclopedia of Language and Linguistics, Second Edition (pp. 148–151). Oxford: Elsevier.

Dell, G., Chang, F., & Griffin, Z. (1999). Connectionist models of language production: Lexical access and grammatical encoding. Cognitive Science, 23(4), 517–542.

Duran-Eppler, E. (2011). Emigranto: The syntax of German-English code-switching. Vienna: Braumüller.

Elman, J., Bates, E., Johnson, M., Karmiloff-Smith, A., Parisi, D., & Plunkett, K. (1996). Rethinking Innateness: A Connectionist Perspective on Development. Cambridge, MA: MIT Press.

Futrell, R., Mahowald, K., & Gibson, E. (2015). Large-scale evidence of dependency length minimization in 37 languages. Proceedings of the National Academy of Sciences, 112(33), 10336–10341.

Gildea, D., & Temperley, D. (2010). Do Grammars Minimize Dependency Length? Cognitive Science, 34, 286–310.

Gisborne, N. (2010). The event structure of perception verbs. Oxford: Oxford University Press.

Hudson, R. (2007). Language networks: the new Word Grammar. Oxford: Oxford University Press.

Hudson, R. (2010). An Introduction to Word Grammar. Cambridge: Cambridge University Press.

Jiang, J., & Liu, H. (2015). The Effects of Sentence Length on Dependency Distance, Dependency Direction and the Implications – Based on a Parallel English-Chinese Dependency Treebank. Language Sciences, 50, 93–104.

Kübler, S., McDonald, R., & Nivre, J. (2009). Dependency Parsing. Synthesis Lectures on Human Language Technologies, 2, 1–127.

Mel’čuk, I. (2009). Dependency in Linguistic Description. In A. Polguère & I. Mel’čuk (Eds.), Dependency in natural language. Amsterdam: Benjamins.

Ninio, A. (2006). Language and the learning curve: A new theory of syntactic development. Oxford: Oxford University Press.

Percival, K. (1990). Reflections on the History of Dependency Notions in Linguistics. Historiographia Linguistica, 17, 29–47.

Pickering, M., & Barry, G. (1991). Sentence processing without empty categories. Language and Cognitive Processes, 6, 229–259.

Reisberg, D. (2007). Cognition: Exploring the Science of the Mind (3rd media ed.). New York: Norton.

Sgall, P., Hajičová, E., & Panevová, J. (1986). The Meaning of the Sentence in its Semantic and Pragmatic Aspects. Prague: Academia.

Tesnière, L. (1959). Éléments de syntaxe structurale. Paris: Klincksieck.

Tesnière, L. (2015). Elements of Structural Syntax (T. Osborne & S. Kahane, Trans.). Amsterdam: Benjamins.