Systemic Functional Grammar
as a formal model of language:
a micro-grammar
for some central elements of the English clause
Robin P. Fawcett
Cardiff University
Part 1
A general overview of the grammar
1.1 Purposes and principles
Many linguists with interests in formal linguistic theory have a general awareness that Halliday proposed a number of insightful ideas about the nature of language in the 1960s and 1970s from a functional perspective.[1] Indeed, one still finds occasional ritual references to his work in contemporary papers by scholars who are working in a completely unrelated framework, e.g. where concepts such as (i) ‘theme’ and ‘rheme’, and (ii) ‘given’ and ‘new’ are concerned. But most formally-inclined linguists are almost completely unaware of the extent to which these ideas have been developed and formalized since those early days - to some extent by Halliday but mostly by others. This paper provides an introduction to some of this work.
Linguists with interests in grammar formalisms may also be aware that the theory of language to which the work of Halliday and others has led is known as Systemic Functional Linguistics (SFL), and that it has been found useful for an impressively wide range of tasks in applied linguistics, e.g. in the fields of language teaching, literary stylistics, critical discourse studies, forensic linguistics, speech pathology, computational linguistics, etc.
I suspect, however, that most formal linguists work on the general assumption that they do not need to consider SFL seriously as a formal theory of language, believing (i) that its basic concepts and its major claims have never been specified sufficiently explicitly by its leading theorist (i.e. Halliday) and (ii) that it is consequently hopelessly under-formalized.[2]
Those who are familiar with Halliday’s writings over the last fifty years will recognize that there is a good deal of truth in the first criticism.[3] But I want to suggest that the second criticism is unfair - and to demonstrate that SFL, once its formalisms are understood, offers a fully explicit model of language that is importantly and interestingly different from other current formally explicit models of language. It is especially different from those that have developed in the broadly Chomskyan paradigm.[4]
The two purposes of this paper are therefore:
1 to demonstrate that grammars developed in the framework of SFL can be (and indeed have been) formalized in a fully adequate manner (this formalization being expressed in terms of a different set of concepts from those used to characterize grammars developed in the Chomskyan paradigm of ‘formal linguistics’) - and
2 to illustrate this claim by presenting a small but non-trivial generative grammar that is explicitly derived from the principles of SFL.
Some scholars whose starting point was outside SFL - especially scholars working in Computational Linguistics - have discussed certain key aspects of the formal properties of generative SFGs, e.g. Patten and Ritchie 1987, Mellish 1988, and Brew 1992. However, they have restricted themselves predominantly (in the case of Patten and Ritchie) or wholly (in the case of Mellish and Brew) to the formal properties of system networks.[5] This is perhaps understandable, given the strong emphasis in Halliday’s writings on system networks as the core of the model.[6] However, system networks are, as we shall see, only one half of the SFL picture of the nature of language and, as all SFL theorists would agree, they require as a necessary complement an associated set of realization rules (Halliday 1993). Indeed, a SFG that has no realization rules simply cannot function as a generative grammar, and so as a formal model of language.[7]
This paper begins by describing the formal properties of a generative SFG. It then illustrates these by presenting a small generative grammar for some central elements of the English clause. Despite its small size, this grammar introduces a surprisingly large number of the central systems of English, it includes a number of interesting points about the nature of English, and the paper shows you how to operate it.
How small is ‘small’? One practical way to describe the size of the grammar is by saying that it is small enough to be graspable in a working session of around an hour. Another is to point out that the graph representation of its system network (a concept that we shall come to shortly) can be displayed on a single page (as the Appendix shows). Another metric that is sometimes used is to count the number of systems. By this metric the grammar to be described here is very small indeed; it has just 24 systems. In contrast, the full grammar from which this one is extracted has around 700-800 ‘grammatically realized’ systems (i.e. features realized in syntax, grammatical items, intonation and punctuation), and around 5,000 systems with features that are realized in lexical items.[8]
Part 1 describes the general principles of a generative SFG and the background to the particular ‘micro-grammar’ presented here. Part 2 consists of the grammar itself, together with a set of comments on its various sections, and Part 3 provides a brief summary of the conclusions to be drawn from this study.[9]
It would have been an interesting exercise to have additionally compared the approach to modelling language presented here with other types of generative grammar. However, the present task is sufficient for one paper, and if I had attempted that task too this paper would have become overlong. Such a comparative study must therefore await another occasion. Nonetheless I shall add occasional comments on the differences between the key concepts of a SFG and those that underlie a grammar that is the product of what we may call, using the term in a broad sense, ‘Chomskyan linguistics’.[10]
This micro-grammar covers a number of central aspects of English syntax such that (i) they are required in the generation of most clauses; (ii) many of them involve the realization of a single meaning in two non-adjacent elements; (iii) there are interdependencies between the grammar’s rules that are sufficiently complex to test aspects of the descriptive adequacy of this or any other formal grammar; and (iv) they include linguistic phenomena that test the ability of any grammar to handle the lack of a one-to-one fit between semantics and syntax - this being, of course, one of the characteristics of language that gives it its enduring fascination. I suggest that, if a grammar can handle all of the grammatical phenomena that I shall specify in the next section in a reasonably economical and transparent manner, it is de facto one that at least deserves a second look. And if it also has a proven ability to be expanded so that it can handle a high proportion of the full range of syntactic and semantic phenomena that are currently recognized in the field - including applications to languages other than English - it is one that should claim further serious study. The grammar to be presented here is such a grammar, in that it is derived from a very large computer-implemented generative grammar of English, and it has been applied successfully to modelling small grammars of both Chinese and Japanese (each considerably larger than the micro-grammar presented here).
Specifically, the micro-grammar to be presented here is an extract from a very full grammar for English developed between 1987 and the present for GENESYS, the ‘sentence-generating’ component of COMMUNAL.[11] COMMUNAL is a large-scale project whose goal is to explore the nature and functions of language as a component of a communicating mind, through the medium of modelling it in a computer.[12] GENESYS has been described by Halliday (1994:xii) as ‘among the largest grammars existing anywhere in computational form’, and by Butler (1993:4503) as ‘the largest computer-based systemic grammar in the world’. Indeed, the defining characteristic of a micro-grammar is (or should be) that, even though it is small, it has been extracted from a grammar with a broad coverage of syntactic, lexical, and preferably intonational and punctuational phenomena. The concept of a ‘micro-grammar’ therefore stands in clear contrast to a ‘pilot grammar’ or a ‘toy grammar’, for which there exists no large equivalent grammar.
Interestingly, many of the areas of language included in this micro-grammar involve structural phenomena for which the concept of the syntactic transformation might at first seem a neat solution. Certainly, this was Chomsky’s view in Syntactic Structures (1957) - and to this day it remains the view of a large proportion of the linguistics community. One of my purposes here is to illustrate the fact that a Systemic Functional Grammar is able to handle many of the problems in syntax for which Chomsky first proposed as a solution the concept of the syntactic transformation - and to do so without using ‘movement’ and ‘deletion’ rules. In other words, the present grammar never builds a structure which it later needs to change. In this respect it shares at least one goal with some grammars that were born out of the Chomskyan tradition, e.g. Gazdar’s Generalized Phrase Structure Grammar (Gazdar et al 1985).[13]
1.2 The coverage of the micro-grammar
Using traditional terminology as far as possible, I shall now describe the micro-grammar’s coverage. It provides for the following phenomena: (i) simple types of ‘mood’ (‘declaratives’, ‘Yes-No questions’, ‘imperatives’ and ‘conducive questions’); (ii) a small sub-set of the ‘modal verbs’ that express the performer’s assessment of the validity of what is being expressed; (iii) many ‘tense’ forms (including the two main types of ‘aspect’); (iv) the ‘passive’ construction; (v) the idiosyncratic syntactic and morphemic behaviour of the verb be; (vi) ‘emphatic’ forms of do; and (vii) simple ‘negation’.[14]
As most readers will know, many of these phenomena are handled in transformational grammars by syntactic transformations. A transformational generative grammar uses a series of transformational rules to generate, from the ‘deep structure’ of what would, if no such rules were applied, emerge as a ‘simple active affirmative declarative’ sentence, a range of different constructions. These are equivalent to each other in their ‘experiential’ meaning, but they are also one or more of the following: ‘passive’, ‘negative’, ‘emphatic’ or ‘interrogative’ - and such grammars use the ‘affix-hopping’ transformation in order to get the right endings on the auxiliary verbs and the main verb. Here, however, there are no ‘re-write’ rules to generate the initial structure, and no transformational rules that alter any existing structures. Instead, the grammar simply stores up the choices of semantic features that are made for each semantic unit, and then generates the appropriate syntactic unit. In other words, it gets the syntax right first time - an achievement that should give those habituated to thinking in terms of syntactic transformations pause for thought. The grammar to be presented here requires just two and a half pages to state the relevant rules (see the Appendix), and these are sufficient to represent a grammar that handles some of the more difficult areas of syntax and morphology in some central portions of the grammar of English. And, once this part of the grammar is in place, it can quite easily be extended to handle most of the other major grammatical phenomena of English - as it has been, in Fawcett, Tucker and Lin 1993 and 1996, and Fawcett, Tucker and Castel 2004 and 2006.
The main focus of this micro-grammar is on the complex interplay in the grammar of English between (i) the discontinuous realizations of meanings in the items have ... en/ed etc, be ... ing, and be ... en/ed, etc; (ii) the variations in sequence between the Subject and the Operator; (iii) the various possible ‘conflations’ between the Operator and the three most frequent types of Auxiliary Verb; and (iv) the various drastic ways in which the picture changes when the Main Verb is a form of be.[15]
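The discontinuous realizations in (i) can be sketched in miniature. The following Python fragment is purely illustrative (the feature names, the tiny morphological table and the function are my own inventions, not the rules of the micro-grammar itself): each selected feature contributes an auxiliary together with the ending that it imposes on the next verb in the chain, so the endings come out right first time, with no ‘affix-hopping’.

```python
# Illustrative sketch only: a direct, transformation-free mapping from
# semantic features to a verbal group. The feature names and the tiny
# morphological table below are hypothetical, not the grammar's own.

FORMS = {
    "take": {"s": "takes", "ing": "taking", "en": "taken"},
    "be":   {"s": "is",    "ing": "being",  "en": "been"},
    "have": {"s": "has",   "ing": "having", "en": "had"},
}

def verb_group(main_verb, features):
    """Build the verbal group directly from the selected features."""
    # Each feature adds an auxiliary plus the ending it imposes on the
    # NEXT verb in the chain - the discontinuous realization.
    chain = []                      # list of (verb, ending_for_next)
    if "perfect" in features:
        chain.append(("have", "en"))
    if "progressive" in features:
        chain.append(("be", "ing"))
    if "passive" in features:
        chain.append(("be", "en"))
    chain.append((main_verb, None))

    words, ending = [], "s"         # first verb carries present-tense -s
    for verb, next_ending in chain:
        # Look up the inflected form; fall back to the base form.
        words.append(FORMS.get(verb, {}).get(ending, verb) if ending else verb)
        ending = next_ending
    return " ".join(words)
```

On this sketch, selecting all three features for *take* yields *has been being taken*, with each ending contributed by the feature realized two words earlier.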
We shall begin our examination of a SFG by asking: What are its main components?
1.3 The components of a Systemic Functional Grammar
In a generative Systemic Functional Grammar, the process of generation is controlled by the system networks.[16] These model the meaning potential of the language (Halliday 1970:142), and they consist of statements about relationships between semantic features. We shall see in the next section how such a system network operates. The problems of (1) getting the elements of the structure that the network generates into the correct sequence and (2) ensuring that they are expounded by the correct items are handled by the realization rules and the potential structures, and we shall examine these in Section 1.5.
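As a first, rough illustration of this component, a system network can be thought of as a set of choices between features, where a chosen feature may in turn serve as the entry condition for further systems. The following sketch uses invented feature names and a deliberately simplified, strictly taxonomic network (real networks also allow conjunctive and disjunctive entry conditions, which this fragment ignores):

```python
# Hypothetical miniature system network: each entry condition (a
# feature) opens a system, i.e. a set of features from which exactly
# one must be chosen. Feature names here are invented for illustration.

NETWORK = {
    "clause":      ["information", "directive"],       # a MOOD-like system
    "information": ["giving", "seeking"],
    "seeking":     ["polarity-seeker", "content-seeker"],
}

def traverse(entry, choose):
    """Collect a selection expression by walking through the network."""
    selected = []
    while entry in NETWORK:            # the feature opens a further system
        entry = choose(NETWORK[entry]) # pick one feature from that system
        selected.append(entry)
    return selected
```

For example, always choosing the first feature from each system gives the selection expression `["information", "giving"]`; the set of features selected on one pass through the network is then the input to the realization rules.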
Figure 1 shows (i) the two main components of the grammar (on the left) and (ii) their outputs (on the right). As the labels above the diagram suggest, it is the grammar that specifies the two ‘potentials’ of a language: one at each of the two levels of meaning and form.[17] Figure 1 also shows the outputs - i.e. the ‘instances’ - that are generated from the potentials at each of the two levels.
The grammar that is set out in Part 2 constitutes the two potentials, and, if you operate that grammar in the way to be described in the rest of Part 1, you will automatically generate the instances that are specified by those potentials.
Figure 1: The main components of a Systemic Functional Grammar and their outputs
Figure 1 also shows that the output at the level of meaning is the input to the level of form. Notice that, while most types of realization rule build structures (as shown by the arrow pointing to the right), one type re-enters the system network (as shown by the arrow to the left). It is through this second type of realization rule that the different layers of a tree diagram that represents the structure of a text-sentence are built up - as we shall see in Section 1.5 and, in full detail, in Part 2. For a much fuller discussion of the implications of this diagram, see Chapter 3 of Fawcett 2000a.
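The ‘re-entry’ type of realization rule can be illustrated schematically. In the following sketch (the rule format and unit names are my own simplifications, not those of the grammar in Part 2), an element is either expounded directly by an item or filled by a unit; in the latter case generation re-enters the grammar for that unit, adding a further layer to the tree:

```python
# Hypothetical sketch of re-entry: an element whose value is None is
# expounded directly, while an element paired with a unit name sends
# generation back through the grammar, building the next tree layer.

def generate(unit, rules):
    """Recursively build a layered structure by re-entering the grammar."""
    layer = []
    for element, filler in rules[unit]:
        if filler is None:
            layer.append(element)                     # expounded by an item
        else:
            layer.append((element, generate(filler, rules)))  # re-enter
    return (unit, layer)

# e.g. a clause whose Subject is filled by a nominal group:
RULES = {
    "clause": [("Subject", "nominal group"), ("Main Verb", None)],
    "nominal group": [("head", None)],
}
```

Here `generate("clause", RULES)` produces a two-layer structure, the Subject element being filled by a nominal group generated on the second pass; this is the mechanism by which the layers of the tree diagram for a text-sentence are built up.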