Paper presented to the ESRC-TLRP First Programme Conference,

University of Leicester, November 9-10, 2000

The teaching management of variation in students’ learning histories

Jan H.F. Meyera,b

Martin P. Shanahanb

aUniversity of Durham

bUniversity of South Australia

Institutional Background

This paper reviews progress to date in an ongoing research programme at the University of South Australia. The central research question is how the learning environment, and university teaching in particular, can positively respond to variation in students’ engagement of learning. The research is being conducted within the discipline of economics, but the focus here is on aspects of this work that are transferable to other contexts.

The research project is a challenging one for several reasons. To begin with, enrolment in first-year economics at the University of South Australia, as in many other universities, is large compared to that in other first-year subjects; in this case of the order of 1800 students per year (1200 in semester one and 600 in semester two). There are thus formidable logistical factors that impinge on any attempt to enhance the learning experiences of students. Deployable resources are finite, the student sample is large, and the organisation of the timetable is complex. Second, and again as in many other universities, failure and drop-out rates are comparatively high in economics. In some institutions drop-out rates of ten percent, followed by failure rates of 40 percent, are not uncommon in the first year of study. Third, the University of South Australia is a ‘new’ university in the Australian context. In comparison to the neighbouring ‘sandstone’ University of Adelaide (the U.K. equivalent of a ‘redbrick’ university), the University of South Australia student intake reflects more variation in terms of students’ socio-economic status and university entrance scores. Fourth, there is an institutional commitment to seek appropriate responses to this variation in a manner that can benefit students — responses that, within the University of South Australia, are transferable to other subject areas, and that collectively bring with them obvious strategic advantages in the competitive world of higher education provision. And finally there are ethical issues that need to be addressed. In the work reported here, for example, student participation has been voluntary and on a confidential basis.


Research background

The research background is also challenging. Although there is now some 25 years of research into student learning in higher education that has been driven by a consideration of the student experience, there is relatively little evidence of how knowledge about student learning can be explicitly and successfully transformed into effective teaching practice in a systematic and sustainable manner across different contexts. ‘Knowledge’ refers here both to the accumulated general findings of research on student learning and to what students may choose to disclose about their own learning engagement in a particular context; in this case on first-year entry to university. The general point here is that what students may be encouraged to disclose about their learning histories can largely be interpreted within a broader theoretical framework — in effect a ‘grounded theory’ substantively based on students’ learning experiences.

The idea of putting such knowledge to practical use for the benefit of students thus assumes, at the very least, that variation in student learning can be solicited and exhibited in a form that can specifically inform subsequent action(s) by subject practitioners rather than by educational researchers. This assumption, however, brings with it a new and challenging discourse about the individual locus of academic responsibility and accountability for managing (aspects of) such variation in student learning. And there is inevitably a measure of scepticism in any such endeavour, based on the simple fact that, generally, research directed at the modelling of student learning outcomes is an uncertain business, as attested to by a literature that abounds with references to statistically non-significant results. The simple truth is that many quantitative modelling studies are carried out in a classic mould dominated by assumptions of linearity between the various observables. The ubiquitous correlation coefficient, of typical absolute magnitude circa 0.3, says much about the capacity of linear models to adequately explain variation in the multivariate complexity of human learning engagement and its presumed consequences. The linear model is a powerful one, and it can undoubtedly provide valuable insights into observed phenomena in those contexts where such a model fits the data, but it remains true that there is something fundamentally absurd in attempting to study individual differences and their effects exclusively in terms of, in the simplest and classic case, y = a + bx.
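The weakness of the classic linear fit can be made concrete: a correlation of circa 0.3 corresponds to a coefficient of determination of roughly 0.09, leaving over ninety percent of the outcome variation unexplained. A minimal illustrative sketch (synthetic data, not the study's data; Python with numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic 'predictor' (e.g. an inventory subscale score) and an
# outcome constructed so that the true linear correlation is about 0.3.
x = rng.normal(size=n)
y = 0.3 * x + np.sqrt(1 - 0.3**2) * rng.normal(size=n)

# Classic linear model y = a + b*x, fitted by least squares.
b, a = np.polyfit(x, y, 1)
y_hat = a + b * x

# Proportion of variance in y explained by the linear model.
r_squared = 1 - np.var(y - y_hat) / np.var(y)
print(round(r_squared, 2))  # close to 0.09, i.e. ~91% unexplained
```

The point is not that the fit is wrong, but that a single straight line summarises almost none of the individual-level variation it is asked to explain.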

A second important assumption is that it is possible to address individual differences (variation) between ‘real people’ in terms of what they disclose rather than in terms of statistical abstractions of what they disclose. That is, there needs to be an appeal wherever possible to statistical procedures that retain the status of the individual, or at least of statistically similar subgroups of individuals.
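One family of procedures that retains individuals, or at least subgroups of statistically similar individuals, is cluster analysis of response profiles. The sketch below is purely illustrative (hypothetical subscale scores and a plain k-means pass), not the procedure used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inventory subscale profiles for 300 students:
# two loose subgroups plus noise (illustrative data only).
profiles = np.vstack([
    rng.normal(loc=[4.0, 2.0, 3.5], scale=0.5, size=(150, 3)),
    rng.normal(loc=[2.0, 4.0, 2.5], scale=0.5, size=(150, 3)),
])

def kmeans(data, k, iters=50, seed=0):
    """Group individuals into k statistically similar subgroups."""
    r = np.random.default_rng(seed)
    centres = data[r.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # Assign each individual to its nearest subgroup centre.
        labels = np.argmin(
            np.linalg.norm(data[:, None] - centres[None], axis=2), axis=1)
        centres = np.array([data[labels == j].mean(axis=0) for j in range(k)])
    return labels, centres

labels, centres = kmeans(profiles, k=2)
# Each student retains an individual subgroup membership rather than
# disappearing into a single sample-wide regression line.
print(np.bincount(labels))
```

The design choice matters: subsequent modelling can then condition on subgroup membership, so that conclusions remain attached to recognisable groups of ‘real people’ rather than to sample-wide averages.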

Key concepts

There are three interrelated key concepts in particular that underpin the present study: (a) qualitative variation in the manner in which students engage the content and the context of learning, (b) dissonant aspects of this variation, and (c) aspects of dissonance that may be modelled in terms of risk, in the sense of consequential learning outcomes that represent low academic achievement or failure.


Qualitative variation. There are two aspects of qualitative variation that are relevant to the present study. To begin with, any exhibited variation in student learning engagement (in both a univariate and multivariate sense) contains an evaluative component in both a temporal and consequential sense. (See Meyer (1999) for examples and a full development of this argument.)

The second aspect emanates substantively from phenomenographic research findings within the disciplines. These findings, as already mentioned, collectively represent a ‘grounded theory’ and, as briefly summarised here, they provide (as will be demonstrated in the case of economics) a useful starting point in any attempt to isolate sources of explanatory variation in student learning that may be discipline-specific. This is an important point generalisable to other contexts. To illustrate by way of a question: The ‘deep/surface’ metaphor is a powerful one but how might the qualitative contrast(s) that it represents be reconstituted in a discipline-specific form?

A starting point in answering this question is the simple fact that individual students differ (or vary) from one another in the manner in which they engage the discipline content and the context of learning. There is a wealth of research evidence to support this assertion of variation in both a qualitative and a quantitative (statistical) sense.

At the most basic level, and in terms of transferability across disciplines, qualitative (but especially phenomenographic) research has provided a rich source of evidence in support of the classic distinctions between, for example, ‘deep’ and ‘surface’ forms of learning engagement (Marton & Säljö, 1976a,b)[1], contrasting conceptions of learning[2] (Säljö, 1979; Marton, Dall’Alba & Beaty, 1993), contrasting forms of learning processes based on ‘memorising’[3] (Dahlin & Regmi, 1995), as well as contrasting forms of understanding of subject matter as a whole, as in introductory accounting (Lucas, in press) and in mathematics[4] (Crawford, Gordon, Nicholas & Prosser, 1994). There is also much evidence of qualitative variation in terms of conceptions within a range of disciplines, such as the ‘balance sheet’ in accounting (Lucas, 2000)[5], the ‘mole’ in chemistry[6] (Lybeck, Marton, Strömdahl & Tullberg, 1998), ‘price’ in economics[7] (Dahlgren, 1984; Dahlgren & Marton, 1978; Pong, 1999), and so on. The work on ‘price’ in economics is particularly relevant to the present study in terms of the psychometric operationalisation of contrasting conceptions of price determination — the refinement of which carries with it some salutary experiences in terms of the more general problem of determining whether qualitative variation in students’ experience of some phenomenon also represents a source of statistical variation (see Meyer & Shanahan (in submission-a) for a fuller treatment of this topic).


Foundation exploratory work

Work on the modelling project began in 1997 with the administration of a ‘general purpose’ inventory of student learning to incoming first-year students at the University of South Australia. This inventory owed its conceptual origins in part to the Approaches to Studying Inventory (ASI) reported by Entwistle & Ramsden (1983), but had undergone considerable development by the first author in the context of engineering. It was recognised at the time that the inventory was, at best, a proven instrument that might empirically reconstitute, in a recognisable form, and in a new response context, various generic aspects of learning engagement variation within dimensions such as intention, motivation, process, and so on. This in fact proved to be the case as exhibited in terms of a conceptually interpretable common factor model.

The general ‘response validity’ of the inventory was thus established; that is, there was no reason to reject the assumption that incoming first-year students of economics were exhibiting patterns of variation in their (school-leaving) learning engagement that differed from those of other entering first-year samples. It was furthermore established, via a general modelling procedure, that a small number of entry-level observables (an individual ‘risk’ classification based on inventory scores; entry-level matric score, which ranks all school leavers’ final school-year results; and whether or not students were exposed to large or small group teaching sessions) explained some 46 percent of the variation in the end of semester one learning outcomes (Cowie, Shanahan & Meyer, 1997). This explanatory power was considered quite remarkable given the conservative nature of the model and the intervening period of several months. Simply put, there appeared to be justification for asserting that students’ ‘learning histories’, in particular, represented a generic source of explanatory variation, presumed relatively stable features of which had an observable effect on learning outcomes several months later.
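The structure of such a model can be sketched as an ordinary least-squares fit of an end-of-semester mark on a few entry-level observables. All variable names and data below are hypothetical; only the overall shape of the model (a risk flag, an entry score, a teaching-format indicator, and variance explained as the summary statistic) follows the text:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1200

# Hypothetical entry-level observables (names are illustrative only):
risk = rng.integers(0, 2, n)         # inventory-based 'risk' flag
matric = rng.normal(70, 10, n)       # school-leaving entry score
small_group = rng.integers(0, 2, n)  # small- vs large-group teaching

# Synthetic end-of-semester mark with unexplained noise added.
mark = 20 - 8 * risk + 0.5 * matric + 4 * small_group + rng.normal(0, 8, n)

# Design matrix with an intercept column; fit by least squares.
X = np.column_stack([np.ones(n), risk, matric, small_group])
beta, *_ = np.linalg.lstsq(X, mark, rcond=None)

fitted = X @ beta
r_squared = 1 - np.var(mark - fitted) / np.var(mark)
print(round(r_squared, 2))
```

With only a handful of entry-level observables, a variance-explained figure of the order reported (circa 46 percent) is, as the text notes, unusually high for models of this kind over a period of several months.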

Thus encouraged, further exploratory work was undertaken in 1998. A new ‘learning history’ inventory was trialled that retained many of the features of the earlier inventory, augmented by a first generation of subscales intended to capture variation in aspects of learning engagement that were specific to economics. These specific subscales were inspired, in particular, by the previously mentioned work of Dahlgren and his colleagues (related to conceptions of ‘price’ determination), as well as by qualitative analyses of students’ written responses to simple questions relating, for example, to what an economic analysis is, and what economists do when they analyse the economy. These qualitative analyses suggested further dimensions of explanatory variation related, in particular, to misconceptions about economic phenomena. A further set of categorical observables captured whether economics was studied as a subject at school, the level of mathematics achieved, the status of English as a second language, and gender. End of semester one learning outcomes (in three parts: A, B, and C) were once again modelled, with further encouraging results. Part A of the end of semester examination is intended to measure factual understanding, Part B the application of economic concepts, and Part C a deeper understanding of economic phenomena.


Univariate analyses. The explanatory value of four of the categorical observables was clearly established (see Figure 1). It is clear from Figure 1a that students who studied economics at school scored higher average marks in each section of the final examination than students who did not. Figure 1b reveals that students who completed the lowest ‘level’ of school mathematics (business mathematics: BM, or quantitative studies: QS) recorded a lower mean score in each section of the final examination than students who completed Maths 1 (designed for students intending to study mathematics and sciences at university) and Maths 1S (a subject designed to expose students to some higher-level mathematics). The largest mean difference is seen in Figure 1c, where students for whom English is their second language score lower than others, while Figure 1d reveals a difference in the mean marks by gender. These observations are, again, quite remarkable in their statistical significance, as well as their multivariate import, given the modelling period and a formal exposure in that period to learning experiences that are implicitly assumed to erase (or at least reduce) the effects of entry-level differences.

Note: All figures in terms of University of South Australia data (1998)


However, the apparent long-term and statistically significant effect of the fifth observable (a quasi-continuous observable capturing variation in first-year entering students’ economic misconceptions, but modelled here as a categorical observable) was even more dramatic (see Figure 2a). In the first trialling of an ‘economic misconceptions’ source of explanatory variation there is clear evidence that formal exposure to the teaching of the subject during the first semester fails to remove the (statistically significant) effect of entry-level misconceptions about the subject. Of further interest (Figure 2b) is the fact that this observable exhibited a similar statistically significant effect in respect of incoming first-year students of economics at the neighbouring University of Adelaide, whose end of semester one learning outcomes (in three parts: A, B, C) are conceptually comparable to those at the University of South Australia. (See Meyer & Shanahan (in submission-b) for further details of this comparative aspect of the 1998 trialling.)