
Surveying the student experience: institutional collaboration and benchmarking

Mantz Yorke, Lancaster University and Bernard Longden, Liverpool Hope University

Abstract

This paper discusses methodological issues relating to a survey of the experience of part-time students in higher education, and indicates that the methodology used is adaptable to a variety of situations. Whilst the focus of the paper is on methodology and its transferability, some initial findings from the survey are presented.

Introduction

In the UK, the experiences of part-time students have received little research attention since the pioneering study by Bourner et al (1991) – save for intra-institutional studies that are rarely communicated beyond institutional boundaries – despite the increasing numbers of students who choose to participate in higher education via this mode of study. In 2005-06, there were more than 850,000 students enrolled part-time in higher education programmes at sub-degree, first degree and taught master's level across the whole of the UK.

The lack of attention to the experiences of part-time students is all the more surprising because of national policy imperatives regarding both participation and workforce development. Changes being mooted in the funding methodology for England in order to support flexibility in provision and completion (HEFCE, 2007, para 59ff) add an extra dimension.

Bourner et al (1991) attracted 2876 usable responses, roughly a 70% response rate, to their wide-ranging survey of students on part-time degree programmes. There was widespread satisfaction with the academic courses, but the responses relating to institutional facilities were more mixed. A survey of 556 part-time students in Scotland (Schuller et al, 1999) came to broadly similar conclusions.

More recently, as the third strand in a three-strand study of part-time students for Universities UK, Callender et al (2006) conducted an on-line survey of 2654 part-time students that focused to a considerable extent on economic aspects of engagement and barely touched on students' course experiences – indeed, they took the view that course experiences could largely be left out of consideration because of the National Student Survey [NSS].

Data from the NSS in 2005 and 2006 (Table 1) show that, at the time when their studies were approaching their conclusion, students had strongly positive perceptions regarding their experiences in higher education. It should be noted, however, that part-time students more often responded to the NSS by phone, which tends to be associated with more positive ratings on the NSS (Surridge, 2007).

Table 1. NSS data for 2005 and 2006. Source: Surridge (2007, Annex D).

The authors, who conducted a pioneering survey of the experiences of full-time first-year students in the UK (Yorke and Longden, 2008), saw the need for a similar study of the experiences of part-time students at all stages of engagement with higher education (i.e. not relying on the retrospection inevitable with the NSS). Such a study would provide 'baseline' data, emphasising students' experiences of their programmes, that would be useful across the sector for a number of purposes. With a substantial proportion of part-time study at undergraduate level concentrated in 'post-1992 universities', it seemed sensible to focus attention on these institutions in the first instance. Approaches were made to various potential funders for support, but without success, despite the policy significance of the study and the potentially high level of return on any funder's investment that could accrue from the proposed methodology (see below).

First thoughts

The original plan had been for a postal hard-copy survey which would be optically readable in order to produce, from ‘check box’ responses, a data matrix for statistical analysis. There would in addition be opportunities for respondents to comment, at whatever length they chose, on matters that they felt needed to be expressed.

The costs of such a survey (design, production, administration and postage) are roughly £2 per person surveyed. In the post-1992 institutions, part-time student enrolments ran up to approximately 16,000 [1], though the median enrolment was considerably lower. Involving around a dozen institutions of any size would represent roughly 10 per cent of the total sectoral enrolment, and would push the total costing towards £200,000. It was appreciated that raising such a level of funding would, in the relatively short time available for seeking it, be likely to be challenging in the extreme. An alternative approach offered a better prospect of achieving results.
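To make the costing explicit, the following minimal sketch reproduces the back-of-envelope arithmetic. The £2 unit cost and the roughly 10 per cent sampling fraction are taken from the paragraph above, and the sector enrolment figure from the Introduction; everything else is illustrative.

```python
# Back-of-envelope cost model for the proposed print-based survey.
# Figures are those quoted in the text; nothing here is project code.

COST_PER_STUDENT = 2.00      # GBP: design, production, administration, postage
SECTOR_ENROLMENT = 850_000   # UK part-time enrolments, 2005-06
SAMPLE_FRACTION = 0.10       # around a dozen institutions, ~10% of the sector

students_surveyed = SECTOR_ENROLMENT * SAMPLE_FRACTION
total_cost = students_surveyed * COST_PER_STUDENT

print(f"Students surveyed: {students_surveyed:,.0f}")  # 85,000
print(f"Estimated cost:    £{total_cost:,.0f}")        # £170,000, i.e. 'towards £200,000'
```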

A number of post-1992 institutions were approached regarding their possible involvement on a partial self-help basis. If they would take responsibility for distributing the survey to their part-time students, the costs of central administration and management could be reduced to around £40,000. In return, they would have access to data – in suitably anonymised form – from the other (broadly cognate) institutions participating in the study. Most of the institutions approached agreed to participate on this basis, and it was a major disappointment that the search for funding proved unavailing. If the above approach could be termed a Plan B, a Plan C would be needed if anything at all were to be achieved during the academic year 2007-08.

Cutting according to the cloth

The only feasible way of making progress was to cut out the relatively expensive print-based survey and instead to exploit the potential of the internet. The co-directors of the project had available a small amount of money which could be used to cover the costs of convening meetings of participating institutions and of setting up an internet-based survey. The institutions that had expressed their willingness to engage in the print-based survey indicated that they were prepared to contribute appropriately to the electronic survey. In practice, this meant that their major commitment would be to compile lists of relevant students' e-mail addresses and to e-mail those students an invitation to participate (and probably a reminder), along the lines sketched below.
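By way of illustration only, the following is a sketch of the kind of mail-merge a participating institution might have run. The addresses, file layout and survey URL are hypothetical placeholders, not details of the actual project.

```python
# Hypothetical sketch of an invitation mail-merge: read a list of
# student names and e-mail addresses from a CSV file and send each
# student a link to the on-line survey.
import csv
import smtplib
from email.message import EmailMessage

SURVEY_URL = "https://example.ac.uk/part-time-survey"  # placeholder, not real

def send_invitations(csv_path: str, smtp_host: str = "localhost") -> None:
    with smtplib.SMTP(smtp_host) as smtp, open(csv_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: name, email
            msg = EmailMessage()
            msg["From"] = "survey-admin@example.ac.uk"  # placeholder address
            msg["To"] = row["email"]
            msg["Subject"] = "Survey of part-time students"
            msg.set_content(
                f"Dear {row['name']},\n\n"
                f"Please consider completing the survey at {SURVEY_URL}.\n"
            )
            smtp.send_message(msg)

if __name__ == "__main__":
    send_invitations("part_time_students.csv")  # assumed file name
```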

Design issues

The response rate to electronic surveys is often very low (the survey of part-time students for Universities UK attracted an overall rate of 4.7%: Callender et al, 2006), and the authors' experience had shown that lengthy surveys (in the UK, if not in Australia and the United States) were prone to respondents disengaging part-way through. The survey of in-institution experience would therefore have to be shorter than originally envisaged for the print-based approach. This constraint was particularly acute because the wide variation in demographic background within the part-time student body necessitated a considerable number of demographic questions if analyses were to be meaningful.

The identification of institutions

A majority of the ten post-1992 universities (from across the UK) that were originally approached had indicated their willingness to participate, though this majority included only institutions from England and Scotland. For an initial survey that would inevitably be limited in size, it was felt that representativeness of the sample in all respects was less important than having a group of institutions that were committed to the work: subsequent studies would need to take representativeness more fully into account. Other post-1992 universities were approached on an ad hoc basis, in the main where they had expressed an interest in part-time provision and/or institutional research activity. Eleven institutions made up the final sample.

The development of the survey instrument

A meeting was convened at Liverpool Hope University in early December 2007; its purpose was to develop amongst the participants a shared understanding of the survey and what it could reasonably be expected to achieve. It would also lead to agreement on the nature of 'the deal' between the participants, including how data would be shared and handled, and what principles would influence the reporting of the work to a wider audience.

Although the original idea had been to focus attention on undergraduate-level study, there was sufficient desire amongst the participating institutions to include postgraduate taught programmes, and these were therefore added to the survey.

The starting-point was the survey instrument prepared for the first-year experience [FYE] survey (Yorke and Longden, 2007). This was deemed too long for the electronic survey, and in any case its item-pool was not entirely appropriate. A subset of the FYE items, to which were added some new items (mainly demographic), was considered by project team members and iterated between them electronically until all 'could live with' the revision, even if it was not exactly what they would have preferred.

‘The deal’ between the institutions and the project

With collaborative work, it is important to be clear from the outset as to the nature of ‘the deal’ between the various parties involved in the work (see earlier comment). With this in mind, documentation prepared for the initial meeting of the project team included a paper dealing with such matters. There is always the possibility of further issues arising as the project progresses, and it is important to be alert to them, and attend to them without delay since mutual trust can be compromised if matters are not opened to discussion or are allowed to drift.

Ethical issues

The survey was planned to align with the ethical guidelines published by the British Educational Research Association (BERA, 2004). Participating institutions took differing stances regarding the need to send the proposal for the survey through their ethics committees. The majority took the view that the survey was part of the institution’s normal quality assurance and enhancement operations, and that this was sufficient as regards approval. Two, however, took the view that the involvement of human subjects required them to seek formal approval from their ethics committees. Whilst this added an extra complication to the organisation of the survey, it did throw up some useful points that were incorporated into methodological practice (for example, making the survey accessible in some form to all possible respondents). It was made clear that the mode of reporting would be such that no individual or institution would be identifiable, and that the only people to deal with the individual responses would be the project’s co-directors. A report of the survey would be made available to the students in the institutions concerned.

Constraints

It was originally envisaged that the survey would be run during March 2008. This would allow students in their first year to comment on feedback on work – an issue that has consistently been shown to be of concern in responses to the NSS. However, at that time institutions would be involved in encouraging final-year students (some of whom would be part-time) to complete the NSS, and it was decided to defer the administration of the survey until the following month, after the (unusually early) Easter break. It had to be left to institutions to decide exactly when invitations to participate would be sent out, since local considerations (such as term dates and internal surveys) had to be taken into account. It was hoped that the vast bulk of responses would be received by 9 May, but 'late entries' would be accepted for the final analyses. In practice, some variation in the dates of e-mailing invitations and of receiving responses would be unlikely to have a significant impact on the results.

Some initial results

The dataset that had accumulated by 9 May 2008 was cleaned of duplicate and other questionable responses. The results reported below are drawn from this dataset and are interim. At the time of writing the dataset is continuing to grow, and is expected to approach 3,000 responses by the time the survey is closed. There are currently 815 responses from taught postgraduates, 854 from first-degree students, and 357 from students on foundation degrees, HNC/D programmes and other sub-degree courses. In Table 2, the interim data from first-degree and other programmes at undergraduate level are compared with those from Phase 1 of the study of the first-year experience: the postgraduate experience is arguably rather different.
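For illustration, a minimal sketch of the kind of cleaning described above (removal of duplicates and of questionable, near-empty submissions) is given below. The column names and the completeness threshold are assumptions, not the project's actual procedure.

```python
# Illustrative cleaning of accumulating survey responses with pandas.
import pandas as pd

def clean_responses(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)

    # Drop exact duplicate submissions (e.g. a double-clicked 'submit').
    df = df.drop_duplicates()

    # Keep at most one response per respondent identifier (assumed column).
    df = df.drop_duplicates(subset="respondent_id", keep="first")

    # Treat near-empty submissions as questionable: require at least half
    # of the items to have been answered (assumed threshold).
    min_answered = int(0.5 * df.shape[1])
    df = df.dropna(thresh=min_answered)

    return df.reset_index(drop=True)
```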

Table 2. Comparisons of interim findings from the survey of part-time students with those from the study of the first-year experience.

Notes. All items are scored such that higher ratings indicate a more positive perception of the matter at issue. Items in bold are identical in the two surveys. Items in normal font are closely similar; the item on academic grouping (marked ##) was stated negatively in the FYE study. Items in italic are particular to the survey of part-time students. n/a = not asked.

Bearing in mind the provisionality of the data in Table 2, the following points seem to be emerging.

·  Part-time (PT) provision is perceived as less well organised than first-year programmes.

·  First-degree PT students in particular seem to find their programmes stimulating.

·  Feedback issues seem to be common to both studies.

·  PT students appear to have been more diligent regarding background reading.

·  PT students have greater difficulty in balancing academic and other commitments.

·  PT students appear to be better at coping with their workload.

·  Whilst friendship formation is less strong for PT students, they seem to have a stronger sense of ‘belonging’ to an academic grouping (but particular caution is needed on the latter point because of the reversal of the questionnaire item).

·  Library and computing resources are less highly rated by PT students.

·  PT students are more positive about financing their way through higher education.

From the demographic data, around three quarters of undergraduate respondents indicated that they had opted for part-time study because of the need to study alongside other commitments; for a minority, fee levels and flexibility were additional reasons. The source of tuition fees was roughly evenly divided between the students and their employers. However, the funding of ancillary expenses fell overwhelmingly to the students themselves, with only around 1 in 8 indicating a contribution from an employer.

The questionnaire asked respondents to indicate the best and worst aspects of their experiences to date as part-time students. Consistent with Table 2, the bulk of the comments were positive – sometimes very strongly so – although space considerations preclude their inclusion in this paper. However, and arguably of greater importance for the enhancement of the student experience, there were some strongly-expressed negative comments: