The construction of a model of qualitative evaluation to support the development of the policy and practice of raising student satisfaction in an institution in the higher education sector
Using focus groups as a research instrument in the pursuit of qualitatively based research
PETER TOWNLEY
Presented at the Higher Education Close Up Conference 2, Lancaster University, 16-18 July 2001.
ABSTRACT
The paper is based on ongoing doctoral research. It opens with a discussion of the prospects for the qualitative monitoring and management of student satisfaction in higher education. This is followed by a report on the findings of focus group research which forms part of a broader qualitative approach. The results, and the issues this form of research raises, are then discussed. The conclusion is that focus group research, as part of a wider strategy of ethnographic, qualitatively based research, can inform and illuminate higher education practices and policies with regard to the management of student satisfaction.
The importance of measuring student satisfaction
The premise being developed is that the measurement and management of student satisfaction is fundamental to the continuing success of higher education institutions in the UK. However, the evidence that management prioritises this phenomenon is much less clear, and even where satisfaction is measured, the almost exclusive use of quantitative methods produces insecure results.
Higher education institutions face an increasingly competitive market for attracting undergraduate students. This is the result of change and reform (Joseph and Joseph 1997). Competitive advantage was a concept seldom encountered in higher education prior to the 1990s, but it is now important for institutions to recognise that they are in a market (Oldfield and Baron 2000). There is a need to maintain market share in order to secure government funding, which is based on the concept of full-time equivalent students. To maintain market share, service quality has become a major strategic variable (Donaldson and Runciman 1995). It is the components of service quality which consumers evaluate to form overall judgements about the service and to determine their level of satisfaction (McDougall and Levesque 2000). Service quality and satisfaction are thus closely linked concepts and are therefore researched here in combination.
Service quality is in many respects a nebulous concept (Cronin and Taylor 1994). Oldfield and Baron (2000) argue that it comprises three dimensions. First, service processes, which concern the system of policies adopted by a service provider. Second, interpersonal factors, through which frontline employees can influence the degree of satisfaction a customer experiences. Third, physical evidence: students spend much of their time in contact with the physical elements of their educational experience and are therefore likely to be influenced by those facilities. Thus although customers cannot see a service, they can see and experience various tangible elements associated with it.
McDougall and Levesque (2000) state that there are just two overriding dimensions to service quality: the core or outcome aspects of the service (what is actually delivered) and the relational or process aspects (how it is delivered), which relate to Oldfield and Baron’s interpersonal factors. What is crucial, however, is that these factors are thought to be directly related to customer satisfaction (Zeithaml et al. 1996).
Stahl (1997), writing about health services, believes that only health units which focus on customer satisfaction will survive in an increasingly competitive market. Similarly, in higher education, given the need for greater efficiency in the use of scarce institutional resources and, at the same time, the need to improve the quality of learning for students, it has been argued that a new paradigm is required in which the service recipient role is re-focused from student to customer (Havranek & Brodwin 1998). The interesting point of this argument is that the shift in focus will create higher education institutions that are more responsive to students. Havranek and Brodwin believe that such a student-focused paradigm deserves exploration, and that it can be accomplished through smaller direct service units within a student-focused management organisation.
Most monitoring of customer satisfaction across the service sectors has been quantitative in nature, via the use of carefully constructed surveys. The qualitative paradigm has not been widely implemented as a monitoring device, although there is some evidence of increasing use being made of focus groups. These focus groups, however, are designed to provide the raw data for the construction of quantitatively based surveys. This dominant paradigm is now being increasingly challenged as some researchers become concerned by results of uncertain reliability and validity (Mendelsohn 1998; Swan & Bowers 1998; Swan et al. 1996).
There is also controversy about how far customer satisfaction theory can be applied to higher education students: some writers reject the notion of student as customer (Barrett 1996; Gould 1998; Johnson 1998), while others accept that students are client-based consumers (Brocato & Potocki 1996; Galloway 1998). There are, however, writers who have adopted the idea of students as customers of the product “higher education” (Sanders & Burton 1996; DiDomenico & Bonnici 1996; Hill 1995; Havranek & Brodwin 1998). The current study very firmly views the student as the customer of the product higher education, whilst accepting that other groups, such as employers and parents, may also be viewed as customers.
The UCE approach to measuring and managing student satisfaction
Some of the most sophisticated and detailed work on the measurement and management of service quality and student satisfaction in higher education has been performed by the Centre for Research into Quality at the University of Central England (UCE 2000).
UCE (2000) acknowledge that most higher education institutions around the world collect some type of feedback from students about their experience of higher education. The feedback usually covers the key service areas, such as the quality of teaching and learning, the learning support facilities, student services, and external aspects such as finance and the environmental infrastructure.
Feedback varies in nature, although feedback at module level is now widespread (UCE 2000). An example of a detailed feedback mechanism is found at Johns Hopkins University in the USA. The assessment of each student’s satisfaction begins with fast-feedback questionnaires completed by each student at the end of each class session, from which numerical scores of importance and satisfaction are obtained. Brocato & Potocki (1996) argue that scores collected in this way allow some insight into the value the student places on the class material and into the student’s satisfaction with the class.
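Brocato & Potocki do not publish their scoring procedure, so the following is purely an illustrative sketch of how paired importance/satisfaction ratings of this kind might be summarised for a single class session; the scale and all values are hypothetical.

```python
from statistics import mean

# Each tuple is one student's (importance, satisfaction) rating for a
# single class session, on a 1-5 scale; all values are invented.
responses = [(5, 4), (4, 4), (5, 3), (3, 4), (5, 2)]

importance = mean(i for i, _ in responses)
satisfaction = mean(s for _, s in responses)

# A large positive shortfall flags material students rate as important
# but are not yet satisfied with, suggesting remedial attention.
print(f"importance {importance:.2f}, satisfaction {satisfaction:.2f}, "
      f"shortfall {importance - satisfaction:.2f}")
```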
Feedback can also operate at programme level, considering the student experience of the whole programme of study. However, on the evidence of the present study, the link between student feedback and remedial action is missing.
There are some institutions that go further and systematically collect institution-wide feedback on all or some aspects of the student experience. The University of Central England has pioneered this form of data collection, and it has been imitated by many other institutions, including the one that is the focus of this research. UCE (2000) argue that, because of their scope, such exercises are almost always quantitative surveys, usually designed to provide management with information to initiate improvements. The implication is that large scale inevitably demands the administration of a questionnaire. It is at this point that an inconsistency enters the UCE approach: they argue for qualitative work for small-scale evaluation but questionnaires for institution-wide work, and offer no clear academic justification for this dichotomy.
The leader of the UCE team, Lee Harvey (1997), claims to have developed a student satisfaction approach which integrates student views into management’s strategic decision-making, and which thereby provides a quality enhancement tool designed to improve the quality of the student experience. Within this institutional approach there is still a commitment to qualitative investigation, but it is subsumed within an overarching quantitative strategy, set out in the form of a staged manual.
Harvey (1997) advances five main reasons why an institution would benefit from an investment in student satisfaction:
It demonstrates the institution’s commitment to its principal stakeholder – students. Student satisfaction involves taking student views seriously and acting on them.
It focuses on the student learning experience and is instrumental in enhancing student learning opportunities.
It provides a clear set of procedures for a process of continuous quality improvement.
It ensures that strategic management decisions are based on reliable and valid information about student concerns.
It provides a means of benchmarking against which progress over time can be assessed.
In this methodology, students determine the questions to be presented in a questionnaire, on the basis of feedback from qualitative focus group sessions and from comments on the previous year’s questionnaires.
Unfortunately, Harvey does not provide any rationale for the development of this strategy. Indeed, he provides no references, preferring instead to present the procedures in the form of a staged, mechanistic manual. He further claims that it is a portable methodology capable of being used by other institutions. The questionnaire, however, is 20 pages long and, according to Harvey, takes about 45 minutes to complete. Given that it is distributed by post, this is a daunting proposition for most potential respondents, as borne out by the 1998 report (UCE 1999), which confirms a response rate of only 38.1%.
The lack of a clearly defined rationale for the methodology adopted by UCE is frustrating. There is an underlying assumption, never articulated or supported, that a survey-based quantitative approach is capable of providing valid data.
The problem facing the questionnaire approach is that, unlike physical goods, services are momentary: they can be consumed only for as long as the process or activity continues. Service quality can thus vary from one situation to the next within the same organisation (Hill 1995). It is this problem of variability that the UCE approach does not adequately address.
SERVQUAL analysed
The questionnaire-based approach at institutions such as the University of Central England, and at the institution where this research is being undertaken, is based loosely on the gap approach developed by Parasuraman et al. (1985). This method set out to provide a reliable and valid measure of service quality.
Consumers are believed to form expectations of product performance characteristics prior to purchase. Subsequent purchase and usage reveal actual performance levels, which are compared with expectation levels. The judgement that results from this comparison is labelled negative disconfirmation if the product is worse than expected, positive disconfirmation if it is better than expected, and simple confirmation if it is as expected (Oliver and Desarbo 1988).
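Expressed compactly (the notation here is ours rather than Oliver and Desarbo’s), with E the pre-purchase expectation and P the perceived performance:

\[
D = P - E, \qquad
\begin{cases}
D < 0 & \text{negative disconfirmation (worse than expected)} \\
D = 0 & \text{simple confirmation (as expected)} \\
D > 0 & \text{positive disconfirmation (better than expected).}
\end{cases}
\]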
The approach acknowledges that, because of intangibility, firms find it difficult to understand how consumers perceive their services and evaluate service quality. To help develop a reliable instrument, focus group interviews with consumers and in-depth interviews with executives were therefore conducted to produce a conceptual model of service quality. This research revealed ten dimensions that consumers use in forming expectations about, and perceptions of, services, dimensions which the authors claimed transcend different types of service. These dimensions were combined into the SERVQUAL instrument. The methodology requires respondents to record separately both their expectations and their perceptions (outcomes) of the service, allowing the calculation of the gap between the two, defined as the consumer’s perception of service quality (Smith 1995a). If outcomes match expectations, customer satisfaction is predicted. If outcomes exceed expectations, customer delight may be produced. If expectations exceed outcomes, customer dissatisfaction is predicted. This overall approach is referred to as the disconfirmation model.
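By way of illustration only, the sketch below shows the arithmetic of a SERVQUAL-style gap score for a single respondent. The dimension names, the grouping of items into dimensions and the ratings are invented for the example; they do not reproduce the published instrument.

```python
from statistics import mean

# Indices of the questionnaire items belonging to each (hypothetical)
# service-quality dimension.
dimensions = {
    "tangibles": [0, 1, 2],
    "reliability": [3, 4, 5],
    "responsiveness": [6, 7, 8],
}

def gap_scores(expectations, perceptions):
    """Mean perception-minus-expectation gap per dimension for one
    respondent: negative gaps predict dissatisfaction, positive gaps
    delight, and zero gaps simple confirmation."""
    return {
        name: mean(perceptions[i] - expectations[i] for i in items)
        for name, items in dimensions.items()
    }

# Matched expectation (E) and perception (P) ratings on a 7-point
# Likert scale, one pair per item; the values are invented.
E = [7, 6, 6, 7, 7, 6, 5, 6, 6]
P = [5, 6, 6, 6, 7, 5, 6, 6, 7]
print(gap_scores(E, P))  # e.g. {'tangibles': -0.67, ...}
```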
This instrument and approach are fraught with problems (Buttle 1996; Cronin and Taylor 1994; Andersson 1992; Smith 1995a). Perceived quality cannot simply be measured in terms of disconfirmation because quality is also an attitude (Cronin and Taylor 1994). In addition, the model does not adequately consider the psychology of perception (Andersson 1992) and is therefore unable to measure adequately the extent of service quality and consumer satisfaction. Buttle (1996), who has conducted one of the most extensive reviews, concluded that the instrument is flawed in relation to both face validity and construct validity: there is concern about whether the Likert scale measures what it purports to measure (face validity) and whether the instrument assesses all of the characteristics, and only the characteristics, of the construct it purports to assess (construct validity).
The development of the “gap” methodology is thus highly problematic. There are key issues relating to reliability, discriminant validity, spurious correlations and variance restriction (Smith 1995a; Higgins 1997; Mendelsohn 1998; Robinson 1999; Swan & Bowers 1998; Bitner et al. 1990).
The case for qualitative research in measuring and managing satisfaction
Ruyter and Scholl (1998) contend that qualitative research does not have a good track record amongst marketers and academics. Its failure to provide hard data and its reliance on small samples cast doubt on its representativeness and, ultimately, its validity. They go on to suggest that qualitative research does not measure but rather provides insight:

It is flexible, small-scale and exploratory and the results obtained are concrete, real-life like and full of ideas. (Ruyter and Scholl 1998, p. 8)
The contention of this study, however, is that qualitative research, if constructed carefully, can measure at least as effectively as quantitative, survey-based studies. It can provide both insight and a measurement system, because measurement is about more than the generation of statistics; it is about the perception of attitudes and feelings and their impact on the process of satisfaction.
Interpretative research has been used sparingly in the field of consumer research during the last 15 years (Szmigin and Foxall 2000). The main reason is a concern amongst researchers that interpretative work may not be deemed “scientific” (Ruyter and Scholl 1998).