TALKING ABOUT STUDENT RATINGS FEEDBACK WITH PEERS

Angela R. Penny

Research Student

School of Education

University of Durham

Paper presented at the Scholarship of Academic and Staff Development: Research, Evaluation and Changing Practice

Joint Conference of the Staff and Educational Development Association and

Society for Research in Higher Education

Wills Hall, University of Bristol, UK

9 – 11 April, 2003

Abstract

With the reduction in financial resources available to higher education, one-on-one consultation over student ratings with an ‘expert’, although very important, is no longer practical. This paper describes teachers’ perceptions of the efficacy of using peer support groups as a consultation model. The study investigated the effect of teachers examining their student ratings feedback in small groups where there is mutual support to learn from the feedback to improve teaching effectiveness. It suggests that the use of peer support groups is an effective strategy for supporting teachers in learning from student feedback for teaching improvement.

INTRODUCTION

The use of student ratings feedback as an indicator of teaching quality is now a well-established feature in the higher education scene. It is not the only indicator of teaching quality but has become the most widely used method for evaluating university teachers (e.g. Seldin, 1999).

The literature commonly cites student feedback on teaching as having four purposes (Marsh & Dunkin, 1992):

  • To provide diagnostic feedback to teachers that will be used for teaching improvement
  • As one measure of teaching quality for promotion and tenure decisions
  • To provide information for course selection by students
  • As an outcome measure for research on teaching

In more recent times a fifth purpose has been discerned: to provide information to satisfy quality assurance requirements and to maintain funding. For example, in the UK the recent White Paper, “The Future of Higher Education” (DfES, January 2003), proposed that student feedback be given an even higher priority in informing judgements about quality.

The first two purposes are often regarded as the primary purposes of student ratings, that is, formative and summative evaluation, or improvement and accountability. The purposes co-exist but do get in each other’s way from time to time, as data intended for teaching improvement might be used by administrators for accountability (Johnson & Ryan, 2000). It is this increasing trend that worries many university teachers.

Teachers do support the collection of student feedback (McKeachie, 1997; Baxter, 1991). Their discontent is that the principal motive behind the collection of student feedback might not be teaching improvement, as commonly expressed in the rhetoric about improving the quality of student learning; rather, the emphasis seems to be on the use of the data as a politically expedient measure in quality monitoring. This criticism can be understood in the context that too few teaching evaluation schemes are situated within a comprehensive evaluation system that provides information on performance together with sufficient support for teachers to actually alter their teaching behaviours (Ory, 2000).

Utility of student feedback ratings

Despite the arguments and counter-arguments on the validity and reliability of student ratings data, the predominant response in the literature is that ratings data provide useful information to both teachers and university administrators. The problem is that administrators expect to “see” results from the mere provision of feedback to faculty. By itself, student feedback is not powerful enough to bring about the level of improvement desired.

The influential meta-analysis by Cohen (1980) showed that providing teachers with student ratings did have a positive effect on teaching effectiveness, but only to a modest degree (mean effect size of .20). By contrast, teaching improved substantially (mean effect size of .64) when student feedback was used in conjunction with individual consultation.

An update of Cohen’s synthesis by Menges and Brinko (1986) reported even stronger effects for studies that combined student ratings with consultation (mean effect size of 1.10). On Menges and Brinko’s results, the indication is that combining student feedback with consultation (consultative feedback) could mean an improvement level of 86 percent for teachers receiving student feedback combined with consultation, compared with 50 percent for teachers who do not receive consultative feedback.
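These percentile figures can be recovered under the conventional normal-curve reading of a standardised effect size d (Cohen’s U3), on which the average member of the treated group falls at the Φ(d) percentile of the comparison group, where Φ is the standard normal cumulative distribution function:

Φ(0.20) ≈ 0.58,  Φ(0.64) ≈ 0.74,  Φ(1.10) ≈ 0.86,

so the feedback-only effect reported by Cohen corresponds to roughly the 58th percentile, feedback with individual consultation to roughly the 74th percentile, and the stronger effect reported by Menges and Brinko to roughly the 86th percentile of a comparison group that, by definition, averages at the 50th percentile.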

Teaching consultation

Teaching consultation, also referred to simply as consultation or as instructional consultation, is now widely recognised as a useful strategy to improve teaching. Menges (1997: v) described it in this way: “no other service provided by teaching centres has greater potential for producing deep and enduring effects on academics and teaching.” Marincovich (1999) supports this view, stressing that a teaching consultation service is one of the most important steps universities can take to increase the effectiveness of their teaching evaluation system.

Consultation, as a strategy of the teaching improvement process, is a structured, collaborative problem-solving process that uses information from teaching evaluations as a basis for discussion about improving teaching and student learning.

By tradition, consultation over student feedback is offered as a one-on-one interaction with a faculty developer. However, individual, one-on-one consultation is not only time-consuming but also very costly for under-financed faculty development units. Many universities simply do not have the resources to deal with a high demand for consulting services, even against the background that many university teachers have had no teacher training and might need assistance to improve their teaching.

University teachers tend to identify with their disciplines and even more with their specialisations. From this perspective, it is accepted that efforts to improve teaching through activities such as consultation should be located within academic units, where teachers can work with colleagues from their disciplinary perspectives (Jenkins, 1996; Boud, 1999; Shulman, 1993).

METHOD

This paper discusses the qualitative data on teachers’ perceptions of the efficacy of consulting over student feedback in peer support groups. The data are drawn from in-progress PhD research on the effect of combining student ratings feedback with peer support groups as a model of consultation. For this component of the study, data were collected through participant observation, informal conversations, and structured interviews.

Sample

The total sample for the study comprised 79 teacher-volunteers at two universities in Jamaica. Following the collection of mid-term ratings, participants remained in their natural work settings, that is, their academic units. These units, or clusters, were then randomly allocated to either the intervention or the control group, so that all teachers in the same cluster received the same treatment. The 39 faculty members in the control group received neither their mid-term ratings results nor the intervention materials until the end of the study. Forty participants (20 female and 20 male) received the intervention.

Eleven clusters, representing peer support groups, were formed to provide mutual support to group members in improving the two or three areas of teaching for which they had received low ratings and which they had targeted for improvement. The size of the peer groups depended on the number of volunteers from a particular unit and ranged from 2 to 7 members. Of the 11 groups, only 6, involving 24 participants (60%), actually met with colleagues in support groups, while 16 participants (40%) did not meet at all. There were no clear differences in the interactions among peer partners in the different-sized groups.

The Intervention

The intervention to be evaluated was the provision of mid-term ratings and the examination, interpretation, and development of improvement strategies through discussion with colleagues in a structured group setting as a model of consultative feedback.

A results packet was returned to participants in the intervention group between week 8 and week 9 of the 16-week semester. The packet included the ratings results, a double-sided interpretation guide, and nine double-sided sheets of teaching tips. The ratings report provided descriptive statistics for specific items and rating factors, and also displayed bar graphs comparing teacher self-ratings with the ratings received from students.

The intervention itself lasted only four weeks. Typically, peer groups met only once during this period for between 40 and 60 minutes. Most meetings were held during the lunch hour or between classes.

Interviews

To gain an understanding of the factors that might have prevented full participation in peer support group sessions, and to ascertain faculty perceptions of the worth of the intervention and its materials, structured interviews were conducted with 35 of the 40 academics. This involved all 16 participants who did not meet in the peer support groups and 19 of the 24 who met with their peer partners. Written notes were taken during interviews.

Data Analysis

A content analysis was conducted on the interview and observation data. Themes were identified and checked for patterns in the responses between groups and between the two universities.

RESULTS

Theme: Talking with Colleagues

Overall, participants seemed comfortable with the idea of engaging in dialogue with their peers about their student ratings feedback. For the most part, dialogue in the peer groups centred more on students than on teaching. Although it may have been implied, at no time was explicit reference made to improving the quality of student learning.

It was evident that a lack of group cohesiveness in one department influenced participation; this was the only peer group whose members could not seem to find the time to meet. As another indicator, the typical response to the question on willingness to talk with colleagues was “it depends on who is in the group”.

Two main issues were explored in the interviews under this theme: (a) identification of the factors that affected full participation in peer meetings, and (b) the effect, if any, of the group process on individual teachers.

Factors affecting participation

When asked about the factors that affected their full participation in peer group meetings, all teachers without exception, both those who had not met with their peer partners at all and those who had met only once, identified time. This might be understood against the background that, unlike in other accounts in the literature, these teachers received neither release time from teaching duties nor incentives to participate in the study. There is therefore less artificiality about the context of this study.

The view that time was a constraining factor was expressed through references to the lack of common time for partners to meet and to the lack of unpressured time due to heavy workloads.

Lack of common time. Even though teachers were in the same building or shared the same office space, it proved rather difficult for them to meet because they had such different work schedules:

With different class schedules it is not easy to find a time for group members to meet during the day. (UA24)

There’s just no common time for teachers to meet. (UA19)

Lack of motivation did not appear to be a problem, but the events of the semester and the busyness typically associated with first semesters probably magnified the sense of not having enough time:

I really wanted to do this but as you can see this semester has not been a good one. (UB4)

Classes were interrupted for approximately three weeks at the beginning of the semester due to torrential rain associated with two tropical storms and the threat of violence from a planned national election.

Lack of unpressured time. Academics in Jamaica might not be as actively engaged in the research function as their counterparts in the United Kingdom, for example, but these academics are equally pressured by the demands associated with their teaching and service functions. Some academics teach as many as four different courses per semester in addition to institutional and community service obligations. It is not uncommon for a teacher’s workday to extend from 8 am to 9 pm, Monday to Friday, and 9 am to 5 pm on Saturdays.

Many teachers expressed feeling overwhelmed with academic and administrative duties that left them with little time to share with colleagues:

This place is nothing but work. …You can’t even find time for yourself … it’s just too much … M. had so many meetings we were not able to meet again as planned. (UA9)

I just couldn’t find the time as hoped with having to prepare documents for____ [a quality assurance exercise]. It’s just crazy around here. (UB3)

Effect of peer groups

Nineteen teachers contributed responses for this section. Teachers may not have received much assistance to improve their low-rated areas because of the tendency to focus on the ratings in general and on the written comments made by students. The novelty of the approach may have played a role here. The main benefits came in the form of opportunities to reflect and motivation to make changes to practice.

Reflection. Motivation to “look more closely” was the predominant comment on the benefits received from talking with colleagues in a group setting. Several teachers mentioned becoming ‘aware’, which led them to pay more attention to the ratings:

Yes, the meeting made me aware of more things to take into consideration, and motivated me to act on it [the ratings]. (UA2)

In a sense it prompted me to look closely at my teaching methods. (UA12)

It [meeting with peers] propelled me to try new methods … and realise the significance of ratings. (UA4)

The results somehow caused me to reflect on my performance. It may just be that teaching was taken for granted before. (UB1)

Being enlightened was also mentioned as a positive outcome. Many teachers found the interaction rather revealing as they had come to realise that certain views from students were not unique to them. Teachers were pleasantly surprised to learn that students were basically saying the same things about their colleagues:

It was rather interesting to see how the responses are common across lecturers. I have come to realise that there could be some objectivity in ratings …. I am now more appreciative of the results. (UA3)

When one teacher, regarded as outstanding, shared remarks contained in the written comments, one peer partner could be heard saying, “I never knew they say that about you too”.

A focal point in the discussion among faculty in the different groups was their tendency to rate their courses as more difficult, or as having a heavier workload, than students did. In this case, the feedback differed markedly from students’ frequent protests in classes. On this basis many teachers questioned whether students really took the exercise seriously or, for that matter, understood what they were doing.

It was also very informative for teachers to hear of the experiences and ideas adopted by their colleagues. One junior teacher noted:

As this is my first teaching assignment, interacting with the more experienced teachers and those with large classes proved rather interesting and enlightening. At least I’ll have some ideas when my time comes. (UB5)

Another benefit of the peer groups for these teachers was the opportunity to examine the ratings in some detail. Although ratings were returned one week before the first group meeting was scheduled, it was observed that many teachers used the meeting time to examine the ratings, making comments as they perused them.

Changes to Teaching. The interaction among teachers in the peer groups moved many of them beyond looking closely at the ratings to deciding to take action to change certain aspects of their teaching. Many teachers reported that they took action on the feedback data that they probably would not have taken had they been working alone.

For example, when one teacher shared students’ complaints about being called “dunce” with colleagues, the peer partners exchanged ideas on how they had dealt with similar circumstances. In the interview the teacher delightedly indicated that the earlier approach had been revised and that interpersonal relationships in the class had improved. It should be noted, however, that several teachers who did not meet with their peer partners also indicated that they made changes to their teaching on the basis of the feedback received.

The value of the group interaction could also be assessed from responses on which elements of the intervention were most valuable. A significant number of teachers identified “meeting with colleagues” as the most helpful aspect and the strength of the intervention. They appreciated the opportunity to interact with their peers because, in the words of one teacher, “… the opportunity to reflect is lacking otherwise”. At the same time, teachers recognised the limited intervention time available for peer groups to meet as a shortcoming of the study.

The data for this theme suggest that teachers are willing to collaborate with their colleagues for teaching improvement but are constrained by the conditions of their work.

Theme: Conditions for use of peer support groups