
Running head: SPEECH COMMUNICATION AND THEATRE EDUCATION

Speech Communication and Theatre Education BSE

Utilization-Focused Evaluation

Jennifer Emanuel, Todd Kitchen, Kari Monsees, Donald Scott, Ruthann Williams

University of Central Missouri


Introduction

High-stakes testing, national accreditation, and program effectiveness are issues that challenge postsecondary faculty and program administrators to evaluate their programs both internally and externally. Program evaluation provides colleges and universities with a viable method of identifying and implementing programs and courses that meet diverse student needs while aligning with national and state standards and accrediting body criteria. Through accreditation, postsecondary institutions are able to offer programs that meet recognized standards and confer common degrees. Therefore, the purpose of this utilization-focused evaluation (UFE) (Patton, 1997) was to investigate the Bachelor of Science in Education (BSE) degree program in Speech Communication and Theatre Education at the University of Central Missouri (UCM) regarding three main topics: (a) National Council for Accreditation of Teacher Education (NCATE) reaccreditation preparedness, (b) state teacher licensure examinations, and (c) alumni and student perceptions of program strengths and weaknesses.

The program being evaluated by this UFE is currently accredited by NCATE. NCATE was founded in 1954 as a national accrediting body to help establish high-quality preparation programs for teachers, specialists, and administrators. Through the professional accreditation of schools, colleges, and departments of education, NCATE works to make a difference in the quality of teaching, teachers, school specialists, and administrators (NCATE, 2007). Accreditation of schools of education provides assurance that teacher preparation programs meet national standards and have undergone impartial external review by professionals, policymakers, and representatives of the public. Accredited schools also benefit from the development of competent teachers, highly sought after and well prepared graduates, and assurance that the school’s educator preparation program has met national standards (NCATE).

For new Missouri teachers, state licensure is contingent on achieving passing scores on the Speech Communication PRAXIS examination, a state requirement enforced under the auspices of the Missouri Department of Elementary and Secondary Education (MODESE). The PRAXIS is designed to measure the preparedness of new teachers to teach speech communication and theatre in the classroom. However, because of variations in undergraduate teacher education programs, not all of the material covered in the examination is familiar to every examinee (Educational Testing Service [ETS], 2007). Consequently, one focus of this UFE was to evaluate how well the program under investigation prepares future teachers to succeed on the Speech Communication PRAXIS examination, based on the criteria established by the intended users (Patton, 1997).

According to Patton (1997), “Evaluation stakeholders are people who have a stake—a vested interest—in evaluation findings” (p. 41). For the purposes of this evaluation, the stakeholders identified as having a vested interest in the findings included program staff and faculty, program administrators, and Speech Communication and Theatre Education BSE program alumni. Over the last 10 to 15 years, BSE program alumni have achieved a 100% pass rate on the PRAXIS examination (Dr. Julie Pratt, UCM faculty, personal communication, March 7, 2007). Yet how well prepared those students were to take the examination remains unknown. With this in mind, the alumni were determined to be an appropriate group to provide the feedback data sought by this evaluation. An additional group, current students, was also selected to participate in the evaluation; a focus group interview was conducted to garner experiential knowledge from the students’ current participation in the program.

Consistent with Patton (1997), this program evaluation provided a means of “engaging program staff, managers, funders, and other intended users in examining how their beliefs about program effectiveness may be based on selective perception, predisposition, prejudice, rose-colored glasses, unconfirmed assertions, or simple misinformation” (p. 29). Additionally, the utilization-focused evaluation was identified as the best method of producing the needed data while engendering commitment to the evaluation process among the intended users. Data collection methods included an online survey administered to Speech Communication and Theatre Education alumni and a focus group interview with current students. By conducting the evaluation, the investigators and stakeholders sought to answer the following three questions:

1.  How well is the Speech Communication and Theatre Education BSE program, based on national standards, preparing candidates for careers in teaching?

2.  How well did the program’s content prepare students for state teacher licensure examinations (PRAXIS)?

3.  What Speech Communication and Theatre Education BSE program changes are suggested to improve program content?

Consistent with Patton (1997), this evaluation endeavored to produce findings that provide a basis for “rendering judgments, facilitating improvements, and/or generating knowledge” (p. 65). Discovering the merit and worth of the program as it is currently structured, collecting formative data to support program improvement, and generating lessons learned by stakeholders provide the essential knowledge intended users need as they attempt to make program improvements and/or program decisions (Patton). To this end, the following sections provide a review of related literature, evaluation methods, data collection and analysis, findings, and discussion and recommendations.


Literature Review

According to Patton (1997), corporations, philanthropic foundations, and nonprofit agencies have turned to evaluators for help in enhancing their organizational effectiveness. In the opinion of Lefevre (1959), a true profession involves comprehensive study and some measure of successful attainment of standards designed to ensure the quality and integrity of a program. Thus, the purpose of this utilization-focused evaluation (Patton, 1997) was to investigate the Bachelor of Science in Education (BSE) degree program in Speech Communication and Theatre Education at the University of Central Missouri (UCM) regarding three main topics: (a) National Council for Accreditation of Teacher Education (NCATE) reaccreditation preparedness, (b) state teacher licensure examinations, and (c) alumni perceptions of program strengths and weaknesses. This literature review provides a brief examination of the bodies of literature and research that address: (a) the purposes and strengths of the utilization-focused evaluation, (b) the history of theatre and speech programs as an area of study, and (c) the need for an established set of accrediting standards to govern the UCM Speech Communication and Theatre Education program.

Program Evaluations

Program evaluations are a frequently occurring process at educational institutions (DeValenzuela, Copeland, & Blalock, 2005). According to Patton (1997), “program evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the programs, improve program effectiveness, and/or inform decisions about future programming” (p. 23). Additionally, Patton describes the purpose of an evaluation as determining the worth, merit, or value of a program. Patton further describes useful evaluations as those supporting action, thus highlighting the distinguishing point between a utilization-focused evaluation and a standard evaluation or review—the generation of knowledge or truth versus useful information that supports action. Moreover, DeValenzuela, Copeland, and Blalock assert:

Colleges and universities are increasingly being required to demonstrate their efficacy on the basis of standardized student scores and through more traditional measures of program evaluation. Such program evaluations can be costly in terms of time and resources. Therefore, it is important that the benefits of program evaluation outweigh the costs. It is vital that the information produced as a result of program evaluation be used by stakeholders to improve program implementation and produce positive outcomes. (p. 2227)

In the opinion of King and Pechman (1984), effective evaluations do not necessarily prompt the implementation of any recommendations, nor do they guarantee that immediate decisions will be made based on the information revealed during the evaluation process. Furthermore, King and Pechman assert that evaluation results influence users in indirect or gradual ways. Patton (1997) counters this view by encouraging the use of utilization-focused evaluations as a way to promote use toward program improvement.

Evaluation Strengths

According to Eurich, Pace, and Ziegfeld (1942), evaluations are undertaken for a variety of reasons; in educational settings, however, their strengths are readily apparent. Eurich, Pace, and Ziegfeld describe evaluations as: (a) a method for checking the effectiveness of schools in terms of behavior changes and/or student achievement, (b) a catalyst for planning future educational programs and procedures, and (c) a tool for accrediting institutions of learning. They further assert that while most educational assessments focus on testing individual skills gained in courses, program evaluations seek a more comprehensive examination, and the overall effectiveness of an educational experience may best be gauged after students leave the school.

To accomplish effective evaluations for improvement, Patton (1997) recommends that evaluations employ methods of analysis appropriate to the question; support answers with legitimate evidence; list assumptions, procedures, and modes of analysis; and eliminate competing evidence. According to Patton, the strengths of a utilization-focused evaluation include:

1.  The evaluation is designed to support, reinforce, and enhance attainment of desired program outcomes.

2.  Evaluation data collection and use are integrated into program delivery and management. Rather than being separate from and independent of program processes, the evaluation is an integral part of those processes.

3.  Program staff and participants know what is being evaluated and know the criteria for judging success.

4.  Feedback of evaluation findings is used to increase individual participant goal attainment as well as overall goal attainment.

5.  There are no, or only incidental, add-on costs for data collection because data collection is part of program delivery and implementation.

6.  Evaluation data collection, feedback, and use are part of the program model; that is, evaluation is a component of intervention.

History of Theater and Speech Programs

In the opinion of Hobgood (1964), university theatre programs are unique, and American colleges and universities have been greatly impacted by such programs: “Theatre in U.S. higher education has known two major periods of growth. In the decades following World War I and World War II, when the nation’s colleges and universities were expanding, …” (p. 145). By the 1920s, campus theatre programs were still very limited and informal. Conversely, when colleges and universities expanded their instruction, there were sharp increases in theatre instruction across the country. Hobgood further documented that by the 1960s, over 300 colleges and universities had the equivalent of an undergraduate major in theatre. Still, fewer than half of the colleges and universities offering the equivalent of an undergraduate major awarded degrees in theatre as a field of study. Hobgood noted, “Theatre programs are usually administered in association with the program of another field or in support of such a field, e.g. Speech” (p. 144).

The Need for Accreditation

According to Kriley, Pritner, and Roberts (1977), the necessity for a set of standards for degree programs emerges from three needs or concerns: a lack of common understanding of commonly awarded degrees, a need for a clearer representation of existing programs, and increased accountability to students, colleagues, and administrators. Moreover, Kriley, Pritner, and Roberts believed the greatest need for colleges and universities was to offer programs that could be certified as meeting minimum standards.

Research Methods

This study used a form of research known as utilization-focused evaluation to evaluate student experiences within the University of Central Missouri’s BSE certification program for speech communication and theatre teachers. Students within this degree program take coursework from each department but select one as an area of emphasis. This study utilized both qualitative and quantitative research designs, with a focus group of current students and a survey of graduates used to gather the required data. A primary component of utilization-focused evaluation is an intimate relationship between the research team and the stakeholders within the organization for which the research is conducted (Patton, 1997). Consequently, the speech communication and theatre education program faculty who teach classes within the BSE program were included in the evaluation process. The team of stakeholders expressed interest in the perspectives of both current and former students of the program to facilitate program improvement. The evaluation team pressed the faculty stakeholders to reveal what they would accept as evidence of the need for recommending improvements (Patton), and together they chose surveys as the primary data-gathering instrument.

Sample Selection

The UFE consisted of two samples: current students in the program and graduates of the BSE degree program. Both were purposeful samples in that they met established criteria for the study (Merriam, 1998). The faculty members of the BSE program gathered contact information on possible participants and provided the evaluation team with a list of 25 program alumni from the previous 15 years, giving thoughtful consideration to generating the greatest amount of information for the program evaluation (Patton, 1997). However, the alumni sample was limited by the fact that only those with available contact information could be selected for participation. The group of current students was a convenience sample (Krueger & Casey, 2000) consisting of those who chose to attend a monthly luncheon with a BSE faculty advisor. Eleven program students who attended a regularly scheduled meeting agreed to provide feedback as part of the focus group.

Instrumentation

After a preliminary meeting to establish the general outline of the evaluation, a second meeting involving the evaluation team and stakeholders was held to determine the types of information to be gathered from subjects, as suggested by Patton (1997). Evaluation team members used the information garnered from this meeting to develop a draft survey instrument for program graduates. This draft was emailed to stakeholders prior to a follow-up meeting designed to further refine the instrument. After the subsequent meeting, final revisions were made and approved by all stakeholders via email. The final product was a 44-question survey containing 18 demographic questions, 22 professional standards questions, two questions detailing the content of specific courses, and two open-ended questions (Appendix A).

With the input of BSE faculty members, a modified version of the survey was generated for the focus group members, along with open-ended questions to prompt group discussion. The focus group survey and discussion questions were circulated to stakeholders via email to allow input as needed (Appendix B).

Data Collection

An invitation to participate in the online survey, coauthored by BSE stakeholders, was sent to the available sample of program alumni. The survey was available for a two-week period, and a follow-up email request was sent to those who had not responded after the first week. Once the survey was closed, a data report was generated through the survey software.