
AUTHOR: / KENNETH E. VOGLER
TITLE: / THE IMPACT OF HIGH-STAKES, STATE-MANDATED STUDENT PERFORMANCE ASSESSMENT ON TEACHERS' INSTRUCTIONAL PRACTICES
SOURCE: / Education (Chula Vista, Calif.) 123 no1 39-55 Fall 2002


ABSTRACT
The purpose of this study was to determine if the public release of student results on high-stakes, state-mandated performance assessments influences instructional practices, and if so, in what manner. The research focused on changes in teachers' instructional practices, and on factors that may have influenced such changes, since the public release of high-stakes, state-mandated student performance assessment scores. The data for this study were obtained from a 54-question survey instrument given to a stratified random sample of teachers teaching at least one section of 10th grade English, mathematics, or science in an academic public high school within Massachusetts. Two hundred fifty-seven (257) teachers, or 62% of the total sample, completed the survey instrument. An analysis of the data found that teachers are making changes in their instructional practices. The data show notable increases in the use of open-response questions, creative/critical thinking questions, problem-solving activities, rubrics or scoring guides, writing assignments, and inquiry/investigation. Teachers also have decreased their use of multiple-choice and true-false questions, textbook-based assignments, and lecturing. In addition, the data show that teachers felt the changes made in their instructional practices were most influenced by an "interest in helping my students attain MCAS assessment scores that will allow them to graduate high school" and by an "interest in helping my school improve student (MCAS) assessment scores." It may be interpreted from the data that the use of state-mandated student performance assessments, and the high stakes attached to this type of testing program, contributed to changes in teachers' instructional practices. These changes have included increases in the use of instructional practices deemed "best practices" by educational researchers.
There has been much debate about the usefulness of state assessment as an instrument for reforming educational practice. Opponents of measurement-driven reform assert that high-stakes assessment creates negative side effects such as dumbing down the curriculum, de-skilling teachers, pushing students out of school, and generally inciting fear and anxiety among both students and educators (Darling-Hammond & Wise, 1985; Gilman & Reynolds, 1991; Jones & Whitford, 1997; Madaus, 1988a, 1988b; Shepard, 1989). According to these opponents, such side effects outweigh any possible benefits of measurement-driven reform.
Proponents of measurement-driven reform have argued that "if you test it, they will teach it" and that assessment can guide the educational system to be more productive and effective (Popham, 1987). These proponents add that the recent development of performance-based assessment offers a technology for assessing higher-order skills and deeper understanding of content. This development improved upon the early, and often maligned, minimum competency tests that used only multiple-choice items (Baron & Wolf, 1996; Bracey, 1987a, 1987b; Rothman, 1995).
Past research has emphasized the negative consequences of high stakes attached to test results. In the 1980s and early 1990s, such high stakes created pressures that encouraged teachers to place unprecedented emphasis on drill-based instruction, narrowing of content, and the regurgitation of facts (Corbett & Wilson, 1991; Smith, 1991). In addition, substantial teaching time was lost to test preparation activities -- i.e., learning the test formats rather than additional content. However, the tests studied by educational researchers were multiple-choice, basic-skills-oriented tests, not the newer performance-based assessments (Firestone, Mayrowetz & Fairman, 1998).
Performance-based assessments test student knowledge differently than multiple-choice, basic-skills tests do. Multiple-choice tests require students only to fill in an oval; performance-based assessments require students to demonstrate their knowledge by constructing a response -- i.e., writing an essay or showing how they solved a mathematical problem. The latter form of testing assesses higher-level thinking skills, while the former tests memorization (Rothman, 1995).
Although the test format has changed, from all multiple-choice questions to some or all constructed-response questions, the use of stakes as a way to exert significant influence on classroom learning and instructional practices has remained constant. These stakes have included incentives such as cash awards to schools or individual teachers who demonstrate high levels of student performance. They also have included consequences for schools, individual teachers, and students; these consequences include public reporting of test results, prevention of grade-to-grade promotion and high school graduation, and possible takeover of schools that continue to demonstrate low levels of student performance. These incentives and consequences are all based on one thing: the test score. But do these test results influence instructional practices? If so, in what manner? This study set out to answer these questions.

PURPOSE OF THE STUDY
The purpose of this study was to determine if the public release of student results on high-stakes, state-mandated performance assessments influences instructional practices, and if so, in what manner. This study was designed to answer the following questions:
Central question:
In what manner do the student results from high-stakes, state-mandated performance assessments influence instructional practices?
Additional guiding questions include the following:
1. Have teachers changed their instructional practices since the public release of high-stakes, state-mandated student performance assessment scores?
2. In what way(s) have teachers changed their instructional practices?
3. What factors have influenced such changes?
4. Are these changes consistent with current research on "best" practices?

SAMPLE
On June 18, 1993, the Governor of Massachusetts signed the Massachusetts Education Reform Act into law. The law mandated fundamental changes in the state's public education system. Among the areas affected by the legislation were school finance, school governance, teacher tenure and certification, and curriculum and assessment.
In the area of curriculum and assessment, new curriculum frameworks and learning standards were created in the academic areas of English language arts, mathematics, science, history and social science, and world languages. A high-stakes, state-mandated performance assessment called the Massachusetts Comprehensive Assessment System (MCAS) was designed to evaluate progress toward meeting the state's new learning standards in the curriculum frameworks.
Beginning in 1998, every student in grades 4, 8, and 10 was required to take the MCAS examination. In November 1998, the Massachusetts Department of Education reported the results of the initial MCAS examination to the public.
Although it could be argued that all Massachusetts teachers are affected to some extent by the MCAS, two important factors shaped the methodology used for sample selection in this study. The first factor was the academic subjects tested in the initial MCAS examination: the scores reported to the public were only in the academic areas of English, mathematics, and science. The second factor was the high stakes attached to the scores earned by students taking the MCAS: beginning with the class of 2003, students must pass the 10th grade MCAS examination to graduate from high school. Taking these two factors into account, the researcher chose to sample 10th grade English, mathematics, and science teachers teaching in strictly academic high schools.
Narrowing the pool of possible survey participants to only those teachers teaching at least one section of 10th grade English, mathematics, or science in a strictly academic high school still left thousands of potential participants statewide. The researcher chose to further define the scope of this study by geographic location. The Massachusetts Department of Education divides the state into six geographic regions: the Greater Boston region, the Northeast region, the Central Massachusetts region, the Southeast region, the Greater Springfield region, and the Northwest region. The researcher chose to sample 10th grade English, mathematics, and science teachers teaching in strictly academic high schools located within the Northeast region of Massachusetts.
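
From this pool, the study drew a stratified random sample (described in the next section). For concreteness, the sketch below shows one way such a stratified draw could be carried out, stratifying a teacher roster by subject taught. The roster, field names, and sampling fraction are hypothetical; the article does not describe the mechanics of its selection procedure.

    import random

    # Hypothetical roster of eligible teachers in the Northeast region.
    # One record per teacher teaching at least one section of 10th grade
    # English, mathematics, or science in a strictly academic high school.
    roster = [
        {"teacher_id": 1, "subject": "English"},
        {"teacher_id": 2, "subject": "mathematics"},
        {"teacher_id": 3, "subject": "science"},
        # ... remaining eligible teachers
    ]

    def stratified_sample(records, strata_key, fraction, seed=0):
        """Draw the same sampling fraction from every stratum so the
        sample mirrors the population's distribution across strata."""
        rng = random.Random(seed)
        strata = {}
        for rec in records:
            strata.setdefault(rec[strata_key], []).append(rec)
        sample = []
        for group in strata.values():
            k = max(1, round(len(group) * fraction))
            sample.extend(rng.sample(group, k))
        return sample

    recipients = stratified_sample(roster, "subject", fraction=0.5)

Because each stratum is sampled at the same rate, the subject distribution of the recipients should match that of the eligible population, which is what Table 1 later checks for the respondents.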

SURVEY INSTRUMENT
The information used to answer the central question and the four additional guiding questions of this research study was obtained from a survey instrument (see Appendix A).
The survey instrument used for this study has three parts and an "Additional Comments" section. Part I of the survey instrument asked respondents to use a scale ranging from "large decrease" to "large increase" to indicate the extent to which they had decreased or increased their use of 20 instructional strategies, seven teaching techniques, and 13 instructional materials and tools since the public release of the Massachusetts Comprehensive Assessment System (MCAS) scores. In Part II of the survey instrument, respondents were asked to use a scale ranging from "strongly disagree" to "strongly agree" to indicate the extent to which ten factors had influenced changes in their instructional practices since the public release of MCAS assessment scores. Part III of the survey instrument asked respondents for demographic information such as gender, teaching experience, highest level of education obtained, and teaching assignment. The "Additional Comments" section of the survey instrument solicited open-ended comments from the respondents.
Four hundred thirteen (413) surveys were distributed to a stratified random sample of teachers who were teaching at least one section of 10th grade English, mathematics, or science in a strictly academic public high school located within the Northeast region of Massachusetts. Two hundred fifty-seven (257) surveys, or 62% of those distributed, were returned.
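
As a quick arithmetic check, the reported 62% return rate follows directly from these counts:

    # 413 surveys distributed, 257 returned (counts from the text above).
    distributed, returned = 413, 257
    print(f"Response rate: {returned / distributed:.1%}")  # 62.2%, reported as 62%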
The data obtained from the survey instrument were analyzed using descriptive and explanatory techniques. Descriptive techniques included frequency tables for Parts I, II, and III, and response means for the data provided by Parts I and II of the survey instrument. Explanatory techniques included cross tabulations between Part III responses and item responses to Parts I and II of the survey instrument. Finally, comments made by respondents in the "Additional Comments" section of the survey instrument were reviewed and grouped according to content.
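
A minimal sketch of these two kinds of analysis follows, using pandas on hypothetical coded responses. The 1-5 coding of the Part I scale and the column names are assumptions for illustration; the article does not publish its codebook or analysis software.

    import pandas as pd

    # Hypothetical coded data: Part I items on a 1-5 scale
    # (1 = large decrease ... 5 = large increase) plus one Part III demographic.
    responses = pd.DataFrame({
        "open_response_questions": [5, 4, 4, 3, 5, 4],
        "lecturing":               [2, 2, 3, 1, 2, 3],
        "subject": ["English", "mathematics", "science",
                    "English", "mathematics", "science"],
    })

    # Descriptive techniques: frequency tables and response means (Parts I-II).
    freq = responses["open_response_questions"].value_counts().sort_index()
    means = responses[["open_response_questions", "lecturing"]].mean()

    # Explanatory technique: cross tabulation of a Part III demographic
    # (teaching assignment) against responses to a Part I item.
    xtab = pd.crosstab(responses["subject"], responses["open_response_questions"])

    print(freq, means, xtab, sep="\n\n")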

FINDINGS AND DISCUSSION
The findings and discussion of the data will begin with Part III of the survey instrument, the demographic information. Table 1 is a comparison of the entire survey sample and survey response sample in terms of teaching assignment. Table 2 is a comparison of the survey response sample and the Massachusetts teacher population in terms of teaching assignment and gender.
As can be seen in Table 1, the percentage of 10th grade English, mathematics, and science teachers who responded to the survey is similar to the percentage of English, mathematics, and science teachers who received the survey. The sample is thus a representative distribution of all 10th grade English, mathematics, and science teachers teaching in strictly academic public high schools within the Northeast region of Massachusetts.
Table 2 shows the comparison between gender and teaching assignment of the respondents and data obtained from the 1998 Grade 10 Teacher Questionnaire administered to all Massachusetts public school teachers of science and mathematics during the MCAS examination. The response sample is representative of all 10th grade mathematics and science teachers in Massachusetts public schools. Unfortunately, because no comparison data were available, only frequencies of the response sample could be reported for English teachers.

In what way(s) have teachers changed their instructional practices?
Table 3 shows notable increases in many of the instructional practices or tools listed in the survey instrument.
It should be noted that a large number of respondents chose "not applicable" for some of the items in this section of the survey instrument. Items for which more than fifty responses are reported as "not applicable" are noted at the end of Table 3.
According to the information provided in Table 3, the largest percentages of reported increases in teachers' instructional practices were for open-response questions (81.6%), creative/critical thinking questions (68%), problem-solving activities (63.7%), use of rubrics or scoring guides (61.3%), writing assignments (59.8%), and inquiry/investigation (56.3%). These six items are not only instructional practices advocated by the Massachusetts Department of Education (see Common Chapters of the Massachusetts Curriculum Framework, 1996), but also the same types of questions or activities tested and evaluated by the MCAS.
An examination of Table 4 shows that teachers have decreased their use of four of the 40 instructional practices listed in the survey instrument. Decreases were reported in the use of lecturing (37.8%), true-false questions (29.1%), multiple-choice questions (19.3%), and textbook-based assignments (13.3%). These four items do not develop students' higher-level thinking skills, nor are they instructional practices advocated by the Massachusetts Department of Education (see Common Chapters of the Massachusetts Curriculum Framework, 1996).
The results of this study indicate that teachers have changed their instructional practices since the public release of high-stakes, state-mandated student performance assessment scores. The finding that teachers have increased the use of instructional practices that help develop students' higher-level thinking skills, and decreased the use of those that do not, supports the case for performance assessments (see Mehrens, 1991; Rothman, 1995; Wiggins, 1989a, 1989b; Worthen, 1993). This finding also supports the assertion made by Wiggins (1992) that performance assessments are "tests worth taking."
A chi-square analysis was performed between the demographic information and response frequencies. This analysis revealed many significant differences in responses among the various groups of respondents. The largest numbers of differences were found among teachers with different numbers of years of teaching experience and among teachers with different teaching assignments (English, mathematics, and science).
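
For illustration, the sketch below runs a chi-square test of independence on a made-up contingency table. The counts are hypothetical; the article reports only which comparisons reached significance, not the underlying tables or the alpha level used.

    from scipy.stats import chi2_contingency

    # Hypothetical counts: rows are experience bands, columns are the
    # reported change on one survey item (decrease / no change / increase).
    observed = [
        [ 5, 10, 35],   # e.g., 13-19 years of teaching
        [12, 18, 20],   # e.g., 28 or more years of teaching
    ]

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
    # A p value below the chosen alpha (commonly .05) would indicate a
    # statistically significant association between experience and response.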
An examination of Table 5 reveals statistically significant differences among teachers with different numbers of years of teaching experience for ten items. For eight of the ten items, teachers with 13-19 years of teaching experience reported the highest percentage of increase. Also, the most experienced teachers (28 years or more) reported the lowest percentage of increase for seven of the ten items.
A possible explanation for the finding that the most experienced teachers in this study reported the lowest percentage of increase for seven of the ten statistically significant items could be that these teachers regard the MCAS as just another educational fad that will soon fade away, like so many other reform efforts they have witnessed throughout their careers. Howe and Thames (1996) noted that "as years of teaching experience increases, teachers are less likely to use assessment related items such as open-ended questions, performance events, rubrics or grading criteria" (p. 26). Although the findings from this study could be interpreted as supporting this conclusion, another possible explanation is that the most experienced teachers were already using the types of instructional practices supported by the Massachusetts Education Reform Act, and that the MCAS therefore had little effect on their instructional practices.
Table 6 reveals numerous differences in the responses given by teachers with different teaching assignments (English, mathematics, and science). Open-response questions (90.2%), problem-solving activities (71%), calculators (69.6%), and inquiry/investigation (64.1%) were among the 15 statistically significant items for which mathematics teachers reported the highest percentage of increase. Science teachers had the highest percentage of increase for newspapers/magazines (39%) and primary source materials (32.5%). English teachers reported the highest percentage of increase for the use of rubrics or scoring guides (74.7%) and response journals (42.5%). The items with the highest percentage of increase for English, science, and mathematics teachers are all instructional practices that are not only advocated by the Massachusetts Department of Education but also tested and evaluated on the MCAS.
The findings of this study suggest that teachers have changed their instructional practices since the public release of high-stakes, state-mandated student performance assessment scores. Teachers have increased their use of the types of questions and activities that are both advocated by the Massachusetts Department of Education and tested and evaluated on the MCAS.