Opening Doors for Accountability
By Sharon F. Rallis and Margaret M. MacMullen
The authors have studied schools that are successfully using an ongoing inquiry cycle to improve student learning. They describe that process here -- a process that builds the capacity to improve as the school's knowledge base increases.
DEMANDS for greater productivity and increased accountability in America's public schools are loud and ubiquitous. Reform and restructuring efforts have addressed almost every aspect of schooling, and state and federal policy makers are increasingly setting standards and specifying criteria for assessment that schools must meet. In this atmosphere of heightened accountability, a series of questions driving school reform has become central: How can schools help all students meet high standards? Who sets those standards? How is student progress best assessed? Who should do the assessing -- the state, the district, the school? What is the relationship between external mandates and student achievement?
Recent accountability reforms have moved both the standards and the criteria for meeting them outside the school. But we have learned that what happens inside a school is key. Some schools, which we refer to as inquiry-minded schools, have already incorporated issues related to standards and assessments into their culture and are improving as a result.1 These school communities recognize that improving teaching and learning is an intentional and ongoing process. They ask themselves important questions and have the courage to act on their findings. Because they recognize that evaluation leads to action and that every action creates new questions, they have institutionalized the process of reflective inquiry. These schools have become internally responsible for the creation and maintenance of standards.
At the Annenberg Institute for School Reform at Brown University, we have been studying recent accountability movements and their influence on schools. Our aim is to help schools and their communities use data effectively and systematically and to help policy makers attend to the realities of school life as they craft accountability systems. This past year, we observed the work of 18 Annenberg Challenge schools in six states. These schools had been brought together to share their data-based practices. We also followed other schools participating in accountability projects. All these schools are attempting to reconcile the requirements of state- or district-level accountability systems with their own local needs.
In this article, we draw on the experiences of the most successful schools to offer a picture of what a school can do to take ownership of internal and external standards and to use data from state assessments and other sources to improve instruction. We define accountability, describe the mindset of schools that have institutionalized reflective inquiry, and explicate the inquiry cycle. We also consider the challenges that schools face in addressing standards and that policy makers face in developing accountability systems to support the delivery of high-quality instruction. We begin with three scenarios that describe inquiry-minded schools faced with external accountability requirements.
Monarch School
Students at the Monarch School appear to be achieving at high levels. During the current school year, 98% of the students who were tested passed the statewide tests in reading, 100% passed the tests in writing, and 97% passed the tests in mathematics. These figures exceed the statewide average by at least 12 percentage points in each area. Moreover, the school's passing rate exceeds that of other district schools by even larger margins. Gerry Macia, the principal at Monarch, is tempted to let the school rest on its laurels. Still, Gerry brings the summary report sheets to the next school council meeting for discussion.
Gerry: I'm genuinely excited to share our report card and scores with you this year. Notice how much they have come up -- even since last year! I see real improvement here. This achievement is remarkable because we are not exactly a "privileged" school. Remember, more than half of our kids are on free/reduced-price lunch; they come from diverse backgrounds; many are considered "at risk." And we're not a small school. We have over 800 students in five grades. It seems we have accomplished what all this reform talk aims for.
Sandra (third-grade teacher): Great! I'd like to find out how we did this. We've brought in so many programs in the past two years as part of the Monarch 2000 Initiative -- I wonder which ones are doing the job. And are any not worth the time?
Marty (fourth-grade teacher): It's good to hear that our kids did so well. But I'm still a bit skeptical. I just don't feel that all my students know the material that well. Maybe we ought to look a bit more closely at the score reports. I still have some questions.
Yvonne (fourth-grade teacher): I think we need to ask exactly what 98% passing means. What does "passing" mean?
As the members of the group examine the reports from the state testing bureau, they notice gaps between the high percentage of students who passed and the lower percentage of those who mastered all the objectives. For example, Marty sees that, while 96% of the fourth-graders passed in reading comprehension, only 62% mastered all the objectives. Yvonne notes that the gap is even larger -- 96% passing versus 40% mastery -- in math. They discover that the standard for passing actually means meeting minimum expectations.
In response to this discovery, they decide to focus their inquiry on the gap between passing and mastery. Soon, they are generating questions about student learning. What does it mean to meet minimum expectations? What does it mean to master an objective? Which objectives do students master? Which are they not mastering? Which students master them, and which do not? Does mastery tend to clump along lines of gender, race, or socioeconomic status?
These questions about learning generate questions about teaching. What are we actually doing in our classrooms? What programs do we use, and are we following them? Do our standards and objectives match those of the tests? If not, how do we reconcile the tests with our curriculum? Are we teaching for mastery or for minimum expectations? How are students responding to each program?
The school council first prioritizes the questions and identifies data that can help answer them. This activity leads them to propose additional data collection and analysis. For example, the council decides to analyze the state's report to understand better what the apparent gap means. Then they can focus their inquiry on a specific group of students or on a specific set of objectives. They also decide to seek data on which children participate in specific programs and to gather evidence on how children in these various programs perform on specific objectives.
Gerry concludes the meeting by reminding council members that their questions about last year's scores had led to their decision to implement a new literacy program and by suggesting that Monarch revisit that decision in light of the new data. "The scores do show that we have improved. But we can't stop here. I agree that we have to ask some hard questions about what look like perfect scores. I hear you saying that we want to aim for mastery, not just minimum expectations. It seems we aren't reaching every child, so we still have room to improve."
Valley Middle School
Several years ago, Valley Middle School adopted a curriculum and pedagogy that recognize and develop multiple intelligences. Since that decision, the state has written and adopted the New Standards for Achievement, and statewide tests are administered to determine how individual schools are meeting these standards. Valley Middle School is concerned because the state tests take into account only traditional conceptualizations of intelligence. The school's council meets to consider how Valley might handle the state's release of the annual "school report card," which is based on the school's scores on the statewide tests.
Terry (principal): This year's scores are going to be pretty important. It's our second year using our Many Pathways curriculum. We're asking our kids to do more in class than take traditional paper-and-pencil tests, so we may find that they won't do as well on the standardized tests. How should we feel? And there might be parents who agree with us on multiple intelligences but won't be happy if they find their children can't do what they think of as the basics.
Nina (a parent): I guess I'm one of those parents. Did adopting a "multiple intelligences approach" mean that our kids can't meet the state standards? I thought this way would make it easier for all kids to do better in all areas.
Jim (sixth-grade teacher): Eventually, I expect it will. I'm concerned because we're still new at this, and that might make the scores go down for a while. I believe in it, but I want more time before I am judged.
Terry: So the test scores may raise a lot of questions, some of them legitimate. Let's think ahead. What are we going to want to know?
Carol (eighth-grade teacher): If the scores show some areas where my students' achievement has decreased, I'll want to know why. I'll want to analyze those scores in light of what I am asking my kids to do.
Maria (fifth-grade teacher): You know, I am asking my students to do things they are not tested on -- never have been tested on. Speaking of assessment, I'd like to assess how well they do these things. But they aren't things you can measure on a paper-and-pencil test. I care that my kids are achieving; I just want to be sure that the tests get at what we are asking the students to do. We need new kinds of evidence.
Nina: We parents want that too. The curriculum is supposed to be building my child's interpersonal intelligence and his spatial intelligence. Do we know that this is happening? I really do want to know what you are doing and what difference it makes for my Carl. The school has been using Many Pathways for two years. Carl moves to the high school next year. Now is the time to drop the program if it's not helping Carl learn.
Carol: The big question for all of us is, What difference is the Many Pathways curriculum making for children's learning? It seems to me that we can use the state's standardized tests along with other kinds of evidence to answer that.
Based on this discussion, the council explores questions about teacher assignments and student work. It focuses its inquiry on a list of related questions. What do teachers assign? What do they expect students to do? What does good work look like? What work are students producing, and how does it match our expectations? What exactly do the standardized test items measure? To answer these questions, the council decides to collect samples of teacher assignments and student work, analyze them, and compare them with the information from the standardized tests.
Uncas School
The school improvement council at Uncas School, along with all such councils in the state, has been charged with amending its School Improvement Plan (SIP) in accordance with the strengths and weaknesses identified through statewide testing. Principal Pat Summers convenes the council in the fall and shares the news that nearly 80% of Uncas School students scored at the proficient or exemplary level in all areas tested. Pat is delighted with the improvement over last year and suggests that the council pinpoint what the school is doing right and support those activities in its revised SIP.
Dale (fourth-grade teacher): We had a miracle here! If I read these reports right, we've come from having only 22% of the students at a proficient level or better in math last year. And from only 9% in writing. That means we reached a lot more kids than we did before. I teach the grade that is tested, and if those scores are right I have only one or two children who can't do the work. Sounds good, but I'm not sure I buy it.
Maria (fourth-grade teacher): I don't think I did anything that different this year. I'd like to think that putting the SIP in place made me a better teacher, but I can't say how.
Leah (third-grade teacher): I can't even tell whether we went up in reading because last year's reading test used "high-medium-low" instead of "proficient" and "exemplary." Anyway, doesn't it say here that the assessment used a different test this year?
Alton (a parent): I think we need to look a bit more closely at all these scores before we make any final evaluation of our progress. Notice how last year, when our students scored quite poorly, the middle school kids did so well? Their proficiency and honors levels were all above 50%. Does this mean that our kids get smart after fourth grade? Or that teaching improves after fourth grade? The numbers alone do not show us anything.
Pat (principal): Only one thing is certain: we really cannot make our judgments based on this single assessment -- especially if we don't truly understand it. We can ask the state for further clarification, but we should also see what other data sources we have to evaluate our students' learning.
The council decides to focus its inquiry on student learning and to spend the rest of the meeting identifying assessments that are relevant to the specific needs and objectives of Uncas School. The next meeting's agenda is set: to design an evaluation plan that makes sense of the data to be gathered.
These three brief vignettes illustrate the power of strong internal accountability capacity to incorporate the demands of external accountability systems. The experience of these schools is supported by Fred Newmann, Bruce King, and Mark Rigdon's study of the accountability structures of 24 restructuring schools. They found that, even if high-quality standards (and incentives) were provided, many schools would be unable to meet them because they lacked the necessary professional knowledge and skills, appropriate curriculum, and adequate materials and facilities. In schools characterized by strong accountability systems, the authors reported that:
Staff identified clear standards for student performance, collected information to inform themselves about their levels of success, and exerted strong peer pressure . . . to meet the goals. In some schools, strong internal accountability was accompanied by compatible external accountability, but in others, internal accountability existed without, or even in opposition to, external accountability requirements.2
What's Missing from Accountability Systems
Unfortunately, most external accountability approaches have paid little attention to creating the internal capacities required to carry them out. Instead, they have fostered a tension between the public's legitimate need to know and a school's legitimate need to explore its own questions. The term accountability frequently makes teachers and principals uncomfortable because they see the questions asked and the data collected as originating outside their work. Accountability appears to be public and external rather than a central component of their own practice; they see themselves as "held" accountable, not as "being" accountable.
To be accountable means to be obligated to understand and explain one's actions. Accountability relies on feedback; it links performance with results. Thus accountability in schools is not only about results but also about every aspect of teachers' actions. What are we choosing to teach our students? How should we instruct them? How will we know when they know it? What will we do when not everyone learns? Put simply, practitioners who are accountable evaluate their own practice and then use the information to improve. From this perspective, accountability is the foundation of successful practice because it entails knowing what we do and learning from that knowledge.
Often, however, assumptions about the nature of schools and their internal capacities limit the effectiveness of current accountability systems. One such assumption is that high standards + assessment + incentives (or consequences) = higher student achievement. This equation overlooks two key components of effective accountability: capacity and shared accountability. It assumes that schools already have the capacity to improve and lack only the standards, assessment tools, and incentives to do so. However, policy makers must ask a key question: Once schools address standards and receive assessment data, what do they do? To answer it, policy makers need to understand schools and the conditions that lead to continuous improvement. They then need to develop policies that foster the leadership, collaboration, and skills that are essential to school improvement and to accountability. Building these capacities requires both time and money. Accountability systems can then accommodate and support specific practices and conditions as well as provide the structure of standards, assessment, and consequences.