2006 Superintendent’s Summer Institute:

“Strategies for Student Success”

The Oregon Department of Education

Session Summaries for Literacy Leaders

Video streamed presentations are available at


TABLE OF CONTENTS

Assessment

Formative

Examining Student Work to Inform Instruction
Lani Seikaly

Formative Assessment
Quality Quinn

Standards in Practice (SIP)
Paul Ruiz

Summative

Aligned Instruction
Andrew Porter

Closing the Achievement Gap
Andrew Porter

Coaching

An Introduction to Instructional Coaching
Jim Knight

Content Area Literacy

Academic Success for Struggling Readers and Writers: A Publishing Approach that Works
Peter Pappas

Content Literacy Continuum
Don Deshler

Enhancing Content Area Literacy and Learning
Elizabeth Moje

Growing Smarter as Readers with Lexiles
Rick Dills

Growing Stronger as Readers with Lexiles
Rick Dills

Readable Science Textbooks
William Schmidt

Reading in YOUR Content Area
Quality Quinn

Rigor, Relevance and Reading for Struggling to Average Readers
Peter Pappas

Science Education in the U.S. (Science Standards)
William Schmidt

Using Content Enhancement Routines
Don Deshler

Using Learning Strategies to Improve How Students Learn and Perform
Don Deshler

Leadership

Creating a Culture of Literacy
Patti Kinney

Leadership for Change
Jerry Colonna

Leadership: Teachers and Administrators
Nancy Golden

Ninth Grade Academy
Peter Pappas

Transforming School Counseling
Paul Ruiz


Lani Hall Seikaly

Founding Partner of Hillcrest and Main, Inc., with extensive experience in school improvement planning and technical assistance to low-performing schools; Project Director for the School Improvement in Maryland website; former middle school principal

Examining Student Work to Inform Instruction

  • Focusing on academic content standards, aligned assessments, and accountability measures results in improved achievement. In fact, the report Quality Counts 2006 found that factors such as per-pupil spending and student demographics had less impact on student achievement than a state’s history of raising expectations and standards. There is strong evidence that the standards-based approach to education works. Studies have shown that in some of our lowest-achieving schools there is a “patent mismatch between the real, taught curriculum and the actual standards that are assessed.”
  • Three concepts form the foundation for improved student achievement: (1) meaningful teamwork; (2) clear, measurable goals; and (3) the regular collection and analysis of performance data. Research shows a strong link between the use of formative data and improved student achievement. Good formative assessment has “effect sizes from .4 to .7 standard deviations, larger than most effects of intended instructional programs which are usually considered impressive with a .25 effect size.” (A short note on interpreting these effect sizes follows this list.)
  • In a three-year journey to get schools off the “needs improvement” list, the practice with the single greatest impact on student achievement was the regular examination of student work.
  • How do schools structure and monitor the ongoing collection of formative data in the classroom to determine the nature of student knowledge? An “examining student work protocol” was designed which asks grade level teachers to regularly examine student work as a team to (1) identify characteristics of proficiency (understand the target), (2) diagnose student strengths and needs, and (3) determine the next instructional step.
  • Each teacher on a team brings in three pieces of student work from a common assignment (one each from a high-, medium-, and low-level student). A team of four to six works best within a 45-minute period.
  • The most time is spent in understanding the target. The team must reach consensus about proficiency. “What were students asked to do? Which content standard indicator will be assessed? What is considered proficient performance on the assignment? Exactly what did students need to say or write to be considered proficient?”
  • Teachers must shift from scoring student performance (a summative exam with no diagnostic information) to diagnosing student performance (a formative exam with diagnostic information). A facilitator is needed to help the team define proficiency and diagnose performance.
  • Teachers can learn several things from examining student work. They learn about student strengths and needs. Instructional next steps are discovered when teachers obtain individual feedback on how well top, middle and low students did on the assignment. Teachers also learn about their team’s understanding of the content standards.
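
As a point of reference for the effect sizes quoted above (an interpretive note, not part of the session; it assumes the standard Cohen’s d definition and roughly normal score distributions), an effect size expresses the difference between two group means in standard deviation units:

\[ d = \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}} \]

Under a normal curve, an effect of d = 0.7 moves an average student from the 50th percentile to about the 76th, since \(\Phi(0.7) \approx 0.76\); that is why the .4 to .7 range reported for formative assessment counts as unusually large for an educational intervention.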

Several sources cited: Swanson, Schmoker, Black and Wiliam, Richardson, Stiggins

Quality Quinn

Author, international literacy consultant and educational aggregator whose company serves school districts, state departments of education, foundations, educational publishers and high-tech companies

Formative Assessment

  • There are 15 elements of effective adolescent literacy. Three of those elements are highlighted by the federal government as absolutely key: (1) ongoing summative assessment (external reporting of broad data), (2) ongoing formative assessment (internal reporting), and (3) professional development.
  • Previously, summative assessment was used to reference national norms. It is still used to track broad groups of students. Now we have summative state data, and we are getting more sophisticated in analyzing it, right down to the teacher level (e.g., who missed #17 and why; what strategies did special education teachers use to help their students do better on question #19?). These questions are important because summative assessment results can open (and take away) funding sources, and they are also very public.
  • Formative assessment informs instruction. The power of this type of assessment is in observing and then adjusting instruction to fit student needs. Differentiated instruction cannot be done without it. Formative assessment allows teachers to see what’s been taught, what hasn’t been taught—and perhaps more importantly—what needs more practice. Also, teachers don’t know if different instructional practices work until they do a formative assessment.
  • Reading fluency assessment (speed, accuracy and expression) is a formative assessment that is particularly important. Based on students’ reading fluency in the 3rd grade, we can tell how well they will do in state testing in the 8th grade. Eighth grade fluency is a predictor of whether students go to college. Research is now showing that fluency is critical to math. Oral reading is a way to quickly assess fluency.
  • Remember that good assessment looks like good instruction. Content area teachers need to support students who are reading and writing below grade level. For example, teachers can pay attention to writing by holding students accountable for an identified element that changes every week or so. Ideally, the teacher down the hall is doing some of the same things; if so, students can make two years’ growth in one year of instruction.
  • A third part of assessment is getting a grade. Grades do nothing to change the way teachers teach; they are simply a summary of behaviors. That is why formative assessment is the critical piece. A quiz could be a formative assessment if it did something more than just fill a square in a grade book.
  • Think about how technology can be used in formative assessment (e.g., computer-assisted testing, use of a CD as a reading coach, Lexile scores). Teachers should know the Lexile score of every student in their class to tell whether students can read the textbook.
  • There is a need for staff development when planning for formative assessment as well as for collaboration in lesson design and operating strategies. In grade level meetings, ELL and Special Education teachers can collaborate on strategies.

Several sources cited: Hunter, Gardner, Farr

Paul Ruiz

Principal Partner and Co-Founder of the Education Trust, Inc.

Standards in Practice (SIP)

  • What is Standards in Practice (SIP)? It is a professional development strategy that aligns classroom assignments with standards and then increases the rigor of those assignments so student achievement rises to meet the standards. It also provides a forum for developing the instructional strategies needed to teach rigorous academic work.
  • Teacher assignments are analyzed through student work. Student work is not the principal focus, however. The focus is on how to make assignments clearer, more purposefully aligned to state standards, and more challenging; students can do only as well as the assignments and instructional strategies push them.
  • The SIP strategy is based on work coming out of the classroom every day. It is not another add-on program, nor does it depend on expensive materials or consultants. A two-day trainer-of-trainers workshop is all that is needed to begin the program in a district.
  • Six steps to the process were explained in detail. A team examines assignments and (1) asks academic questions about content, context and purpose, (2) asks what a student must know to be able to accomplish the task, (3) identifies appropriate standards, (4) generates a rough diagnostic rubric to build a consensus of quality, (5) diagnoses student work using the rubric and (6) analyzes the student work to plan instructional strategies for improving student performance.
  • Because the team is asked to work with their own assignments, the biggest problem with the process is the risk of exposing those assignments to a group. To make the process easier, start by working with assignments from outside the school. Also, because we are often too gentle with colleagues, an outside voice from someone who is familiar with the process can be a big help.
  • Organizational features of the program are: (1) all activities happen at the school site using classroom work, (2) teams can be grade level, vertical, interdisciplinary, or subject matter; they can include not only teachers but administrators, higher education faculty, and parents, (3) meetings are once a week, if possible (otherwise twice a month) and ideally they should last 90 minutes, and (4) the meeting agenda is the six step model described above.

Source cited: The Education Trust

Andrew Porter

Patricia and Rodes Hart Professor of Educational Leadership and Policy at Vanderbilt University, also Director of the Learning Sciences Institute; as former Director of the Wisconsin Center for Education Research (WCER), conducted research leading to the design and content of the Surveys of Enacted Curriculum (SEC)

Aligned Instruction

  • Why do teachers select certain content to teach? Several studies showed the biggest factor was what they taught the year before. There is usually some “tweaking” and attention to Benchmarks, but with the NCLB law, more attention to alignment is critical.
  • Alignment to what? NCLB starts with challenging content standards. Alignment of content with standards is part of the law. Tests must be aligned as well as professional development and ultimately instruction. Alignment can also be done to Benchmarks, curriculum, text, and student needs.
  • Alignment starts with content. Teacher surveys of instruction (such as the Surveys of Enacted Curriculum) are tools used to determine the content taught and to describe instructional practices. Surveys can be simple daily logs or end-of-semester or end-of-year surveys. A Content Matrix can be used to survey not only content (topics listed down the side) but cognitive demand (categories across the top). When a similar chart is done for state tests and standards, alignment can be determined among the standards, what is taught, and what is tested.
  • If everything is aligned, the patterns of the different matrices should be identical. NCLB says instructional alignment should be “1” (perfect alignment). This is rarely the case; alignment should at least be close, but it usually is not. Example: studies of alignment show the average within-state alignment is .22, the average between-state alignment is .23, and the average state alignment with NAEP is .39. (The index behind such figures is sketched after this list.)
  • Topographical maps can be used for comparisons. They indicate topics, cognitive demand and content emphasis. The main value of these maps is in the ability to see quickly what is being taught (e.g., “mountains” indicate that a high quantity is being taught; at “sea level,” nothing is being taught).
  • Alignment tools have many uses. For example, a tool that describes instructional practices can be used for research (e.g., as a dependent variable in teacher decision-making or a description of the implemented curriculum) and for practice (e.g., teacher reflections on their own instruction). When the Surveys of Enacted Curriculum are used, specific targets should be established for the use of the information obtained (e.g., alignment to standards; building tests).
  • It is easier to correlate content taught with student gains (the correlation is .5, which is impressive) than to correlate teaching methods with student gains. A better method of measuring the latter is needed.
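
The session summary does not spell out how the alignment figures above are computed; the sketch below uses the alignment index from Porter’s published work on the Surveys of Enacted Curriculum, so treat the notation as an assumption rather than a quotation. Given two content matrices expressed as proportions (say, standards versus instruction), the index is

\[ AI = 1 - \frac{\sum_i \lvert x_i - y_i \rvert}{2} \]

where \(x_i\) and \(y_i\) are the proportions of content in cell \(i\) of each matrix. Identical distributions yield AI = 1 (the “perfect alignment” NCLB implies); non-overlapping distributions yield AI = 0. With illustrative, made-up proportions of (.5, .3, .2) for the standards matrix and (.2, .5, .3) for the instruction matrix, the absolute differences sum to .6, giving AI = 1 − .6/2 = .7.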

Sources cited: NAEP, SCASS

Andrew Porter

Patricia and Rodes Hart Professor of Educational Leadership and Policy at Vanderbilt University, also Director of the Learning Sciences Institute; as former Director of the Wisconsin Center for Education Research (WCER), conducted research leading to the design and content of the Surveys of Enacted Curriculum (SEC)

Closing the Achievement Gap

  • What is the achievement gap? It is a gap in achievement between different groups of students (white versus black, Latino, and special education students; rich versus poor; boys versus girls). The achievement gap is generally one standard deviation. If a student at the 50th percentile improved by one standard deviation, that student would then be at the 84th percentile. (See the arithmetic note after this list.)
  • How is the gap determined? It is mainly determined by test scores. Scores must be compared over time to determine patterns.
  • The best data on the achievement gap comes from the National Assessment of Educational Progress (NAEP). Starting in the 1970s, the achievement gap narrowed each year until the mid-80s; it has remained constant ever since. One reason (at least at the high school level) may be that students must agree to take the NAEP test. If many low-achieving students opt out, the overall average goes up, and the few low-achieving students who do take it produce a gap that appears constant.
  • The gap is not caused by schools. Studies have shown that students come to school with a gap, and the gap widens when students are out of school during the summer. Those from low-income homes lose the most over the summer; they also come to school the least prepared. Studies have shown that this gap can never be totally erased.
  • What causes the achievement gap? Is it a function of validity? This is a possibility, because some tests are better than others. Every item on an achievement test is a sample. In order to be valid, the sample items on the test must be representative of what you want to test.
  • Although bias in testing has been blamed for some of the gap, studies have shown that there is no evidence to support that theory. The gap across ethnicity is largely caused by poverty.
  • What can we do in schools about the achievement gap? Pre-schools can help the situation, but only when they have good teachers and instruction.
  • Teacher reforms could help because good teachers are a critical factor. One high school study showed that students who went through the lower grades with “good” teachers were 1.7 standard deviations ahead of students who had “poor” teachers.
  • “Good” teachers, however, do not ensure a smaller gap. Studies have shown that good instruction will raise achievement for all students, but it won’t necessarily close the gap; it could even widen it. While low-achieving students do much better with good instruction, they don’t completely catch up to students with a higher aptitude.
  • Standards-based reforms are supposed to close the achievement gap, but it’s too early to say what the effect will be.
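
The percentile figures above are straightforward normal-curve arithmetic (assuming test scores are roughly normally distributed, which is how such gaps are usually reported): \(\Phi(1.0) \approx 0.84\) and \(\Phi(1.7) \approx 0.96\), so a one-standard-deviation gain moves a median student to about the 84th percentile, and the 1.7-standard-deviation advantage cited for students with a run of good teachers corresponds to roughly the 96th percentile of the original distribution.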

Source cited: NAEP

Jim Knight

Educational Researcher at the University of Kansas Center for Research on Learning (CRL); developer of the CRL Instructional Coaching Model

An Introduction to Instructional Coaching

  • Why do we need instructional coaching? There is political pressure to improve instruction and meet AYP; there is moral pressure to do what we can to help students be successful; and there is social pressure to improve the way we interact with each other.
  • What is an instructional coach? “Instructional coaches are on-site professional developers who partner with educators to identify professional teaching practices that are research based and assist with change.”
  • Change is difficult. The culture of the school shapes the behavior of the staff. Often decision makers are reluctant to change and many teachers would rather function independently.
  • What are the levels of change? The first level is pre-contemplation (ignores data; sees no need). Next is contemplation (starts to see the need and weighs options). Then come preparation for change, implementing the change, and maintaining the change. Teachers need the most support in the last level (maintenance).
  • What does it take for change to stick? The research in Moving Stuck Schools shows a circular effect. The use of data and shared goals leads to collaboration which then leads to teacher competence. Activating the change and teacher competence leads to student achievement. Student achievement leads to teacher motivation and revisiting goals.
  • Getting teachers involved (enrolling) is the first big step in instructional coaching. The best way is to make it a one-on-one process (interview teachers individually and show them you are there to help).
  • The next step is to identify what you can do (the big four are behavior, content knowledge, instruction, and formative assessment). Coaches must have proven practices and a deep, thorough understanding of what has to happen.
  • Modeling is critical. Coaches write out key practices to learn. Then teachers watch the coach (cue, “you watch me”). The coach observes the teacher (do, “I watch you”). Then the two explore what happened together (review, “we do it”). Coaches are not supervisors or evaluators with the one and only answer. They work with teachers in a partnership so sharing and support are important.
  • Reflecting is the last step. It includes feedback that is direct, specific, and non-attributive. Coaches simply tell what they saw. They don’t tell teachers what they are (e.g., patient, cheerful).
  • Effective change is paradoxical. It includes top-down and bottom-up (the partnership approach); easy yet powerful; self-organizing yet tightly managed (let it evolve but manage it); and gaining commitment by not demanding commitment.

Several sources cited: KU Center for Research on Learning, Lahey, Fullan, Rosenholtz, Csikszentmihalyi