Lake Standards Assessments (LSAs) - Frequently Asked Questions

What are interim assessments?

Interim assessments are “standardized, periodic assessments of students” that focus on “providing information about knowledge, skills, and processes students have developed within a period of time” and are “more flexibly administered than their year-end assessment counterparts” (McMillan, 2013, p. 58).

Why administer interim assessments? Purpose?

District-wide interim assessments are essential to the instructional process and have traditionally been used for three specific educational purposes: instruction, evaluation, and prediction (Perie, Marion, & Gong, 2007). The scores from Lake County's district assessments are not factored into student course grades; they serve the singular purpose of providing teachers with evidence of student understanding (Riggan & Olah, 2011) in order to increase student achievement. Specifically, they help teachers prioritize instructional time; target struggling and high-performing students to individualize instruction; identify individual students’ strengths and weaknesses in order to provide acceleration, intervention, and remediation; refine instructional strategies; examine district- and school-wide data to identify patterns of learning in order to inform additional support; and communicate student progress to students and families (Hamilton, Halverson, Jackson, Mandinach, Supovitz, & Wayman, 2009).

Interim assessments are an invaluable diagnostic tool that provide teachers with a “profile of student(s) strengths and weaknesses relative to the curriculum,” allow districts and schools to evaluate the “effectiveness of particular educational policies or instructional programs,” and “predict performance on year-end assessments” in advance so teachers can provide students with the necessary acceleration, intervention, or remediation (McMillan, 2013, p. 58). While the results of assessments may be used for multiple purposes (Hattie, 2003), the singular purpose of the Lake Standards Assessments (LSAs) is instructional: to improve teaching and learning.

Are we testing our students too much? What about actual instruction and learning?

The administration of district-level interim assessments is aligned with best educational practices. The goal of common interim assessments is to create a community of practice that increases student achievement and instructional effectiveness. Research indicates that schools with the greatest gains in achievement use common assessments (Reeves, 2004).

The administration of interim assessments does not interrupt instruction or learning; it enhances it. Teachers should be using interim assessments to determine which content areas should be retaught (Goertz, Oláh, & Riggan, 2010). The results of interim assessments provide teachers with evidence of student understanding (Riggan & Olah, 2011) to inform ongoing instructional practices.

The Lake Standards Assessments (LSAs) are part of Lake County’s balanced assessment system and fulfill the requirements of Florida’s Student Assessment Program for Public Schools under Section 1008.22, F.S.

Why give a baseline assessment? Students haven’t been taught the information, yet!

Correct. The baseline assessment does not assess what has “been taught,” since nothing has been taught yet, and it should not be used for evaluative purposes. Instead, the baseline assessment informs the instructional decisions that teachers make to increase student achievement, from prioritizing instructional time and individualizing instruction to identifying students’ strengths and weaknesses and communicating progress to students and families (Hamilton, Halverson, Jackson, Mandinach, Supovitz, & Wayman, 2009).

What is a midyear interim assessment?

The midyear test is given midway through the course to show students’ progress towards mastery of the course standards. At this point, teachers and students can begin to assess student mastery of state standards through analysis of baseline and midyear data.

Why do the LSAs have so many questions?

The LSAs are aligned to FLDOE Test Design and Blueprints. The administration window was modified to 90 minutes in order to support school-site instructional schedules, so the LSAs are slightly shorter than the corresponding state assessments.

Where did the questions on the LSAs come from?

The items for the LSAs were developed by certified subject-area teachers working directly under Florida’s Interim Assessment Item Bank and Test Platform (IBTP) project. The test forms were developed by Lake County School District Program Specialists.

Is some type of item analysis or psychometric study done on the LSAs after testing?

Yes. Once testing is completed for 90% of the students, the county psychometrician examines the results of each test. The analysis includes item-level statistics such as item difficulty (p-value) and item discrimination. In addition, feedback from teachers (our subject matter experts) and Program Specialists ensures that the review of the assessments is aligned with best practices in psychometric research (Haladyna & Rodriguez, 2013).
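For illustration only (this is not the district's actual analysis code, and the response data below are made up), the two item statistics named above can be computed from a matrix of scored responses. Difficulty is the proportion of students answering an item correctly; discrimination is sketched here as the corrected point-biserial correlation between an item score and the total score on the remaining items:

```python
import numpy as np

# Hypothetical scored responses: rows = students, columns = items (1 = correct)
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 1],
])

# Item difficulty (p-value): proportion of students answering each item correctly
p_values = responses.mean(axis=0)

# Item discrimination: corrected point-biserial correlation between each item
# score and the total score on the remaining items (item excluded to avoid
# inflating the correlation with itself)
def corrected_point_biserial(resp):
    total = resp.sum(axis=1)
    return np.array([
        np.corrcoef(resp[:, j], total - resp[:, j])[0, 1]
        for j in range(resp.shape[1])
    ])

discrimination = corrected_point_biserial(responses)
print("p-values:", p_values)
print("discrimination:", discrimination)
```

Items with very low p-values or near-zero (or negative) discrimination are the ones typically flagged for the kind of expert review described above.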

We give a baseline and midyear, why not an end of year?

The baseline and midyear are part of Lake County’s balanced assessment system, in which “teachers use classroom assessment, interim assessment, and year-end assessments to monitor and enhance student learning in relation to the state standards” (McMillan, 2013). FLDOE’s end-of-course (EOC) assessments and Florida Standards Assessments (FSAs) serve as the end-of-year assessments.

Are the LSAs required to be given, and if so, by whom?

The administration of interim assessments is aligned with best educational practices and educational research. In addition, FLDOE requires that schools in Differentiated Accountability progress monitor students twice throughout the year and complete a baseline and midyear report. Our district believes in a balanced assessment system for all of our students that includes classroom assessments, interim assessments, and summative assessments.

If I have a question about or an issue with a test question, what should I do?

We request and encourage teacher participation. Teachers are our subject matter experts. Any questions, concerns, or recommendations may be submitted on the electronic Feedback Form located on Lake County’s Planning, Program Evaluation, and Accountability website under Lake Standards Assessments (LSAs).

What are we as teachers expected to do with the results of the LSAs?

The main purpose of administering interim assessments is to help drive instruction at the classroom level by providing teachers a diagnostic point of reference as to how their students are progressing toward mastery of the required state standards. LSA data, paired with ongoing teacher data (grades, mini-assessments, observations, performance data, etc.), can be a powerful tool in a teacher’s instructional decision-making process.

Why are we using Content Focus and Reporting Category information to inform instructional decisions when the FLDOE recommends that “Content Focus Reports should not be used to make decisions about instruction at the individual student level” and that “content focus data should not be used as sole indicators to determine remedial needs of students”?

We use Content Focus and Reporting Category information because doing so is a best practice in standards-driven instruction and is aligned with a systematic process for using data to inform instructional decisions and address students’ learning needs.

To fully appreciate and understand FLDOE’s rationale for the recommendation against using Content Focus and Reporting Category data from state assessments as the sole indicator of student performance, Hamilton, Halverson, Jackson, Mandinach, Supovitz, & Wayman (2009) provide the following discussion on the use of multiple sources of data to inform instructional practice:

“To gain a robust understanding of students’ learning needs, teachers need to collect data from a variety of sources. Such sources include but are not limited to annual state assessments, district and school assessments, curriculum-based assessments, chapter tests, and classroom projects. In most cases, teachers and their schools already are gathering these kinds of data, so carrying out data collection depends on considering the strengths, limitations, and timing of each data type and on preparing data in a format that can reveal patterns in student achievement. Moreover, by focusing on specific questions about student achievement, educators can prioritize which types of data to gather to inform their instructional decisions (Bigger, 2006; Cromey & Hanson, 2000; Herman & Gribbons, 2001; Huffman & Kalnin, 2003; Lachat & Smith, 2005; Supovitz, 2006).

Each assessment type has advantages and limitations (e.g., high-stakes accountability tests may be subject to score inflation and may lead to perverse incentives). Therefore, the panel believes that multiple data sources are important because no single assessment provides all the information teachers need to make informed instructional decisions.

For instance, as teachers begin the data-use process for the first time or begin a new school year, the accessibility and high-stakes importance of students’ statewide, annual assessment results provide a rationale for looking closely at these data. Moreover, these annual assessment data can be useful for understanding broad areas of relative strengths and weaknesses among students, for identifying students or groups of students who may need particular support (Halverson, Prichett, & Watson, 2007; Herman & Gribbons, 2001; Lachat & Smith, 2005; Supovitz & Klein, 2003; Wayman & Stringfield, 2006), and for setting schoolwide, classroom, grade-level, or department-level goals for students’ annual performance.

However, teachers also should recognize that significant time may have passed between the administration of these annual assessments and the beginning of the school year, and students’ knowledge and skills may have changed during that time. It is important to gather additional information at the beginning of the year to supplement statewide test results” (p. 11).

Hence, FLDOE’s recommendation against using FLDOE assessment Content Focus and Reporting Category data as the sole indicator of student performance is simply a recommendation aligned with best practices in standards-driven instruction and data use. In order to assess student mastery of state standards, teachers must use multiple data points. FLDOE is being transparent in noting the limitations of the Content Focus and Reporting Category reports in order to ensure that schools and teachers use multiple data points to inform instructional practice.

Why use both FAIR and district-developed English Language Arts (ELA) progress monitoring assessments?

The FAIR and the FLDOE ELA Florida Standards Assessments do not measure the same construct. The district-developed progress monitoring assessments (LSAs) are aligned to the FLDOE ELA Florida Standards Assessments.

See legislation and explanation below.

The 2016 Florida Statutes

1003.4282. Requirements for a standard high school diploma.—

(3) STANDARD HIGH SCHOOL DIPLOMA; COURSE AND ASSESSMENT REQUIREMENTS.—

(a) Four credits in English Language Arts (ELA).—The four credits must be in ELA I, II, III, and IV. A student must pass the statewide, standardized grade 10 Reading assessment or, when implemented, the grade 10 ELA assessment, or earn a concordant score, in order to earn a standard high school diploma.

The FLDOE went from a Reading assessment (FCAT Reading) to an English Language Arts (ELA) assessment (FSA) during the 2014-15 school year. The English Language Arts FSA is not a Reading assessment; the construct measured is different. The FCAT Reading measured reading achievement, while the ELA FSA measures English Language Arts standards and curriculum. In order to understand the difference between the two, it might be helpful to review the FLDOE Course Descriptions and standards for the 11th grade Reading Course # 1008330 and the 11th grade English 3 Course # 1001370.

As the 11th grade Reading course description illustrates, the purpose of that course is to increase reading fluency and endurance through integrated experiences in the language arts. The English 3 course description notes that “the purpose of this course is to provide grade 11 students, using texts of high complexity, integrated language arts study in reading, writing, speaking, listening, and language for college and career preparation and readiness.” A cursory review of the standards in each course description reveals that the English course (English 3) covers 15 standards not included in the Reading (Reading 3) course, aligned to writing, speaking, listening, and language. This comparison illustrates the difference in focus between the past FCAT Reading assessment and the current English Language Arts (ELA) Florida Standards Assessment.

It is important for ELA teachers to use the results from the FAIR assessment to select passages and texts that support their students’ individualized instruction and scaffolding, because reading proficiency influences student mastery of ELA standards and curriculum. However, science, math, social studies, and history teachers should also use FAIR scores to inform instructional practice, since many questions on the state assessments in those disciplines require students to read passages in order to answer questions involving comparison and contrast, inference, analysis, and synthesis of content-level knowledge and skills.

The district-developed ELA LSAs utilize grade-level passages and questions intended to determine whether a student understands the ELA standards assessed on the summative state assessment. If a student does not perform well on the FSA, the teacher cannot determine whether the student failed to reach proficiency in the standard or simply had difficulty with the grade-level passages.

The Florida Assessments for Instruction in Reading (FAIR) is a computer-adaptive test that obtains a precise representation of a student’s reading ability. By identifying a student’s precise reading level, teachers can formulate an instructional plan that includes the necessary interventions and remediation.

Through the use of FSA and FAIR data, teachers can provide students with a more informed and individual instructional plan.

References

Bigger, S. L. (2006). Data-driven decision-making within a professional learning community: Assessing the predictive qualities of curriculum-based measurements to a high-stakes, state test of reading achievement at the elementary level. Unpublished doctoral dissertation, University of Pennsylvania, Philadelphia, PA.

Cromey, A., & Hanson, M. (2000). An exploratory analysis of school-based student assessment systems. Oak Brook, IL: North Central Regional Educational Laboratory (NCREL).

Goertz, M. E., Oláh, L. N., & Riggan, M. (2010). From testing to teaching: The use of interim assessments in classroom instruction. Philadelphia, PA: Consortium for Policy Research in Education.

Haladyna, T. M., & Rodriguez, M. (2013). Developing and validating test items. New York, NY: Routledge.

Halverson, R., Prichett, R. B., & Watson, J. G. (2007). Formative feedback systems and the new instructional leadership. Madison, WI: University of Wisconsin.

Hamilton, L., Halverson, R., Jackson, S., Mandinach, E., Supovitz, J., & Wayman, J. (2009). Using student achievement data to support instructional decision making (NCEE 2009-4067). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.

Hattie, J. (2003). Formative and summative interpretations of assessment information.

Herman, J., & Gribbons, B. (2001). Lessons learned in using data to support school inquiry and continuous improvement: Final report to the Stuart Foundation. Los Angeles, CA: University of California, Center for the Study of Evaluation (CSE).

Huffman, D., & Kalnin, J. (2003). Collaborative inquiry to make data-based decisions in schools. Teaching and Teacher Education, 19(6), 569–580.

Lachat, M. A., & Smith, S. (2005). Practices that support data use in urban high schools. Journal of Education for Students Placed At Risk, 10(3), 333–349.

McMillan, J. H. (2013). Research on classroom assessment. Los Angeles, CA: SAGE Publications, Inc.

Perie, M., Marion, S., Gong, B., & Wurtzel, J. (2007). The role of interim assessments in a comprehensive assessment system.

Riggan, M., & Olah, L. N. (2011). Locating interim assessments within teachers’ assessment practice. Educational Assessment, 16, 1–14.

Supovitz, J. A. (2006). The case for district-based reform: Leading, building, and sustaining school improvement. Cambridge, MA: Harvard Education Press.

Supovitz, J. A., & Klein, V. (2003). Mapping a course for improved student learning: How innovative schools systematically use student performance data to guide improvement. Philadelphia, PA: University of Pennsylvania, Consortium for Policy Research in Education.

Wayman, J. C., & Stringfield, S. (2006). Technology-supported involvement of entire faculties in examination of student data for instructional improvement. American Journal of Education, 112(4), 549–571.