
Florida Department of Education

Media Room

PRESS RELEASE

May 23, 2007

Tom Butler
(850) 245-0413

Statement By:

EDUCATION COMMISSIONER JEANINE BLOMBERG

Regarding Variations in Third Grade FCAT Reading Results

“The Florida Comprehensive Assessment Test (FCAT) is a valuable tool that enables educators to see how well students are performing and mastering basic skills of reading, writing, math and science. It also helps us to identify struggling students and the areas in which they may need additional instruction or remediation.

Since 2001, we have seen steady annual gains of between 1 and 3 percentage points in the percentage of third graders reading on grade level. These gains reflect the continuous improvement of Florida’s students. In 2006, the percentage of third graders reading on grade level jumped 8 percentage points. This year, however, third-grade reading performance fell 6 percentage points from the previous year. Like many of the state’s education leaders, I was troubled by this drop. I immediately directed Department of Education staff to analyze the third-grade FCAT reading data from 2005, 2006 and 2007 to better explain these variations.

Staff reviewed a number of factors, including student demographics, third-grade retentions, student mobility, prior test performance of students participating in Reading First programs, teacher mobility, teacher experience levels and changes in policies. In addition, we worked with our testing vendors and outside experts to examine the test itself, how well students performed on questions common to multiple years, and the scoring process. We rechecked and verified the scoring of both the 2006 and 2007 third-grade reading test administrations. We found that last year’s third graders performed better on test questions that also appeared on other test administrations. However, we do not believe this alone accounted for the large increase last year followed by a decline this year.

As we continued to investigate, it became clear that the anomaly in third-grade reading results lies not with this year’s results, which fall in line with the trend of moderate and steady year-to-year increases, but with last year’s third-grade reading results. Our analyses have led us to focus on a set of test questions used to ‘equate’ the scores of the test from one year to the next. These are known as anchor questions, and they ensure that FCAT scores mean the same thing from one year to the next. This helps us confirm that students are performing about where they should be and that the test’s difficulty level is the same as in previous years. We believe the set of anchor questions used in the 2006 third-grade reading test did not perform as intended, nor as well as the anchor items used in 2005 and 2007.

We have found two main factors that led us to conclude that last year’s set of anchor items was not as sound as the sets of anchor items used this year and in previous years. Part of our process is to compare the percentage of anchor questions students answer correctly with the percentage answered correctly on the test as a whole. These percentages should fall within an acceptable range, and the more closely they are aligned, the better. In 2006, these percentages were not as closely aligned as they should have been.
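The alignment check described above can be sketched in a few lines of Python. This is purely illustrative: the response data and the tolerance are hypothetical, and the statement does not disclose the actual FCAT acceptance criteria.

```python
# Illustrative sketch of the anchor-item alignment check described above.
# The responses and the 5-point tolerance are hypothetical, not actual
# FCAT data or criteria.

def percent_correct(responses):
    """Fraction of item responses answered correctly (1 = correct, 0 = incorrect)."""
    return sum(responses) / len(responses)

def anchor_alignment(anchor_responses, full_test_responses, tolerance=0.05):
    """Compare percent correct on the anchor items to percent correct on
    the full test; return the gap and whether it falls within tolerance."""
    gap = abs(percent_correct(anchor_responses) - percent_correct(full_test_responses))
    return gap, gap <= tolerance

# Hypothetical data: students answer the anchor items correctly at a higher
# rate than the test as a whole, as described for the 2006 administration.
anchor = [1, 1, 1, 1, 0, 1, 1, 1, 0, 1]     # 80% correct
full_test = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]  # 60% correct

gap, aligned = anchor_alignment(anchor, full_test)
print(f"gap={gap:.2f}, within range={aligned}")  # gap=0.20, within range=False
```

A gap this large flags the anchor set as behaving unlike the rest of the test, which is the first of the two factors the statement identifies.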

Student performance on any test question also depends on where the question appears on the test. That is to say, students often perform better on test questions that appear at the beginning of a test than on those that appear near the end. We take this into account when building our tests. Generally, anchor questions should appear in the same place or position on a test as in their previous use. In 2006, a reading passage and its associated questions were placed in a different position than they had occupied in prior tests.

We believe the combination of the change in the placement of anchor questions, along with the weaker-than-expected alignment between the percentages of students correctly answering anchor questions and test questions, resulted in an overstatement of last year’s third-grade FCAT reading results. Because the set of anchor questions serves as the baseline of student performance, an apparently higher-performing set of anchor questions further amplified above-average student performance. We have already reviewed the 2007 exam for these two issues and are confident they did not occur in this year’s test administration.
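The overstatement mechanism can be illustrated with a deliberately simplified model. In common-item equating, a change in anchor-item performance relative to a baseline is read as a real change in the cohort; if the anchors score higher only because they were moved to an easier position, that spurious gain is carried into the reported result. The formula and all numbers below are hypothetical illustrations, not the actual FCAT equating procedure.

```python
# Simplified illustration (not the actual FCAT equating model) of how an
# anomalous anchor set overstates results: the gain on anchor items over
# the baseline is treated as genuine improvement and carried into the
# reported (equated) result.

def equated_pct(raw_pct, anchor_pct, anchor_baseline):
    """Carry the apparent gain on anchor items over the baseline into the
    reported result (crude stand-in for a full equating procedure)."""
    apparent_gain = anchor_pct - anchor_baseline
    return raw_pct + apparent_gain

anchor_baseline = 0.65  # hypothetical prior-year anchor performance

# Anchors in their usual position: performance matches the baseline, so the
# reported result equals the raw result.
normal = equated_pct(0.60, 0.65, anchor_baseline)

# Anchors moved to an easier (earlier) position: students score higher on
# them for reasons unrelated to ability, and the spurious gain inflates the
# reported result -- an overstatement.
inflated = equated_pct(0.60, 0.72, anchor_baseline)

print(round(normal, 2), round(inflated, 2))  # prints 0.6 0.67
```

Under this toy model, the same raw performance is reported seven points higher simply because the anchor set behaved as if it were easier, which mirrors the inflation the statement attributes to the 2006 results.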

The Department of Education has adjusted its practices to ensure that we consistently apply, from year to year, the same rigorous processes and statistical standards for selecting, positioning and comparing anchor items. Strict adherence to these procedures has served us well in the past and will do so in the future. To make certain that no students, teachers, schools or school districts are disadvantaged, we will go back and re-equate and rescale the 2006 third-grade FCAT reading exam against a new set of anchor items. This will result in a new set of reading scores for last year’s third-grade class, which we will issue as soon as they are available. No individual student will be negatively affected by the recalculation of last year’s third-grade reading scores. We will also use these new results when calculating learning gains for this year’s school grades and federal Adequate Yearly Progress. We do not plan to reissue school grades for last year.

We are committed to openness and transparency in the FCAT. Next week, we will convene an external advisory group to assist the Department in identifying an independent, external group of testing experts to review the 2006 data and recommend a procedure for establishing an annual review of the test. From now on, we will conduct an independent review of FCAT results each year.

The FCAT plays an important role in our public education system and because of this we must and will do all that we can to preserve the integrity and accuracy of the test.”