
Florida Department of Education


Frequently Asked Questions


Just Read, Florida! Office


2009 Florida Assessments for Instruction in Reading - Psychometric Properties


1. Is the K-2 system set up to predict grade 3 FCAT performance?
   

No. The predictive validity is based on logistic regressions relating the Broad Screen to end-of-year norm-referenced tests (SESAT Word Reading in Kindergarten and SAT 10 Reading Comprehension in grades 1 and 2). The relationship between grade-level SAT 10 Reading Comprehension performance and scoring Level 3 on the FCAT is very high (.75).
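The logistic regressions described above map a screen score to a probability of meeting the end-of-year benchmark. The sketch below shows the form of such a model; the intercept and slope here are invented for illustration, not the coefficients actually estimated from Florida student data.

```python
import math

def success_probability(screen_score, intercept=-8.0, slope=0.02):
    """Logistic model: P(meeting the end-of-year benchmark | screen score).

    The coefficients are hypothetical placeholders; operationally they
    would come from regressions of outcome data on Broad Screen scores.
    """
    logit = intercept + slope * screen_score
    return 1.0 / (1.0 + math.exp(-logit))

# A higher screen score yields a higher predicted probability of success:
# success_probability(300) ≈ 0.12, success_probability(500) ≈ 0.88
```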




2. Why was the 40th percentile on the SAT 10 established as the cut score for determining students’ probability of reading success?
   

The 40th percentile was used as the grade-level cut point for SAT 10 because that is the cut point used nationally for federal reporting (e.g., Reading First, U.S. DOE evaluations).




3. What does the third and final assessment of the year predict to?
   

For K-2, the third and final assessment predicts current-year performance on the SAT 10. For 3-12, the last assessment predicts the current year’s FCAT. The grades 3-10 Reading Comprehension screen in the last assessment period can only predict the current year’s FCAT because predictions of next year’s performance require statistical analyses of current-year FCAT data, which cannot be completed until June of the current school year. However, FCAT performance is highly related from year to year. Therefore, predictions in the last assessment period are most relevant to summer and fall placement decisions.




4. How is the new 3-12 Assessment predictive of FCAT?
   

FCAT simply asks about grade-level proficiency and, therefore, reports only on performance on grade-level passages. However, a more precise estimate of the underlying latent variable – reading comprehension – can be obtained by making the test adaptive, which improves the correlation between the ability estimate and FCAT performance. For example, a struggling 9th grade reader given only grade-level passages (as on FCAT) is likely to give up and guess, making any prediction of FCAT performance very poor. In the adaptive screen, however, that 9th grader’s performance on the grade-level passage is used to select an easier passage, and possibly an additional easier passage, to improve the prediction of the current year’s FCAT.
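The adaptive step described above can be sketched as a simple branching rule: poor performance on one passage routes the student to an easier one, strong performance to a harder one. The thresholds and one-level step size below are illustrative assumptions, not the operational FAIR routing rules.

```python
def next_passage_level(current_level, proportion_correct,
                       step_down=0.5, step_up=0.85):
    """Choose the grade level of the next passage in an adaptive screen.

    The 0.5 and 0.85 cut-offs and the one-level step are hypothetical;
    a real adaptive test would route on an IRT ability estimate.
    """
    if proportion_correct < step_down:
        return current_level - 1   # struggling: offer an easier passage
    if proportion_correct > step_up:
        return current_level + 1   # succeeding: offer a harder passage
    return current_level           # stay at the current difficulty
```

Under this rule, a 9th grader answering few grade-level items correctly is routed to an 8th-grade passage, giving the model informative responses instead of guesses.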

A series of logistic regressions is generated to estimate the log-odds of success on the FCAT, using a reading comprehension autoregressor as well as an estimated theta score from the computer-adaptive test. The score from the Reading Comprehension screen correlates with the FCAT at .72, indicating that about 52% of the variance in scores is shared.
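The 52% figure follows directly from squaring the correlation coefficient (the coefficient of determination):

```python
def shared_variance(r):
    """Proportion of variance shared by two measures correlated at r."""
    return r ** 2

# shared_variance(0.72) = 0.5184, i.e. about 52% of the variance in
# FCAT scores is accounted for by the Reading Comprehension screen.
```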




5. Is the reliability coefficient available for the FCAT success probability?
   

Yes. The reliability coefficient will be published in the technical manual available in July 2009.




6. How will over-exposure of items to students be controlled? Will the students see the same passages over and over?
   

FCRR will continue to develop items and complete psychometric work for all assessment tasks K-12. Because students are placed into passages according to their instructional level, they will not be reading the same passages within a year. FCRR has banked additional items for all assessment tasks K-12 and plans to field test and conduct psychometric analyses on new items and passages to avoid exposing students to the same items.




7. Why is there such a wide range within the yellow probability of success zone?
   

FCRR purposely set the probability of grade-level success high (i.e., .85) to reduce the risk of under-identifying students who develop reading difficulties. The flip side of identifying success (i.e., the green zone) is identifying risk (i.e., the red zone). If we raise the threshold for the risk classification, we necessarily under-identify students who truly develop reading difficulties; if we lower it, we increase the likelihood of over-identifying children. The solution is to pay closer attention to the actual probabilities of success – and less to the color zones. A student with an 80 percent chance of grade-level success is very different from one with a 20 percent chance. The reading teacher should use student results on all tasks to inform instructional decisions.
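The width of the yellow zone is easy to see once the cut-offs are written down. The .85 green cut-off comes from the answer above; the red cut-off below is an illustrative assumption, included only to show why two very different students can share a zone.

```python
def success_zone(p, green_cut=0.85, red_cut=0.15):
    """Map a probability of grade-level success to a color zone.

    green_cut = .85 is stated in the FAQ; red_cut = .15 is a
    hypothetical value chosen for illustration.
    """
    if p >= green_cut:
        return "green"
    if p <= red_cut:
        return "red"
    return "yellow"

# Both an .80 and a .20 probability of success land in yellow,
# even though they describe very different students.
```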




8. Is it possible for students taking the 3-12 Assessment to score in the green probability of success zone if they do well on a reading comprehension passage that is below grade-level?
   

It is unlikely that a student could have a .85 or higher probability of success on FCAT if below-grade-level passages were read on the Broad Screen (Reading Comprehension). However, the Broad Screen is an adaptive test, and it is primarily a student’s skill in answering comprehension questions that determines the next passage administered. Thus, it is possible for a student to have a below-grade-level passage in the mix of passages taken yet perform well enough overall to score with a .85 or greater probability of passing the current year’s FCAT.




9. Why was the decision made to report data using unequal interval scores (percentiles) vs. equal interval scores (NCEs)?
   

Score types were selected that are familiar to teachers – raw scores (e.g., error types), percentiles, and standard/scaled scores (with a mean of 100 and a standard deviation of 15). Ability scores (like FCAT’s Developmental Scale Score) are provided to examine growth.




10. Are you releasing the technical specifications for the assessments?
   

Yes. The technical manual should be complete by the end of June 2009, after data from the spring assessments in the implementation study have been received and analyzed.