
FAST K-5 Universal Screener & Reading Fluency Assessment
Explore the FAST K-5 Universal Screener, its significance in identifying at-risk students, and the importance of Oral Reading Fluency (ORF) assessments for predicting reading comprehension. Learn how these tools play a crucial role in early intervention and student performance evaluation.
Presentation Transcript
FAST Matching Game
1. Print 3-4 copies of the PDF file that contains the game pieces.
2. Cut out each question and description.
3. Break out into groups of 3-4.
4. Distribute a full set of game pieces to each group.
5. Discuss as a group which descriptions match the questions in bold.
DOWNLOAD GAME PIECES HERE >
What is FAST? Answer: FAST is a Universal Screener. A Universal Screener is the first step in identifying the students who are at risk for learning difficulties. It is a mechanism for targeting students who struggle to learn when provided scientific, evidence-based general education (Jenkins, Hudson, & Johnson, 2007). How is a screener different from a diagnostic assessment? Answer: A Universal Screener is a general temperature check. A diagnostic assessment (e.g., DDS, BRI, OS, ELA) would be used to gain more in-depth information about the cause of the learning difficulty.
Why do I only progress monitor using one tool? Answer: CBMreading assessments are predictive of risk, quick to administer, and sensitive to growth. Substantial research evidence demonstrates that CBMreading is a robust indicator of reading development and a useful predictor of student performance on state tests. CBMreading is an index of word reading efficiency, which is an important ability that facilitates reading comprehension. Students who read a grade-level passage with efficiency are better able to use their cognitive resources to comprehend while reading. (fastbridge.org/cbmreading/)
What is Oral Reading Fluency (ORF)? Answer: Oral reading fluency is the ability to read connected text quickly, accurately, and with expression. In doing so, there is no noticeable cognitive effort that is associated with decoding the words on the page. Oral reading fluency is one of several critical components required for successful reading comprehension. Students who read with automaticity and have appropriate speed, accuracy, and proper expression are more likely to comprehend material because they are able to focus on the meaning of the text.
Why is Oral Reading Fluency an Important Skill to Assess? Answer: A student's level of verbal reading proficiency has a 30-year evidence base as one of the most common, reliable, and efficient indicators of student reading comprehension (Reschly, Busch, Betts, Deno, & Long, 2009; Wayman, Wallace, Wiley, Tichá, & Espin, 2007). When used as a predictor of higher-stakes reading comprehension tasks, an assessment of oral reading fluency performs as well as or better than many other comprehensive tests of reading (see Baker et al., 2008). Because reading fluency tasks are designed to be brief, reliable, and repeatable, they serve well as tools for universal screening for early intervention across Grades 1-6 (Reschly et al., 2009). Reading fluency tasks are also used for monitoring the progress of individual students who are at risk for later detrimental reading outcomes. Curriculum-Based Measurement of oral reading (CBM-R) is a universal term that encompasses multiple types of oral reading fluency assessments (e.g., aimsweb.com; dibels.uoregon.edu; easyCBM.com; edcheckup.com; fastforteachers.org; isteep.com). Taken together, measures of CBM-R are some of the most widely used and researched tools in educational assessment for screening and progress monitoring (Graney & Shinn, 2005). Any CBM-R set is typically represented by a standardized set of passages designed to identify students who may require additional support (through universal screening) and to monitor progress toward instructional goals. A student's current level of performance is measured by the number of words read correctly in one minute and typically also includes the accuracy of the reading expressed as a percentage. When CBM-R is used as a screening tool, it is most commonly administered to students at three different time points during the school year.
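The two CBM-R outcomes described above (words read correctly per minute, plus accuracy as a percentage) reduce to simple arithmetic. A minimal illustrative sketch, not FAST's actual scoring code; the function name and inputs are hypothetical stand-ins for what a scorer would tally during the one-minute administration:

```python
# Illustrative sketch (not FAST's scoring implementation): computing the two
# CBM-R outcomes from a one-minute reading sample.

def cbmr_scores(words_attempted: int, errors: int) -> tuple[int, float]:
    """Return (words read correctly per minute, accuracy percentage)."""
    words_correct = words_attempted - errors
    accuracy_pct = 100.0 * words_correct / words_attempted if words_attempted else 0.0
    return words_correct, round(accuracy_pct, 1)

# Example: a student attempts 112 words in one minute with 4 errors,
# yielding 108 words correct per minute at 96.4% accuracy.
wcpm, accuracy = cbmr_scores(112, 4)
print(wcpm, accuracy)
```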
Where Can I Find Information About Evidence-based Practices in Building Oral Fluency? Answer: The What Works Clearinghouse (WWC) reviews the research base for several programs and interventions, and uses the following eligibility criteria when identifying studies to review: (i) the study is published within the last 20 years; (ii) it includes a primary analysis of the effect of an intervention; and (iii) it is a randomized controlled trial, quasi-experimental, regression discontinuity, or single-subject design type. Studies that do not meet criteria are often excluded because they do not use a comparison group, the study was not conducted within the time frame specified in the protocol, or the study does not provide adequate information about the design. To search for a review of fluency-based interventions completed by the WWC, use the following link: http://ies.ed.gov/ncee/wwc/findwhatworks.aspx. In Table 1, we display the results of a recent (summer 2013) search for peer-reviewed oral fluency interventions, including the level of evidence supporting the intervention.
What is CBM-Reading? Answer: CBMreading (Curriculum-Based Measurement for Reading) is an evidence-based, one-minute assessment used for universal screening in English or Spanish (Grades 1-8), and for frequent progress monitoring (Grades 1-12). It is a simple and efficient procedure. FAST provides benchmark targets for performance to help identify students at risk for academic failure. (fastbridge.org/cbmreading/)
Frequently Asked Questions Isn't this just about seeing how fast students read? Answer: No. CBM-Reading is intended to solicit a sample of the student's BEST reading. This is not a speed-reading test. Speed reading without grade-appropriate prosody (intonation, pauses) violates standardization. CBM-Reading is intended to provide a sample of student behavior for teacher observation. The primary outcomes are BOTH accuracy and rate of correct reading performance. Together, this sample of BEST reading indicates how well the student has automatized the written cipher.
Frequently Asked Questions Why would I do this over doing a Running Record? Answer:
- Running records take too long; a large and extended sample of reading behavior is not necessary.
- They offer too few or inadequate forms; progress monitoring requires parallel alternate forms.
- There is very little evidence on their technical adequacy.
- Running records have a different purpose (as do DRA and F&P): they provide in-depth and often qualitative information about strengths and weaknesses, and might be useful for skills analysis and instructional design.
Helgren-Lempesis, V. A., & Mangrum, C. T. (1986). An analysis of alternate-form reliability of three commercially-prepared informal reading inventories. Reading Research Quarterly, 21, 209-215. Pikulski, J. (1974). A critical review: Informal reading inventories. The Reading Teacher, 28, 141-151. Spector, J. E. (2005). How reliable are informal reading inventories? Psychology in the Schools, 42(6), 593-603.
Frequently Asked Questions Does this assess comprehension? Answer: No, CBM-Reading does not provide a direct measure of reading comprehension. But published research indicates that:
1. CBM-Reading indicates comprehension (Shinn et al., 1992; Marcotte & Hintze, 2009).
2. Automaticity is a prerequisite to reading comprehension; it frees up cognitive resources to focus on comprehension (Slocum et al., 1995).
3. Comprehension enables reading rate/fluency (Jenkins et al., 2003).
Jenkins, J. R., Fuchs, L. S., van den Broek, P., Espin, C., & Deno, S. L. (2003). Sources of individual differences in reading comprehension and reading fluency. Journal of Educational Psychology, 95(4), 719. Marcotte, A. M., & Hintze, J. M. (2009). Incremental and predictive utility of formative assessment methods of reading comprehension. Journal of School Psychology, 47, 315-335. Shinn, M. R., Good III, R. H., Knutson, N., Tilly III, W. D., & Collins, V. L. (1992). Curriculum-based measurement of oral reading fluency: A confirmatory analysis of its relation to reading. School Psychology Review, 21(3), 459-479. Slocum, T. A., Street, E. M., & Gilberts, G. (1995). A review of research and theory on the relation between oral reading rate and reading comprehension. Journal of Behavioral Education, 5(4), 377-398.
Frequently Asked Questions What about word callers? Answer: A "word caller" is a reader with a sufficient reading rate but without good comprehension. Published research indicates that teachers over-identify word callers (93% of nominations are incorrect) and that VERY few students actually are word callers (about 2% of students at 3rd grade and 10% of students in 5th grade). Hamilton, C., & Shinn, M. R. (2003). Characteristics of word callers: An investigation of the accuracy of teachers' judgments of reading comprehension and oral reading skills. School Psychology Review, 32(2), 228-240. Meisinger, E. B., Bradley, B. A., Schwanenflugel, P. J., Kuhn, M. R., & Morris, R. D. (2009). Myth and reality of the word caller: The relation between teacher nominations and prevalence among elementary school children. School Psychology Quarterly, 24(3), 147-159. Valencia, S. W., & Buly, M. R. (2004). Behind test scores: What struggling readers really need. The Reading Teacher, 57(6), 520-531.
Frequently Asked Questions What happens if the student finishes early? Answer: Not a problem; stop the timer and mark the last word. The student's score is prorated by the software.
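The proration the software performs can be sketched as scaling the words-correct count to a full 60-second rate. This is an assumed formula for illustration; FAST's exact implementation may differ:

```python
# Hedged sketch of score proration for a student who finishes the passage
# before the minute is up (assumed formula: scale words correct to a
# 60-second rate).

def prorated_wcpm(words_correct: int, seconds_elapsed: float) -> float:
    """Scale a score from a shortened administration to a per-minute rate."""
    return words_correct * 60.0 / seconds_elapsed

# A student who reads the whole passage (90 words correct) in 45 seconds
# is credited with 120 words correct per minute.
print(prorated_wcpm(90, 45))
```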
Frequently Asked Questions Our last CBM tool included passages that seemed to vary significantly from one to the next, making it difficult to interpret progress. How are the FAST passages determined to be consistent from one to the next? Answer: The US Department of Education funded a four-year study to examine this issue and improve the quality of passages. The FAST passages were extensively field tested and are highly similar, to minimize variability due to instrumentation. Published studies document this (Ardoin & Christ, 2009; Christ & Ardoin, 2009). Student scores will still vary across occasions; highly standardized conditions (quiet, clear directions, consistent time of day, consistent student motivation) will reduce variability. Ardoin, S. P., & Christ, T. J. (2009). Curriculum based measurement of oral reading: Standard errors associated with progress monitoring outcomes from DIBELS, AIMSweb, and an experimental passage set. School Psychology Review, 38(2), 266-283. Christ, T. J., & Ardoin, S. P. (2009). Curriculum-based measurement of oral reading: Passage equivalence and probe-set development. Journal of School Psychology, 47, 55-75.
Frequently Asked Questions If we've screened and found a student to be above the Benchmark score for low risk, do we need to screen again that year (or ever)? Why/why not? Answer: Yes. High-functioning students have the right to benefit from instruction and continue with their progress. The teacher and screening system must consider all students: students on both ends of the spectrum are at risk of not benefiting from core instruction, which is often targeted at typically developing students. Teachers and students should all have goals, and ongoing monitoring at least three times per year will inform progress toward those goals.
Frequently Asked Questions Are these passages written at grade level? How is that determined? Answer: Passages were written below the Lexile (readability) band that typically defines grade-level material. Why? The evidence supports equivalent validity and reliability for high- and low-difficulty passages. Less difficult passages ensure a larger sample of reading behavior (i.e., teachers observe students reading more words). Less difficult passages also ensure accessibility for less skilled readers, who are those most frequently monitored. Automaticity of reading skills is most closely related to high-frequency words, phrases, and patterns that are familiar to most readers.
Frequently Asked Questions Do the passages use fiction or non-fiction? Why? Answer: All of the passages are narrative fiction. Why? The passages are highly controlled to ensure that performance across passages is comparable, which requires a consistent text structure. The passages include controls for:
- Decodability of words
- Frequency of words
- Goal-Action-Outcome story structure
There is equivalent reliability and validity across fiction and nonfiction, and across informational and narrative story structures.
Frequently Asked Questions If I have a struggling student, can I give him/her a lower grade level passage? Answer: Whenever possible, monitor the student in grade-level passages; the goal for the student is to meet grade-level standards. If a student reads fewer than 10 words on grade level, then it might be appropriate to monitor them with first-grade probes, which have more controlled text that includes shorter sentences and many more decodable and high-frequency words.
Frequently Asked Questions What accommodations can be used for students with disabilities? Why/why not? Answer: The application of benchmarks and norms requires standardized administration. We have yet to do research on modifications or accommodations, so consider these carefully before they are used. If they are used, be sure to include a description of those procedures whenever the score is reported or used to guide instruction.
Frequently Asked Questions I notice students do better if they can read the passages silently, first. Is this OK? Answer: No. These are standardized cold readings, which requires that we measure the student's performance during their initial reading. We use the same screening passages all year, and these are still cold readings: evidence suggests that practice effects on CBM-Reading materials subside after approximately four weeks, which supports the use of the same screening passages three to four times per year. Hot readings (with practice) invalidate the assessment.
Frequently Asked Questions Can parents have a copy of the passages to take home? Can we share the passages with others? Answer: The passages should not be shared in a manner that would result in any student having exposure to the passages outside of testing purposes at school. This is to ensure that the data you collect are accurate and not influenced by practice effects. Similarly, the passages are licensed to your school system; sharing with others in your community or other schools may violate the terms of that agreement. Generally speaking, it should be made extremely clear to all staff and parents that the passages are not to be used for practice or other purposes.
Frequently Asked Questions Why use composite benchmarks? Answer: Composites combine scores from an optimal set of measures, which provides a more complete score that better represents broad reading. Benchmarks are designed to predict student performance at or about the 15th and 40th percentiles on nationally normed assessments and state tests, including:
- Gates-MacGinitie Reading Test
- Group Reading Assessment and Diagnostic Evaluation (GRADE)
- Minnesota Comprehensive Assessment III
- Pending: Georgia, Iowa, Massachusetts
Frequently Asked Questions How are the benchmark / cut scores determined? Answer: Benchmarks are selected to optimize correct classification: we try to predict whether students are likely to perform above the 40th percentile on nationally normed assessments in the spring (end of year). See the discussion in the CBM-Reading and aReading materials. earlyReading benchmarks include composites, which improve classification (prediction) accuracy.
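"Optimizing correct classification" can be illustrated with a toy example: for each candidate cut score, count how often screening status (at or above the cut) agrees with the criterion outcome (at or above the 40th percentile on the end-of-year test), and keep the cut that agrees most often. The data and function below are invented for illustration, not FAST's actual procedure or norms:

```python
# Illustrative sketch of choosing a benchmark cut score to maximize
# correct classification. Scores and outcomes are made-up example data.

def correct_classification_rate(scores, outcomes, cut):
    """Fraction of students whose screen result (score >= cut) matches
    their later outcome (True = at/above the 40th percentile)."""
    hits = sum((s >= cut) == o for s, o in zip(scores, outcomes))
    return hits / len(scores)

scores   = [42, 55, 61, 70, 88, 95, 103, 120]                    # fall screening
outcomes = [False, False, False, True, True, True, True, True]   # spring criterion

# Try each observed score as a candidate cut and keep the best one.
best_cut = max(set(scores), key=lambda c: correct_classification_rate(scores, outcomes, c))
print(best_cut, correct_classification_rate(scores, outcomes, best_cut))
```

In this toy data a cut of 70 classifies every student correctly; real benchmark-setting also weighs the relative costs of false negatives (missed at-risk students) against false positives.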
Frequently Asked Questions Benchmarks and Composites [slides: charts of benchmark scores by grade for the Composite and Letter Sounds measures, including a Winter administration]
Frequently Asked Questions What makes up a composite? Answer: A weighted combination of the required subtests, with each subtest weighted High, Moderate, or Low. The weights change each season (see next slide).
Composite: Composition [Table: earlyReading subtest weights (H = High, M = Moderate, L = Low) for each season (Fall, Winter, Spring) in Kindergarten and First Grade. Subtests: Concepts of Print, Onset Sounds, Letter Names, Letter Sounds, Word Segmenting, Nonsense Words, Sight Words, Sentence Reading, CBMReading. Benchmark scores for the composites: TBD.]
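A weighted combination of subtests reduces to a weighted sum. The sketch below uses hypothetical weights and scores (H/M/L mapped to 3/2/1 purely for illustration; the actual seasonal weights and scaling are defined by the table above and by FAST's norming):

```python
# Minimal sketch of a weighted composite score. Weights and scores here
# are hypothetical; FAST's actual weights vary by grade and season.

def composite(subtest_scores: dict, weights: dict) -> float:
    """Weighted sum of the required subtest scores."""
    return sum(weights[name] * subtest_scores[name] for name in weights)

# Hypothetical kindergarten fall example with H=3, M=2, L=1:
weights = {"Onset Sounds": 3, "Letter Names": 2, "Concepts of Print": 1}
scores  = {"Onset Sounds": 12, "Letter Names": 20, "Concepts of Print": 8}
print(composite(scores, weights))  # 3*12 + 2*20 + 1*8 = 84
```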
Frequently Asked Questions Should I interpret sub-test scores? Answer: Yes, consider performance on both the individual tests and the composite. Letter Sounds (LS) is often used as the general indicator in Kindergarten, and CBM-Reading is often used as the general indicator in 1st grade. The other scores round out our estimate of generalized reading achievement.
Frequently Asked Questions Why are some scores not timed? Answer: Careful analysis helped the developers determine the skills for which automaticity is important. Automaticity did not seem important for some skills, and for those it was simpler to eliminate timing from the administration.
10 Myths in 10 Minutes

Myth 1: FAST reading fluency is about asking students to read fast.
Fact: FAST instructs students to do their BEST reading. If they speed read, the instructions are to start over. Asking students to read fast vs. best is like shaking your wearable device: your data may look better, but they aren't accurate! Fluency and automaticity in each underlying early literacy skill is a critical component of becoming a skilled reader.

Myth 2: FAST passages are too easy.
Fact: Passages were written below the Lexile (readability) band that typically defines grade-level material. Less difficult passages ensure a larger sample of reading behavior (i.e., teachers observe students reading more words). Automaticity of reading skills is most closely related to high-frequency words, phrases, and patterns that are familiar to most readers.

Myth 3: FAST benchmarks are too high.
Fact: FAST passages are highly controlled and the benchmarks were set to predict reading outcomes. Every test you use has different targets; FAST is no different. If you doubt a benchmark, try reading a passage doing your best reading and time yourself for a minute.
Myth 4: Fluency passages are only for students who need a fluency intervention.
Fact: Assessment is different than instruction. Fluency is still the best indicator we have for overall reading growth, regardless of the focus of instruction. Intervention should always be matched to the student's instructional need. (Fluency is not an appropriate progress monitoring measure once a student has met the benchmark and is an accurate reader.)

Myth 5: Students who score below target on Nonsense Word Fluency need a nonsense word intervention.
Fact: Don't teach nonsense words! They are not a practical skill. We use them for assessment to isolate the skill we want to know about (letter-sound decoding). This applies to each of the FAST assessments: we teach to the Iowa Core, NOT to the assessment!

Myth 6: CBM does not directly measure comprehension.
Fact: CBM-Reading does not provide a direct measure of reading comprehension, but automaticity is a prerequisite to reading comprehension, freeing up cognitive resources to focus on meaning, and CBM-R is highly predictive of outcomes on general reading tests, including comprehension assessments.

Myth 7: There isn't a progress monitoring tool for my comprehension intervention.
Fact: Oral Reading Fluency is an indicator of overall reading skill and is very sensitive to growth. That makes it a great progress monitoring tool! It's not intended to be the only skill taught; rather, it is the indicator that instruction is or is not meeting the needs of the learner. It's akin to taking your temperature or monitoring calorie intake.

Myth 8: Fluency doesn't matter if a child has comprehension skills.
Fact: Fluency and comprehension have a reciprocal relationship, with each greatly influencing the other. When a child is a fluent reader, she reads more text and gets more access to vocabulary and comprehension opportunities. As a child gets older, he needs to read more text. With any deficit in reading, an intervention should be targeted.

Myth 9: Universal screening identifies too many learners for interventions.
Fact: Universal screening benchmarks are set to predict later reading success. When learners are only slightly at risk, providing a little more practice and ensuring that previous skills are mastered may PREVENT larger reading gaps in the future.

Myth 10: FAST assessments are used for high-stakes decisions (e.g., summer school and retention).
Fact: FAST measures alone should not be used for summer school and retention decisions. The use of any single indicator of competence to make important decisions, such as child retention, teacher evaluation, or funding, violates professional standards of measurement (AERA, 1999; APA, 1999).

Myth 10 (+1): There isn't a progress monitoring measure for aReading.
Fact: aReading is a screening assessment. The grade-level PM measures are still great indicators of reading growth when a child is identified with needs through aReading.

Myth 10 (+2): Progress monitoring weekly is too often.
Fact: Progress monitoring frequency should depend on the sensitivity of the measure to change and on how often decisions need to be made with the data. Sensitivity: FAST PM measures are highly sensitive to small changes in student skill. Frequency: we hear that schools want to make instructional changes very frequently when students aren't making sufficient progress in an intervention.
It's about time. THANK YOU! www.FastForTeachers.org