Oral Reading Fluency (ORF) Norms

Dr. Gerald Tindal

Published on: September 1, 2023

The first study establishing oral reading fluency (ORF) ‘norms’ was conducted over 100 years ago. Specifically, in 1915, Starch asserted, “In every branch of instruction in the public schools we need a definite standard of attainment to be reached at the end of each grade. If we had such standards and if we had adequate means of precisely measuring efficiency, it would be possible for a qualified person to go into a schoolroom and measure the attainment in any or all subjects and determine on the basis of his measurements whether the pupils are up to the standard, whether they are deficient, and in what specific respect…”[1] (p. 14). ORF data were obtained from 3,511 pupils in 15 schools in seven cities located in three states (Wisconsin, Minnesota, and New York) and served as the standards of attainment (see the table below).

Nevertheless, this measure never really took hold until Dr. Stan Deno began a program of research and development on it at the Institute for Research on Learning Disabilities. This work was based on the conceptual model of Data-Based Program Modification, in which brief measures of skill performance could be captured by teachers to monitor progress and adjust instructional programs as needed. The initial idea was to treat oral reading fluency (ORF), along with measures in writing, spelling, and math, as akin to a weight-height developmental measure of ‘normal’ growth in these skills. To establish these measures as technically adequate, a series of studies was conducted, documenting relations with other established measures of achievement (criterion validity, both concurrent and predictive), sampling plans from various curricula (content validity), and practical aspects (amount of time, change over time, etc.). With this original research validating and supporting ORF, systemic adoption of these measures, which came to be known as curriculum-based measures (CBM), was implemented by Gary Germann in the Pine County (MN) educational cooperative. The basic process involved all students taking ORF measures in the fall to determine which students were at risk of reading problems and then intervening, with alternate forms administered throughout the school year to monitor progress. All of this foundational work was completed in the early 1980s. Today, this system is referred to as Response to Intervention (RTI) or Multi-Tiered Systems of Support (MTSS). The foundation involved identifying a large, representative sample of students and distributing their performance on the ORF scale (number of words read correctly in one minute). Hence the use of the term ‘norms’.
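To make the underlying metric concrete, the following is a minimal sketch of how a one-minute ORF probe is typically scored as words correct per minute (WCPM); the function name and the proration step for intervals that are not exactly 60 seconds are illustrative assumptions rather than part of any particular published system.

    def words_correct_per_minute(words_read: int, errors: int, seconds: float = 60.0) -> float:
        """Score an oral reading fluency (ORF) probe as words correct per minute."""
        # Words read correctly = total words attempted minus errors (never below zero).
        words_correct = max(words_read - errors, 0)
        # Prorate to a one-minute rate if the timed interval was not exactly 60 seconds.
        return words_correct * 60.0 / seconds

    # Example: a student reads 112 words in one minute with 7 errors -> 105.0 WCPM.
    print(words_correct_per_minute(112, 7))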

It quickly became apparent that ORF scores were indeed approximately normally distributed, forming a bell-shaped curve. In this type of distribution, demarcations are typically made at the middle (the 50th percentile, with half the scores below and half above). This median is different from the mean (the average), which is simply the sum of all scores divided by the number of scores; in a normal distribution, the median and mean are equal. In the research and development that followed, several different formalized assessment systems were established. Rather than simply sampling passages at random from the curriculum, which became a bit tedious, controlled passages were developed. Eventually, AIMSweb© grew out of the Pine County Educational Cooperative, followed by DIBELS© (which eventually also split off to form Acadience Learning©), easyCBM©, and CBM-Reading with FastBridge Learning©. Over the years, several other publishers have also developed their own passages and entered the marketplace.
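To illustrate the statistics involved, here is a small sketch using made-up WCPM scores (not actual norm data) showing that the mean and the median (the 50th percentile) nearly coincide when a distribution is roughly normal.

    import statistics

    # Hypothetical fall WCPM scores for a small group of students (illustrative only).
    scores = [38, 52, 61, 70, 74, 79, 83, 88, 95, 104, 118]

    mean_wcpm = statistics.mean(scores)      # sum of all scores divided by the number of scores
    median_wcpm = statistics.median(scores)  # 50th percentile: half the scores below, half above

    # For a symmetric, normal-like distribution the two values are nearly equal.
    print(f"mean = {mean_wcpm:.1f} WCPM, median = {median_wcpm:.1f} WCPM")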

By the early 1990s, the question of comparability began to take hold: How similar were the performance levels of students across these different ORF measurement systems? In 1992, Hasbrouck and Tindal analyzed the available measures to publish ‘norms’ because “no large-scale norms currently existed for oral reading fluency…so data were collected from 1981 through 1990 from 7,000 to 9,000 students in grades 2 through 5 in five midwestern and western states. These data were compiled to establish large-scale ORF norms” (pp. 42-43)[2].

This normative base was updated in 2006, again by Hasbrouck and Tindal[3]. As they noted, national norms for ORF performance could aid teachers in assessing and monitoring students’ reading progress because ORF is a stable and useful construct for informing several instructional decisions. The 2006 publication highlighted the usefulness of such norms for four assessment purposes: screening, diagnosis, progress monitoring, and outcome documentation.

• ORF assessments can play a role in screening, with ORF norms providing percentile scores for various grade levels and times during the school year. They offer a valuable tool for making instructional or placement decisions based on students’ performance. As a screening tool, ORF serves much like a thermometer used by physicians to gauge a patient’s temperature. Just as a temperature reading provides valid and reliable information about health, a fluency-based screening measure rapidly offers valuable insight into a student’s academic “health” or “illness.” However, like a thermometer reading, it is only one indicator among many. Hasbrouck and Tindal also noted that screening measures are designed to be efficient and quick, although they can sometimes over- or under-identify students needing assistance. Furthermore, variance in scores may arise from uncontrollable factors, such as passage familiarity or imprecise timing of the one-minute interval. (A minimal sketch of a percentile-based screening decision appears after this list.)

• For diagnosis, fluency measures are analogous to medical assessments in determining causes of reading difficulties. Teachers examine various components of reading, including prosody, phonics, vocabulary knowledge, and comprehension. Although instructional plans need to move beyond just fluency measures to be effective, ORF can provide key diagnostic insights into these other components.

• Progress monitoring, akin to tracking patient recovery, assesses students’ reading advancement. For students reading at or above expected levels, periodic assessments help gauge if they continue to meet benchmarks. Those below grade level may need to receive more frequent assessments, often depicted graphically to interpret progress. They noted that, while fluency is crucial for reading proficiency, educators should keep it in perspective, not overemphasizing fluency scores but recognizing their role in aiding reading instruction.

• Overall, outcomes can be documented, with ORF playing a crucial role in identifying students in need of assistance, diagnosing issues, and monitoring progress to ensure effective reading instruction. Systems-level data can also be collected to ensure appropriate resources are allocated in a strategic manner.
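To show how such norms might be used for a screening decision, here is a minimal sketch; the percentile values and the 25th-percentile cut point are hypothetical placeholders, not figures from the published norm tables.

    import bisect

    # Hypothetical fall WCPM values at selected percentiles for one grade (illustrative only).
    fall_norms = {10: 41, 25: 59, 50: 83, 75: 104, 90: 124}

    def percentile_band(wcpm: int) -> str:
        """Describe where a student's WCPM falls relative to the listed percentiles."""
        percentiles = sorted(fall_norms)               # [10, 25, 50, 75, 90]
        values = [fall_norms[p] for p in percentiles]  # [41, 59, 83, 104, 124]
        i = bisect.bisect_right(values, wcpm)          # count of percentile values at or below wcpm
        return f"at or above the {percentiles[i - 1]}th percentile" if i else "below the 10th percentile"

    def screening_decision(wcpm: int, cut_value: int = fall_norms[25]) -> str:
        """Flag students whose WCPM falls below a chosen percentile cut point (here, the 25th)."""
        return "flag for further assessment" if wcpm < cut_value else "on track"

    print(percentile_band(72), "->", screening_decision(72))  # between the 25th and 50th percentiles; on track
    print(percentile_band(48), "->", screening_decision(48))  # between the 10th and 25th percentiles; flagged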

Then, in 2017, another update was compiled by Hasbrouck and Tindal[4]: “These newest updated ORF norms were ultimately compiled from three assessments: DIBELS 6th Edition© (using data from 2009‐2010), and DIBELS Next© (using data from 2010‐2011), and easyCBM© ORF assessment. The easyCBM© data were from the 2013-2014 school year” (p. 8). This sample included more than 2 million students assessed in each season: fall, winter, and spring.

The final study of importance was conducted by the National Assessment of Educational Progress (NAEP) in 2018, with the following main findings:

  • A positive relation was documented between performance on NAEP reading and (a) oral reading fluency, (b) word reading, and (c) pseudoword reading.
  • As students performed at successively lower achievement levels (basic high, medium, and low), their fluency decreased.
  • Among students below basic, every aspect of oral reading fluency varied: speed, accuracy, and expression.
  • Significant percentages of students from different racial groups fell into the below-basic low subgroup, with this percentage greater for Black and Hispanic students.

In summary, five large-scale studies have been completed on ORF normative performance. These studies were conducted over decades, with diverse samples of students across several grade levels and with passages sampled from a variety of sources. Yet the results are remarkably similar, particularly if the amount of normal variation is considered (about 30 words correct per minute). In the table below, we have drawn out only the 50th percentile rank and displayed the successive values for each publication.

Source                                     GR 1   GR 2   GR 3   GR 4   GR 5
1915 Starch publication                      90    108    126    144    168
1992 publication                              -     94    114    118    128
2005 BRT Tech Report / 2006 publication      59     89    107    125    138
2017 BRT Tech Report                         60    100    112    133    146
2018 NAEP                                     -      -      -    120      -

By taking the median of these values of words correct per minute (WCPM), with minor rounding to facilitate ease of use, the following grade-level performances can be used (the arithmetic is sketched after the list):

  • Grade 1 – 60 WCPM
  • Grade 2 – 90 WCPM
  • Grade 3 – 115 WCPM
  • Grade 4 – 125 WCPM
  • Grade 5 – 140 WCPM
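The list above follows from a straightforward median calculation over the 50th-percentile values in the table, with the rounding noted above applied afterward. A minimal sketch of that arithmetic is shown here; the dictionary of lists simply restates the table, with missing cells omitted rather than treated as zeros.

    import statistics

    # 50th-percentile WCPM values from the table above, by grade; missing cells are omitted.
    p50_by_grade = {
        1: [90, 59, 60],
        2: [108, 94, 89, 100],
        3: [126, 114, 107, 112],
        4: [144, 118, 125, 133, 120],
        5: [168, 128, 138, 146],
    }

    for grade, values in p50_by_grade.items():
        # Median across the published studies; the benchmarks listed above then round
        # these values for ease of use.
        print(f"Grade {grade}: median = {statistics.median(values)} WCPM")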

[1] Starch, D. (1915). The measurement of efficiency in reading. The Journal of Educational Psychology, 6(1), 1-24.

[2] Hasbrouck, J., & Tindal, G. (1992). Curriculum-based oral reading fluency norms for students in grades 2 through 5. Teaching Exceptional Children, 24(3), 41-44.

[3] Hasbrouck, J., & Tindal, G. (2006). Oral reading fluency norms: A valuable assessment tool for reading teachers. The Reading Teacher, 59(7), 636-644. doi:10.1598/RT.59.7.3

[4] Hasbrouck, J., & Tindal, G. (2017). An update to compiled ORF norms (Technical Report No. 1702). Eugene, OR: Behavioral Research and Teaching, University of Oregon.


Dr. Gerald Tindal

Dr. Tindal is currently Professor Emeritus and Director of Behavioral Research and Teaching (BRT) at the University of Oregon. He is the former Castle-McIntosh-Knight Professor in the College of Education and past Department Head of Educational Methodology, Policy, and Leadership. His research focuses on alternate assessments, integrating students with disabilities in general education classrooms, curriculum-based measurement for screening students at risk, monitoring student progress, and evaluating instructional programs. Dr. Tindal conducts research on large-scale testing and the development of alternate assessments. This work includes investigations of teacher decision-making on test participation, test accommodations, and extended assessments of basic skills. He publishes and reviews articles in many special education journals and has written extensively on curriculum-based measurement and large-scale testing. He has also taught scores of courses on assessment systems, data-driven decision-making, research design, and program evaluation.
