Monday, February 25, 2008

DIBELS: Arguments

List what you understand Ken Goodman’s five most critical points to be and the evidence used to support these points. (Please answer from your point of view, not his.)

1. DIBELS actually harms children’s ability to learn to read meaningfully

Anecdotal evidence (including parent and teacher feedback) is given documenting the harm that DIBELS can cause children. In the text, a parent describes in great detail what happened to her child when he was given DIBELS in kindergarten. Her son was categorized as a problem reader and was recommended to repeat a grade level. The mother knew her son was a thoughtful and careful learner and understood, without any training, that the test scores did not accurately reflect his reading ability. The experience was so traumatic that she decided to homeschool her son rather than keep him in a school system that would not accurately assess his skills and abilities. Ken Goodman asks the reader to question the usefulness of an assessment instrument that labels a child who is thoughtful and reflective about reading as a poor reader simply because he cannot read quickly or decode nonsense words.

2. DIBELS is out of alignment with what it claims to measure

Alignment is a key component in the validity of any assessment instrument. Does the assessment measure what it claims to measure? In the case of DIBELS, the stated purpose of the tests is to assess student skills in five areas: Phonemic Awareness, Alphabetic Principle, Fluency with Text, Vocabulary, and Comprehension. The underlying assumption is that each of these skills builds on the previous one (and predicts success in the next) and that learning to read is a linear process. We begin instead with the premise that learning to read means a student is learning to make meaning from text, and that process is not linear but holistic in nature. Because DIBELS measures speed and accuracy in the five skill areas, it does not effectively measure a student’s ability to make meaning from text, nor does it take into account how people actually learn to read. The tests are therefore out of alignment with their stated goals and are thus invalid.

3. DIBELS encourages students to become fast readers who are not good meaning-makers

When students reach us in college, we often see that they are not readers. They don’t enjoy reading, and they don’t know how to successfully read complex academic texts. After learning where students have been on their way to college in terms of their reading, this is no longer surprising. The Goodman text and video give many examples of the ways in which the DIBELS tests, by their very nature, lead test-takers to conclude that the goal is to score well on the tests. To score well, they need to be able to read quickly and accurately. The tests do not, however, encourage students to read carefully, with reflection, or for meaning. So the act of reading becomes a game of sorts (although a high-stakes one!) in which participants are engaged at a low cognitive level: get it done, get it done fast, and be accurate. Move up those skill levels! This comes instead of reinforcing messages (by way of activities, emphasis, and assessments) that encourage deeply engaged meaning-makers who are developing an understanding of why we read, the skills to read well, and an enjoyment of reading.

4. DIBELS encourages teaching to the test and curricular development based on the tests, rather than encouraging curricular development that has skilled readers as its outcome.

Because the data from DIBELS are tied to NCLB initiatives, funding for schools is also tied to them. The scores, then, wield incredible power. This dynamic flips good instructional and curricular design on its head. Not only does it take time away from instruction as teachers spend more and more time on testing, but it encourages a teach-to-the-test mentality because the stakes for schools and teachers are so high. If schools want funding, they need data to document student progress. DIBELS provides a slick package that on the outside looks like it can deliver both the data and a logical rationale for instruction and intervention (but we now understand the myriad reasons why the tests do neither well).

In order to show student progress (and ultimately to get the needed funding), the logical conclusion is to prepare students to do well on the tests. Curriculum design therefore gets hijacked down this path (which wouldn’t be all bad IF the path led students to become effective readers). The problem, however, is that the tests do NOT accurately determine whether students are good meaning-makers in relation to text, and so the logical outcome is that students come out the other end of the educational system doing what DIBELS leads them to do: read quickly and accurately (but perhaps without meaning).

Assessment, when done properly, provides data that can inform both teaching and curricular design decisions. That means the assessment instruments used must be accurate, valid, and reliable. The goal of any instruction should be to help students reach the stated learning outcomes, which in this case means becoming good readers (i.e., meaning-makers). Assessment is one tool that teachers can use to help students reach those objectives, but it should not be the only tool, nor should it become the focus of instruction.

5. DIBELS data gatherers assume that testers will be consistent in scoring and that they are able, in just a few minutes, to assess students’ strengths and weaknesses as readers.

Reading well requires such a complex combination of skills that I am wary of any testing system that describes itself as fast, easy, and reliable. The other issue I take with this testing system is the impact that the data have on children. If they don’t score well, then educational decisions are made that can affect them for their entire educational careers: being labeled as slow readers, feeling like they are not good readers because they read slowly and thoughtfully, or being held back a grade because they don’t score well on the tests. Publicly posted scores lead children to focus on the test scores themselves and not on what they might represent (and in this case what they represent is false anyway). Children don’t know the difference. They only know their score in relation to others and what that says about them.

If testers are not properly trained to score the tests consistently, then all the efforts to compare data across states are invalid. I clearly remember when our literacy center (ESL) got a new test series. All those who would be scoring the tests (which were used for placement only) met to work through how we would score the exams. We worked long and hard at bringing our scoring into consistency across scorers, and this paid off because we ended up with very effective placement into our leveled courses, which we hadn’t had before. The tests were used again as we tried to determine when students should move up a level. I have to say, however, that even as we scored (and got data), our guts were telling us, more often than not, almost immediately where students should be placed. So any test that takes the experienced teacher’s judgment out of the equation is suspect to me. In our case, the test scores acted more as a confirmation than anything else.

Part B:

What are the five most powerful arguments opposing his points and the evidence that is used to support these points? (Please answer from your point of view.) I wrote this from the point of view of someone supporting DIBELS.

The availability of statistically meaningful data supports DIBELS use

As of the printing of the Goodman text, data for over 2,000,000 students had been gathered, a very substantial sample size. In addition, the authors of DIBELS seek to show through these data that the outcomes of the tests are both reliable and valid.

The goals for using DIBELS match those set out by No Child Left Behind mandates (based on the National Reading Panel’s big five sub-skills).

In order to continue to receive federal funding, schools must document that they are able to get the majority of their students to show satisfactory progress in different categories, and reading is one of those key areas. So schools need a way to gather data and show progress in reading. DIBELS comes along and purports to give schools the ability to document student progress in reading in a very short amount of time and, at the same time, to help teachers provide remediation for students who are struggling in the different sub-skills. The sub-skills in DIBELS closely follow the big five areas that the NRP determined to be the most important aspects of learning to read: phonemic awareness, phonics, fluency, vocabulary, and comprehension.

DIBELS is easy to use

With only one-minute samples of a student’s work, teachers are supposed to be able to tell what students need in order to become better readers. The tests are broken down by grade level and sub-skill, so teachers don’t even have to think about when to administer the tests or to whom. The results indicate the sub-skills that need work before children can move on to the next skill.

DIBELS fits nicely into textbook planning

One of the underlying assumptions made by the makers of DIBELS is that the sub-skills are acquired sequentially and linearly as children learn to read. This makes things very convenient for textbook publishers, because they can simply plug in activities and assessments that target the sub-skills needed to do well on the tests. The design order is already decided, based on the tests themselves. Again, no thinking required here. Just plug it in and go. This saves publishers time and money because design time and collaboration time are cut down.

DIBELS is a one-size-fits-all solution

Starting with the assumption that learning to read is a sequential, linear process for all students means that designing tests based on this sequence is a straightforward and unambiguous process. Students acquire one sub-skill at a time, and readiness for each subsequent skill depends on the successful acquisition of the previous one. Since learning to read is a universal process, the tests retain their validity across cultures, so they can be used equally well by native speakers and second language learners.

Part C: A one-sentence response as to which side of the issue you stand on.

Because we begin with the proposition that reading is a holistic process whose goal is meaning-making, the DIBELS arsenal of tests is invalid: it parses the measurement of a student’s reading ability into disconnected sub-skills, such as the speed with which students can read nonsense words, and it is not aligned with the actual outcome we are trying to measure, a student’s ability to make meaning from text.

4 comments:

deepbrook said...

(Note that I work for the publisher of DIBELS; please view my comments accordingly.)

In my opinion, DIBELS and the DIBELS authors are mischaracterized here as supporting the idea that reading acquisition is sequential and linear. I was at the DIBELS Summit in Albuquerque a few weeks ago and heard the authors' firsthand comments to the effect that the five components of reading (plus spelling and writing) develop together, not in isolation or sequence.

DIBELS measures some skills earlier than others (initial sounds, letter naming, phoneme segmentation) because children tend to acquire and master them earlier than they do connected text, but keep in mind that the "main" DIBELS measure, oral reading fluency with connected text (plus its comprehension counterpart), is introduced early, in the middle of first grade.

That timing is certainly not to suggest that some students can't read connected text earlier, but rather reflects the use of DIBELS as a screening and progress monitoring tool to help students who are at risk for reading difficulty.

Suzanne Shaffer said...

Dear deepbrook,
Thanks for your input! We are working on analyzing the pros and cons of DIBELS for our graduate class, so your comments are very informative and interesting. May I share them with our class?
Suzanne

deepbrook said...

Please feel free to share. My contact info is jeffd @ sopriswest.com should you or anyone have any follow-on comments.

And despite being employed by a publisher, I really only have one agenda when it comes to reading: that students are able to extract meaning from text ... and love doing it.

- Jeff

Suzanne Shaffer said...

Hi Jeff,
Thanks a bunch for getting back to me!! I appreciate your sharing your viewpoint - I will share it with the class as we try to wade through the pros/cons of everything!!