Action Research and the Art of Knowing Our Students #NCTE15

What happens when student data doesn’t agree with what you think you know, especially about a student’s reading skills and dispositions?

It’s a situation that happens often in schools. We get quantitative results back from a reading screener that don’t seem to jibe with what we see every day in classrooms. For example, a student shows high ability in reading, yet continues to stick with those easy readers and resists challenging himself or herself with more complex literature. Or the flip: A student has trouble passing that next benchmark, but is able to comprehend a book above his or her reading level range.

Here’s the thing: The test tests what it tests. The assessment is not to blame. In fact, blame should be out of the equation when we have professional conversations about how best to respond to students who are not experiencing the success we expect. The solution is not in the assessment itself, but in differentiating the types of assessments we use, questioning the types of data we collect, and organizing and analyzing the various data points to make sense of what’s actually happening in our students’ learning lives.

Differentiating the Assessments

It’s interesting how reading, a discipline far removed from the world of mathematics, is constantly quantified when we attempt to assess readers’ abilities. Words correct per minute, number of comprehension questions answered correctly, and number of pages read are the measures most often referenced when analyzing and discussing student progress. This data is not bad to have, but if it is all we have, we paint an incomplete picture of our students as readers.

Think about yourself as a reader. What motivates you to read? I doubt you give yourself a quiz or count the number of words you read correctly on a page after completing a book. Lifelong readers are active assessors of their own reading. They use data, but not the type of data we normally associate with the term. For example, readers will often rate books on Amazon and Goodreads once they have finished them, and add short reviews on these online forums. The audience that technology provides for readers’ responses is a strong motivator. No one requires these independent readers to rate and review books, but they do it anyway.

There is little reason why these authentic assessments cannot occur in today’s classrooms. One tool for students to rate and review books is Biblionasium (www.biblionasium.com). It’s like Goodreads for kids. Students can keep track of what they’ve read, what they want to read, and find books recommended by other young readers. It’s a safe and fun reading community for kids.

Yes, this is data. The fact that data isn’t always a number still seems to come as a shock to too many educators. To help, teacher practitioners should ask smart questions about the information coming at them to make better sense of where their students are in their learning journeys.

Questioning the Data

Data such as reading lists and reading community interactions can be very informative, so long as we are reading the information in the right way.

Asking questions related to our practice can help guide our inquiries. For example: Are students self-selecting books more readily over time? Are they relying more on peers and less on the teacher for book recommendations? Are the books they choose increasing in complexity throughout the year? All of these qualitative measures of reading disposition can be set directly alongside quantitative reading achievement scores, giving the teacher a more comprehensive look at students’ literacy lives.
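For teachers comfortable pulling a reading log out of a spreadsheet, even a short script can surface these trends. Below is a minimal sketch in Python, assuming a hypothetical log with a finish date and a numeric text-complexity level for each book; the field names and level scale are illustrative, not tied to any particular screener or leveling system.

```python
from datetime import date

# Hypothetical reading log: one entry per finished book. "level" is an
# illustrative text-complexity number (e.g., a guided reading level
# mapped to an integer); the scale itself is an assumption.
reading_log = [
    {"finished": date(2015, 9, 10), "title": "Book A", "level": 18},
    {"finished": date(2015, 10, 2), "title": "Book B", "level": 20},
    {"finished": date(2015, 11, 5), "title": "Book C", "level": 24},
]

# Order the books by finish date, then fit a simple least-squares slope:
# a positive slope suggests the student is choosing more complex texts
# as the year goes on. (Assumes at least two books finished on
# different dates.)
log = sorted(reading_log, key=lambda entry: entry["finished"])
xs = [(entry["finished"] - log[0]["finished"]).days for entry in log]
ys = [entry["level"] for entry in log]
n = len(log)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)

print(f"Complexity trend: {slope:+.3f} levels per day")
```

A slope near zero, or a log full of books at the same level, is exactly the kind of qualitative signal worth holding up against a strong screener score.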

Organizing and Analyzing the Data

[Photo: Students filling out reading motivation surveys via Google Forms and Chromebooks]

I recently had our K-5 teachers administer reading motivation surveys to all of our students. The results have been illuminating as I have entered them into spreadsheets.

Our plan is to position this qualitative data side by side with our fall screener data. The goal is to find patterns and trends as we compare and contrast these different data points, a process often called “triangulation” (Landrigan and Mulligan, 2013). Triangulation itself is not the end goal, though; responding to the data and making instructional adjustments during the school year is. That is what makes these assessments truly formative and for learning.
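As a concrete picture of that side-by-side comparison, here is a minimal sketch in Python, assuming hypothetical CSV exports from the motivation survey and the fall screener. The file names, column headers, and flagging thresholds are all placeholders; the point is simply pairing the two data points per student and flagging the mismatches worth a closer look.

```python
import csv

# Hypothetical exports -- file names and columns are placeholders.
# survey.csv:   student_id, motivation_score (e.g., a 1-5 rating)
# screener.csv: student_id, screener_score (fall benchmark percentile)

def load_scores(path, score_field):
    with open(path, newline="") as f:
        return {row["student_id"]: float(row[score_field])
                for row in csv.DictReader(f)}

motivation = load_scores("survey.csv", "motivation_score")
screener = load_scores("screener.csv", "screener_score")

# Pair the two data points for every student who has both, flagging
# the mismatches that deserve a professional conversation.
for sid in sorted(motivation.keys() & screener.keys()):
    m, s = motivation[sid], screener[sid]
    flag = ""
    if s >= 80 and m <= 2:    # strong achievement, low motivation
        flag = "  <- able reader who resists challenge?"
    elif s < 50 and m >= 4:   # weak achievement, high motivation
        flag = "  <- engaged reader the benchmark misses?"
    print(f"{sid}: screener={s:.0f}, motivation={m:.0f}{flag}")
```

The triangulation happens in the reading of that output: the flagged rows are the students where the numbers and the dispositions tell different stories, and where an instructional adjustment is most likely to matter.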

Is the time and energy worth it?

I hope so – I spent the better part of an afternoon at school today entering students’ responses to questions such as “What kind of reader are you?”, “How do you feel about reading with others?”, and “Do you like to read when you have free time?” (Marinak et al., 2015). Collecting and organizing the information has been informative in itself. Transcribing students’ responses takes time, but I am learning so much about their reading lives. I hope that through this process of differentiating, questioning, and organizing and analyzing student reading data, both quantitative and qualitative, we will know our students better and become better teachers for our efforts.

References

Landrigan, C., & Mulligan, T. (2013). Assessment in Perspective: Focusing on the Reader Behind the Numbers. Portsmouth, NH: Stenhouse.

Marinak, B. A., Malloy, J. B., Gambrell, L. B., & Mazzoni, S. A. (2015). Me and My Reading Profile: A Tool for Assessing Early Reading Motivation. The Reading Teacher, 69(1), 51-62.


Attending the NCTE Annual Convention in Minneapolis this year? Join Karen Terlecky, Clare Landrigan, Tammy Mulligan, and me as we share our experiences and specific strategies for conducting action research in today’s classrooms. See the flyer below for more information.

[Flyer: NCTE15 action research session]