I sometimes get frustrated with the strong allergic reaction that even many good educators have to the request for hard data as part of an educational decision-making process. This allergy shows up in all different ways in all different places. In elementary school, for many years (and this may still be true today, for all I know), you could predict a person’s position on reading education based on their political disposition. Liberals liked whole language and conservatives liked phonics. James Moffett wrote a riveting if ultimately one-sided account of the conservative attachment to phonics (among many other things), with the more extreme proponents making claims like, “Phonics cured my daughter’s asthma!” (No, I’m not joking.) I have seen no corresponding documentation of the liberal version of this bias, but I have known many teachers, invariably liberals, who have vehemently asserted the value of whole language, where the only evidence they had was that most of the (middle-class) kids (from educated families) in their classrooms learn to read just fine. Nowhere in these debates was a discussion about whether it might be a good idea to construct some empirically rigorous tests to indicate which approach might provide what kinds of benefits under which circumstances. There are some things in education that do not lend themselves to rigorous empirical study, but this is not one of them—especially these days with the improvements we have in brain imaging equipment.
Another example: When I was a PhD student in English, I called a meeting of my fellow graduate students to discuss what we could do to improve the quality of our pedagogy. Mind you, this was in a department with a strong composition program that prided itself on its commitment to teaching. One suggestion I raised was that we could run a norming session for the English 101 final essays so that we could share best practices and ensure we were grading consistently across the program. This wasn’t a crazy suggestion out of the blue; it was a successful practice already being employed at other universities. Yet you would have thought that I was asking everyone to get bar code tattoos on their foreheads. Somehow, taking such a step would violate their fundamental rights to individuality as professors, and besides, you can’t “norm” a thing like that. Teaching is an art, I was told.
The same sort of tension frequently creeps into educational technology conversations when it comes to anything remotely smacking of assessment, grading, or analytics. Test engines and grade books are derided as mere “management” tools, while retention early warning systems are apparently the first wave in the Rise of the Machines. As with the cases of phonics and final exam norming, these conversations are immensely frustrating to me in large part because I often find myself arguing with some of the talented, creative teachers whom I respect the most. And I don’t always handle these situations well. (See, for example, my somewhat snippy response to my friend Joe Ugoretz regarding the value of grade books and test engines, or my regrettably snarky swipe at Jim Groom regarding the death of the LMS.) I think it’s mostly because I’m trying to convince myself that I’m not crazy, that I’m not some kind of Cylon or Terminator sleeper bot. Why am I the only teacher who sees it this way? What’s wrong with me?
It is therefore with great relief that I read Education Secretary Arne Duncan’s comment on the value of analytics:
Using technology to improve student achievement makes teachers feel almost as if “they’re cracking a code,” he explained. With adequate student data, teachers come to realize that effective instruction is not based on “just a guess or an assumption or a hunch, and all that is being driven by technology.”
Yes. That is how I feel.
I think there is a fear—a legitimate fear, borne out by history and experience—that the bureaucracy will take the raw data as a substitute for the judgment of teachers and students. I get that. It’s something to be fought vehemently. And maybe the tools we have today have more bureaucratic influence on their design than they should. But this is no reason to reject the value of data, or the scrutiny of peer review, or the use of tools that provide visible and measurable data regarding student activities. Like doctors and engineers, teachers are professionals. Nobody seriously imagines that the existence of an fMRI machine makes a doctor’s judgment less important. To the contrary, the more data we have, the more we benefit from the judgment of a trained and experienced expert.