The Chronicle has an article out today, “Can the Student Course Evaluation Be Redeemed?”, that rightly points out how student course evaluations are often counter-productive to improving teaching and learning. The article refers to a Stanford professor’s call for an instructor-completed “inventory of the research-based teaching practices they use”, but most of the article centers on a revised course evaluation tool from a Kansas State University spin-off (the IDEA Center). One of the key problems described is that “administrators often take their results as numerical gospel” and that faculty misapply the results.
However they’re used, a lot of course evaluations simply aren’t very good, [IDEA president] Mr. Ryalls says.
But as flawed as they are, faculty members still turn to them as some gauge of effectiveness in the classroom. About three-quarters of instructors use formal evaluations and informal feedback “quite a bit” or “very much” when altering their courses, according to the Faculty Survey of Student Engagement.
One limitation of many tools is that they ask students things they don’t really know. A frequent example: Was your instructor knowledgeable about course content?
There is one additional problem with most student course evaluations that is not explicitly covered in the Chronicle article – students newly involved in active learning approaches often rate the course and instructor poorly even if they end up learning more effectively. We saw this in our e-Literate TV case study at UC Davis. In a previous post we highlighted how the routine hard work required of students in active learning courses can lead to poor evaluations, but later in the interview, student course evaluations came up as a major barrier to improving teaching practices.
Phil Hill: Catherine, especially with even more of a firsthand view, what do you see as the biggest barrier?
Catherine Uvarov: Well, in a way, I was fortunate because I was more a newbie instructor, so I didn’t have like 20 years of experience where I had done it this other way. Just coming in and telling instructors, “Hey, that thing that you’ve been doing for 20 years. You could be doing it better.” They don’t want to hear that. They have worked very hard over the past 15-, 20-plus years to optimize their instructional methods to the best of their ability within their set of norm practices.
Chris Pagliarulo: And the feedback that they were getting.
Catherine Uvarov: And the feedback, so there is a huge emphasis on student evaluations and how much students like you, which is not really correlated at all with how much they’re actually learning. So, if the only measure of student learning, or of anything else in the class, is student evaluations, then that’s what the instructor is tuning for.
They’re not really figuring out if their students are learning or turning the mirror on themselves and saying, “What can I do to improve my students’ learning?” They’re just saying, “What can I do to make my students like me better?”
Phil Hill: Actually, I’d like you to go into a little more detail on course evaluations as they’re currently used. I think I heard you say those are more based on, “Do students like me?” So, what do the current course evaluations really measure? What direction do they push faculty?
Catherine Uvarov: In my opinion, the student evaluations are pretty much worthless because the questions that they ask are very generic. It’s like, “Does the person speak loudly? Are their visual aids clear?” It’s very generic and bland, and then it gets down to the only question that they really care about—rate the overall performance of this instructor.
What we have found in my flipped class, and in any of these classes where the lecturer is changing their style and putting the emphasis more on the students, is that the students are thinking, “Well, I learned all of the material on my own, so the instructor didn’t teach me that material. I’m going to rate the instructor lower because they were not as valuable to me.”
Erin Becker: When you make the students do more work, they don’t like you as much, and that hurts your course evaluations, which in turn feeds back into the incentivization issue.
Marc Faciotti: It’s a challenge. If you’re not thinking about education all day—and most of us have research labs that occupy a lot of time as well (administrative duties and all that type of thing)—if you don’t have training there, there’s a lot of catching up to do. Most institutions have great resources on campus. There are people here at iAMSTEM dying to help and to catalyze some of these things. So, seek help, be realistic about how much you’re going to change the first time around, and have kind of a long-term plan for what you’d like to achieve.
Marco Molinaro: I think the biggest barrier we have right now is that the faculty rewards system doesn’t yet take into account this type of experimentation and doesn’t really promote a faculty member based on the quality of their instruction and the effects that they’ve had on student learning.
Later in the Chronicle article there is a discussion about whether to scuttle student evaluations altogether. I strongly agree with this conclusion:
For Mr. Ryalls, of IDEA, the problems with students’ evaluations shouldn’t scuttle their use altogether. “What drives me crazy,” he says, “is this notion that students don’t know what the hell they’re talking about.” They spend more time than anyone else watching faculty members teach, he says. “Student voice matters.”