This post at ProfHacker reminded me to write about something I’m trying this semester in my calculus classes (the only freshman-level class I have right now). I’m giving not one but four course evaluations during the semester. I’ve given midterm evaluations on occasion in the past, but it seemed to me that even twice a semester isn’t really enough. So, I’m giving evaluations at the end of the third, sixth, ninth, and twelfth weeks of the semester.
The first three of these are informal and very loosely structured. They each have three basic questions:
- What do you LOVE about this course?
- What do you HATE about this course?
- If you could change ONE THING about this course, what would it be?
The 6- and 9-week evaluations have two additional questions: What’s changed for the BETTER since the last evaluation? and What’s changed for the WORSE since the last evaluation? In week 12, students will do the official college evaluation for my course, which has all the usual questions that probably appear on evaluations everywhere.
To give credit where it’s due, I stole this idea from Eric Mazur’s book Peer Instruction: A User’s Manual; Mazur in turn borrowed it from another professor at UC Berkeley. I modified the idea and the form so it works for a series of evaluations rather than a one-time evaluation. I really like the questions, even though they’re loaded and ambiguous, because they elicit only the one or two most strongly held opinions students have about the course, rather than the laundry list of minutiae or slogans that often shows up on written evaluations. When pressed to the point, the things students complain about most are often not things they feel strongly enough about to call “hate”. The same goes for “love”: it’s easy to write about nice little things that happen in a class, but students aren’t often asked to reflect on what they “love” about one.
We’re about to start the eighth week of the semester, so I’ve already given the first two of these evaluations. I posted them as a questionnaire on the course Moodle site and let students fill in their responses over a Friday-Monday period. They were not required to do so, but I ended up with nearly 100% participation. What’s been great about the results is the change in the responses from week 3 to week 6.
In week 3, the responses were all over the place. The students — mostly freshmen, some of them still unpacking from moving-in day — loved a lot and hated a lot. About half the class loved the technology focus of my class. The other half hated it and wanted it all gone. Many students hated that I wasn’t collecting homework from the book and grading it, hated that the class didn’t consist entirely of examples being worked on the board for them and that the tests weren’t exactly like the homework, and so on — in other words, they felt strongly about how the class was going, but it sounded a lot more like the ongoing struggle to cope with college intellectual culture than it did a serious beef with me or my teaching.
I took the results from the week 3 evaluation and discussed them with our associate dean, who works with faculty on teaching issues, to get his perspective on things. After we met, I blocked off 25 minutes — half a class — before the week 6 evaluations to debrief the students on their responses. The responses on Moodle were anonymous, so I just put them all up on the screen for students to see. This allowed students to see two complementary things: that some of the things they thought everybody hated were really just issues that they alone had, and that some of the things they thought they were alone in loving were actually shared by others. The anti-tech faction saw that there was a significant pro-tech faction, and vice versa, and so the notion of abolishing all technology and using only pencil and paper (one of the actual “one thing you’d change” responses) suddenly became complicated. During this debriefing session, I showed them the changes I had already made in response to the suggestions that made sense, and I made my case for leaving alone the things that didn’t.
After the week 6 evaluation, it was clear I hadn’t made everybody happy, but the “love” section of the evaluation almost doubled in size, and the “hate” section was about half its previous size, consisting in large part of the response “I don’t really hate anything about this course”. Given that it’s freshman calculus and I have a well-deserved reputation for being a hard professor, that’s kind of shocking. The substantive statements in the “hate” and “change” sections took on a different tone: instead of “We should stop using WeBWorK!” it became “I wish WeBWorK weren’t so hard to use.” Through this process of reflection and evaluation, and my responses and the ongoing conversation around it, students are refining their ideas about what they like and dislike about a course.
These shifts are crucial for getting students to think clearly on the official evaluation of the course, which is coming up in week 12. I’m not guaranteed to get all-positive evaluations, but I think that after three practice rounds of informal evaluations, after each of which I demonstrate my seriousness in listening to their concerns and doing things about the stuff that can or should be changed, students should be able to write on the official evaluations in a serious, mature, and meaningful way — rather than latching onto one little thing in the course that bugs them and turning it into a wholesale rant, or letting feelings get the better of their judgment.
This entire process is just an example of using both formative and summative evaluations in a class, which is a mirror of the kinds of assessments we should be giving students. The formative part — my informal evaluations — let the students act as “spotters” for the course while it’s developing and running its course. The summative part — the official evaluation — is for students to look back over the entire course and evaluate it. I think that without at least one, preferably two or three, formative evaluations, it’s hard for novice learners like (most) college freshmen to know how to write a good summative evaluation.
I’m tenured, so I’m only doing this to make my students’ learning experience better. If you’re not tenured, this kind of formative-summative evaluation scheme is even more important. Having been on my college’s Promotion and Tenure Committee for five years now, I can definitely say that evaluations of a single course don’t usually provide much meaningful data. It’s the changes from one course to the subsequent ones that matter. Every instructor is going to have a course every now and then that just doesn’t work out, and the evaluations are miserable. The question the P&T committee has is: What did that faculty member do about it? Did the same complaints crop up over and over again? Or was there some effort expended to address the issue (if the issue is worth addressing)? By giving multiple evaluations per course, faculty can scale that process down to fit within a single semester rather than across several.
Do you have a similar experience doing something like this? How did it go?
2 responses to “Course evaluations: The more, the merrier”
Not knowing exactly what WeBWorK is, I found the link and one of the hyperlinks it contains worth checking. Someone struggling to earn even a C in a mathematics class could be expected to perform just as badly, or worse, in the WeBWorK system. Many lower- and upper-division college mathematics courses are tough enough without adding that extra technological obstacle. That system could turn a C grade into a D.
(I did not really wish to distract from the main topic, “Course Evaluations”)
Great idea. I have been doing both midterm and final evals but may try this idea in the spring. Like you, I am tenured but do think this would be useful for non-tenured faculty.