Category Archives: Grading

Do you re-test?

[Image: Students sitting a Mathematics C exam (via Wikipedia)]

If you give a major, timed assessment (test, exam, etc.) and nearly all of your students do poorly on it — as in, really poorly, 3/4-of-the-class-failed-it poorly — do you give a re-test and let them try it again? Or do you stick with the grades they got the first time? Do you invoke some kind of wigged-out grade curving scheme (no offense, Dave)? Or what?

Fortunately this hasn’t happened to me this semester, but it has happened to at least one of my colleagues, and we have an email discussion going on right now about what to do about it. Here are my thoughts on this. (Most of this post is verbatim from my contribution to the email discussion.)

For simplicity, I'm leaving the question of curving the grades out of this for now and focusing on whether you simply have a do-over for the exam or not. With that as the only choice in play, I have been known on very rare occasions to give retests. I keep a couple of basic criteria in mind whenever the idea of a re-test comes up:

(1) There has to be widespread failure of student learning in the class as demonstrated on the assessment. That is, I do not give retests to individuals or small groups, for fairness reasons, unless there is some incontrovertible reason for it.

And more importantly:

(2) There has to be evidence, the preponderance of which points to me as the main source of failure. It’s not a good idea, in other words, just to give a retest because the grades were bad. Without knowing WHY, exactly, the grades were so bad, retests can actually do far more harm than good, reinforcing among students the notion that if everybody blows it on an exam, it’s OK because the prof will just give a retest. (He can’t fail EVERYBODY, can he?)

If it’s practical, I’d have a colleague look at the exam for a second opinion on its design; talk with my students or give anonymous surveys about their preparation strategies; look honestly at my teaching and office hours work prior to the exam; etc. If the evidence points back to me or the exam, then I’d be the first person on board for a re-test. But if the evidence points to students — the exam and my instruction were reasonable, but they didn’t ask questions, come to office hours, do the reading, study appropriately, participate in class, or some linear combination — then students must bear the responsibility for their actions and inactions, and they must live with and learn from the grades they earned the first time.

It is not automatic that a class-wide failure on an exam means that I failed as a teacher and must therefore somehow make it right. Students (anybody, really) can just sometimes be irresponsible in large groups, and it takes a large-scale wakeup call to get them on track. The responsibility for student learning is shared, but just as students can learn with a bad professor, sometimes large groups of students can fail despite the best efforts of a great professor. Therefore I have to know WHY the grades were the way they were before I can make an informed decision. Just giving a retest without knowing the “why” will make grades go up and students happier, but it doesn’t really solve the root problem or prepare students for the next exam — and it doesn’t do much service to my college’s stated commitment to Responsibility either.

Like I said, I think I’ve re-done exams five times at the most in the last ten years. Once it was because there was at least one horrible typo on a problem, uncaught until grading time, that made the problem ten times harder/longer than it was supposed to have been — my fault, and I had no qualms giving a retest. Other times, I forget the details. (I’m old, you know.)

When I do this, I give the retest as a “pop” retest — students are not warned that I am doing this — using the exact same exam as the first time through. That way, students who really studied and prepared for the exam — and who therefore still retain the knowledge they had the first time they took it — benefit the most, and those who didn’t prepare as well — and who flush knowledge out of their brains immediately following an exam — don’t benefit as much. Since it’s the same exam, I will grade using the same rubric and then refund half the difference between the first and second takes. So a person who made a 30 the first time and a 100 the second time would end up with a 65. (Assuming that their failure was not some unambiguous, abject failure of teaching on my part; if it is, then they are entitled to a full refund of credit if they can repeat the task, such as was the case with the horrible typo I mentioned.)
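The half-refund rule works out as a one-line calculation. Here is a minimal sketch of it; the function name, the full-refund flag, and the treatment of a retake score lower than the original are my own assumptions, not stated policy from the post:

```python
def retest_grade(first, second, full_refund=False):
    """Combine two takes of the same exam under the half-refund rule.

    The student keeps the first score plus half of any improvement on
    the retake. Treating a lower retake as no change is an assumption.
    When the failure was unambiguously the instructor's fault
    (full_refund=True), the better score simply stands.
    """
    if full_refund:
        return max(first, second)
    return first + max(second - first, 0) / 2

# The example from the post: 30 on the first take, 100 on the retake.
print(retest_grade(30, 100))  # 65.0
```

So a 30 followed by a 100 lands at 65, matching the worked example above, while under the full-refund case the 100 would stand.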

That’s my take. What’s yours?



Filed under Education, Grading, Higher ed, Life in academia, Teaching

“This is a science course. Lasers are not voodoo.”

The teacher who graded this dismal paper from a physics class either is a lot braver than I am or cares a lot less about his/her relationships with students; and s/he certainly has better artistic skills and a lot more time on his/her hands than I do:

Read the whole essay and especially the teacher’s marginalia. I think it captures every teacher’s temptation to grade papers by unloading our own cleverness onto hapless, writing-impaired students.

But the article raises a fair question: how does something this bad get a 3/3 grade?



Filed under Grading, Humor, Teaching

Technology in proofs?

We interrupt this blogging hiatus to throw out a question that came up while I was grading today. The item being graded was a homework set in the intro-to-proof course that I teach. One paper brought up two instances of the same issue.

  • The student was writing a proof that hinged on arguing that both sin(t) and cos(t) are positive on the interval 0 < t < π/2. The “normal” way to argue this is just to appeal to the unit circle and note that in this interval, you’re remaining in the first quadrant and so both sin(t) and cos(t) are positive. But what the student did was to draw graphs of sin(t) and cos(t) in Maple, using the plot options to restrict the domain; the student then just said something to the effect of “The graph shows that both sin(t) and cos(t) are positive.”
  • Another proof was of a proposition claiming that there cannot exist three consecutive natural numbers such that the cube of the largest is equal to the sum of the cubes of the other two. The “normal” way to prove this is by contradiction, assuming that there are three consecutive natural numbers with the stated property. Setting up the equation representing that property leads to a certain third-degree polynomial P(x), and the problem boils down to showing that this polynomial has no roots in the natural numbers. In the contradiction proof, you’d assume P(x) does have a natural number root, and then proceed to plug that root into P(x) and chug until a contradiction is reached. (Often a proof like that would proceed by cases, one case being that the root is even and the other that the root is odd.) The student set up the contradiction correctly and made it to the polynomial. But then, rather than proceeding in cases or making use of some other logical deduction method, the student just used the solver on a graphing calculator to get only one root for the polynomial, that root being something like 4.7702 (clearly non-integer) and so there was the contradiction.
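The second bullet's proposition is easy to sanity-check by machine, which is exactly the reliance-on-computation issue at stake here: a brute-force search is convincing but is not the contradiction proof. A minimal sketch, where the parametrization n, n+1, n+2, the polynomial that falls out of it, and the search bound are my own choices (the student's quoted root of about 4.7702 presumably came from a different setup):

```python
# Search for three consecutive natural numbers whose largest cube equals
# the sum of the other two cubes: (n + 2)**3 == n**3 + (n + 1)**3.
# Expanding both sides, n would have to be a natural-number root of the
# cubic P(n) = n**3 - 3*n**2 - 9*n - 7.

def counterexamples(limit):
    """Return all n < limit satisfying the consecutive-cubes property."""
    return [n for n in range(1, limit)
            if (n + 2) ** 3 == n ** 3 + (n + 1) ** 3]

# P has a single real root (between 5 and 6 in this parametrization) and
# is strictly positive past it, so a modest bound settles every case.
print(counterexamples(100))  # []
```

The empty result confirms the claim for small n, but turning the "strictly positive past the root" remark into an actual argument is precisely the logical work the graphing-calculator shortcut skips.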

So what the student did was to substitute “normal” methods of proof — meaning, methods of proof that go straight from logic — with machine calculations. Those calculations are convincing and there were no errors made in performing them, and there seemed to be no hidden “gotchas” in what the student did (such as, “That graph looks like it’s positive, but how do you know it’s positive?”). So I gave full credit, but put a note asking the student not to depend on technology when writing (otherwise exemplary) proofs.

But it raises an important question in today’s tech-saturated mathematics curriculum: Just how much technology is acceptable in a mathematical proof? This question has its apotheosis in the controversy surrounding the machine proof of the Four-Color Theorem but I’m finding a central use of (a reliance upon?) technology to be more and more common in undergraduate proof-centered classes. What do you think? (This gives me an opportunity to show off WordPress’ nifty new polling feature.)


Filed under Computer algebra systems, Education, Grading, Math, Problem Solving, Teaching

What are some fatal errors in proofs?

The video post from the other day about handling ungraded homework assignments went so well that I thought I’d let you all have another crack at designing my courses for me! This time, I have a question about really bad mistakes that can be made in a proof.

One correction to the video — the rubric I am developing for proof grading gives scores of 0, 2, 4, 6, 8, or 10. A “0” is a proof that simply isn’t handed in at all. And any proof that shows serious effort and a modicum of correctness will get at least a 4. I am reserving the grade of “2” for proofs that commit any of the “fatal errors” I describe (and solicit) in the video.


Filed under Education, Geometry, Grading, Math, Problem Solving, Teaching