We interrupt this blogging hiatus to throw out a question that came up while I was grading today. The item being graded was a homework set in the intro-to-proof course that I teach. One paper brought up two instances of the same issue.

- The student was writing a proof that hinged on arguing that both sin(t) and cos(t) are positive on the interval 0 < t < π/2. The “normal” way to argue this is just to appeal to the unit circle and note that in this interval, you’re remaining in the first quadrant and so both sin(t) and cos(t) are positive. But what the student did was to draw graphs of sin(t) and cos(t) in Maple, using the plot options to restrict the domain; the student then just said something to the effect of “The graph shows that both sin(t) and cos(t) are positive.”
- Another proof was of a proposition claiming that there cannot exist three consecutive natural numbers such that the cube of the largest is equal to the sum of the cubes of the other two. The “normal” way to prove this is by contradiction, assuming that there are three consecutive natural numbers with the stated property. Setting up the equation representing that property leads to a certain third-degree polynomial P(x), and the problem boils down to showing that this polynomial has no roots in the natural numbers. In the contradiction proof, you’d assume P(x) *does* have a natural-number root, and then proceed to plug that root into P(x) and chug until a contradiction is reached. (Often a proof like that proceeds by cases, one case being that the root is even and the other that it is odd.) The student set up the contradiction correctly and made it to the polynomial. But then, rather than proceeding by cases or using some other deductive method, the student just used the solver on a graphing calculator to get only one root for the polynomial, that root being something like 4.7702 (clearly non-integer), and so there was the contradiction.
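For the record, the deductive version of that second argument is short. Here is a sketch in Python, assuming we let n be the smallest of the three numbers (the cubic, and hence the numeric root a solver reports, depends on which of the three you solve for):

```python
# Sketch, assuming n is the smallest of the three consecutive naturals.
# Then (n+2)^3 = n^3 + (n+1)^3 rearranges to P(n) = n^3 - 3n^2 - 9n - 7 = 0.
def P(n):
    return n**3 - 3 * n**2 - 9 * n - 7

# P'(n) = 3(n - 3)(n + 1), so P is increasing for n >= 3. Since P(5) < 0 < P(6),
# the real root lies strictly between 5 and 6 and cannot be a natural number.
assert P(5) < 0 < P(6)
assert all(P(n) != 0 for n in range(6))   # and no root among 0..5 either
print("no natural-number root")
```

The sign-change check at consecutive integers is the whole point: it locates the root between two naturals without ever trusting a black-box solver.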

So what the student did was to replace “normal” methods of proof — meaning methods that proceed by logic alone — with machine calculations. Those calculations are convincing, no errors were made in performing them, and there seemed to be no hidden “gotchas” in what the student did (such as, “That graph *looks* like it’s positive, but how do you *know* it’s positive?”). So I gave full credit, but added a note asking the student not to depend on technology when writing (otherwise exemplary) proofs.

But it raises an important question in today’s tech-saturated mathematics curriculum: Just how much technology is acceptable in a mathematical proof? This question has its apotheosis in the controversy surrounding the machine proof of the Four-Color Theorem, but I’m finding a central use of (a reliance upon?) technology to be more and more common in undergraduate proof-centered classes. What do you think? (This gives me an opportunity to show off WordPress’ nifty new polling feature.)

I voted “no.” Learning to complete proofs without resorting to “it looks like,” or, the problem set classic, “it’s clear that,” is tricky. Even at the graduate level, as a TA in CS I’ve seen students struggle with this. I’m all for a stronger foundation in the logic…


Wouldn’t the work of those students fall into the category of “demonstration” rather than proof?

Good lord! These are clearly illegitimate “proofs”. Proving theorems is not about making a *convincing* argument — that is the business of a presidential speech or an op/ed column in the newspaper. And these proofs both, in philosophical parlance, make strong inductive arguments for what clearly requires a valid deductive one. That is clearly fallacious; in other words, the justification is not legitimate. Both cases rely on an otherwise legitimate argument from authority combined with empirical observation. In the case of a legitimate use of technology to prove a theorem, you simply use technology to carry out an algorithm that you have proven to do whatever it is you need it to do. (And there is nothing wrong with an empirical assurance that the computer is correctly carrying out the algorithm — that doesn’t make your proof empirical any more than citing a previous result by another mathematician would.)

So, for the second case to be like the four-color theorem, the student would have to know just what the calculator does and be able to make the meaningful connection between that algorithm, the result he is getting from the calculator, and the problem at hand. But the student doesn’t have any idea what the calculator does. The calculator probably uses methods much more sophisticated than the Newton-Raphson method of finding zeros of a function, but let us suppose it were that simple. Then the student would have to prove a whole bunch of theorems about this method of approximation — that “it works” — which would require all kinds of epsilon-delta/real analysis. And then he could apply it to the problem at hand, and THAT would be a nice alternative analytic proof of the theorem. After he proves all that stuff about Newton-Raphson, or whatever method the calculator uses, I would have no problem with him then just plugging it into the calculator and observing that, whatever the root is, it cannot possibly be an integer.
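To make the hypothetical concrete, here is a minimal Newton-Raphson sketch in Python, applied to one possible setup of the cubic (taking x to be the smallest of the three consecutive numbers; this is an illustrative assumption, not necessarily what the calculator actually does):

```python
def P(x):
    return x**3 - 3 * x**2 - 9 * x - 7   # one setup of the cubic, x = smallest number

def dP(x):
    return 3 * x**2 - 6 * x - 9          # its derivative

x = 6.0                                  # initial guess to the right of the real root
for _ in range(50):
    x -= P(x) / dP(x)                    # Newton step: x <- x - P(x)/P'(x)

print(x)                                 # settles on a non-integer root near 5.05
```

Of course, *trusting* that this iteration converges to the lone real root is exactly the pile of analysis the comment above says the student would first have to prove.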

But just plug it in and act like that proves it? That is clearly fallacious, and while I probably understand why you went ahead and gave them full credit, such a thing should never happen. You just acquiesced that something that is (at best) strong empirical evidence actually constitutes legitimate mathematical proof. That is science, not mathematics. And we both know this would not fly for one second at any other level. In fact, go forth and publish this! I mean, it is completely different from the standard, right? I guarantee you it has not been published anywhere, and I will gladly stand corrected if you can show me where serious mathematicians produced this as a legitimate proof of these theorems to other serious mathematicians (and not as some sort of pedagogical thing like you are talking about here). So, if this is a legitimate proof, then it ought to be a free paper right there! Right? (Of course, that’s ridiculous — and the only way it can be ridiculous to think that the Annals of Mathematics will put this extremely important, radical, brilliantly elegant alternative proof, with its far-reaching implications for the field, right up front in its very next issue is that this is nothing like a legitimate mathematical proof. It is, in fact, fallacious reasoning to use such empirical methods on an a priori mathematical assertion.)

Adrian: The main point of your comment seems to be that in order to use technology in a legitimate way in a proof, students need to have proven that the algorithm their technology implements actually works, rather than just plugging an equation into (say) Maple, asking it to generate the solutions, and accepting this black-box result as a kind of oracle.

I can accept that, but how is using technology as a black-box oracle any different than using a theorem as a black-box oracle, if the student hasn’t proven that theorem as well? My geometry students are doing an exercise where they’re asked to prove that a parabola can intersect a circle in no more than four points. This is in a section on analytic geometry so they are using coordinates and equations for the conic sections. The problem boils down to a fourth-degree polynomial being set equal to zero, so the students should invoke the Fundamental Theorem of Algebra to say that such an equation has at most four real roots. But only two of them have worked through a proof of the FTA before. Should the others not be allowed to use it? Because if they use the FTA without proving it, they are just “plugging the equation in” to the FTA and having the result they want quasi-magically come out. And it’s no good saying that a proof of the FTA *exists* and therefore we can trust it, because a proof of the algorithm used on my student’s calculator exists too.
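To see the quartic appear in a concrete instance of my own choosing (not the exact exercise): take the parabola y = x^2 and the circle x^2 + (y - 2)^2 = 2. Substituting y = x^2 gives x^4 - 3x^2 + 2 = 0, which factors as (x^2 - 1)(x^2 - 2), so this particular pair actually achieves the four-point maximum:

```python
import math

# Intersections of y = x^2 with x^2 + (y - 2)^2 = 2: substituting y = x^2
# yields the quartic x^4 - 3x^2 + 2 = (x^2 - 1)(x^2 - 2) = 0.
roots = [-math.sqrt(2), -1.0, 1.0, math.sqrt(2)]
for x in roots:
    assert abs(x**4 - 3 * x**2 + 2) < 1e-9       # each solves the quartic
    y = x**2
    assert abs(x**2 + (y - 2)**2 - 2) < 1e-9     # each point lies on the circle
print(len(roots))  # prints 4: the FTA degree bound is attained
```

The FTA is doing the real work here: it guarantees a degree-four equation can never hand you a fifth intersection point, whichever circle and parabola you pick.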

Do we have students prove *everything* they use?

I like your last point there…

I voted “no” to your question. And that vote was because I think, now that I am out of school and a little more mature, I would be too curious to just rely on my calculator, and I think it is important for students to learn how things work, not just that calculators have the answers.

But… You bring up a very good point here. We rely on all kinds of assumptions as we learn new things, so why would it be wrong to keep using those assumptions? I say things all the time that I don’t then follow up with a proof of how I know they’re true. (However, I do find myself double-checking my technology at work quite a bit!) And it would seem logical that as technology gets better and better, we would be able to rely on it more and more.

So now I’m stumped…

Well, for that matter, what did you use to prove the FTA? Don’t you have to prove all that stuff, too? And what did you use to prove the things you used to prove the FTA? And so on. Like Spock seeking pure logic in Star Trek, we might seek pure rigor. In fact, it is impossible to be perfectly rigorous. That isn’t the point, though.

Ultimately, it is the community standards of the field that form the context of the problem you are working on. And that, above all else, is what is being violated here. You cannot possibly say you are preparing these students for more advanced study with that; it is more of a concession you’re making. Because, like I say, that sort of “proof” is unacceptable at higher levels, and it is going in the wrong direction if the goal is to get students doing things the way they would be expected to do them at higher levels. So I would argue that you definitely shouldn’t be *teaching* it: “Oh, here, class, is a clever alternate proof!” (Accepting it on the homework and giving credit for it may be another matter.)

Of course, the response might be: “But that begs the question for me, because as a mathematician I am one of the people who form these standards. Maybe I should accept this sort of thing, and if it caught on, then this would fly even at the professional level.” Now, we know that on a practical level that’s not really true; you can just take yourself out of the picture and see what everyone else is doing. But, more relevantly, there is this philosophical issue of knowing when a proof proves something or not. It is a philosophical issue because of an inherent philosophical vagueness to the matter (or, at least, to the general case of it). But it happens to be one that mathematicians are specifically trained to address, so that it is in some sense THE issue of the field of mathematics rather than just some related issue that philosophers talk about. It isn’t located in any particular thing mathematicians do; it is more an emergent aspect of everything they do.

At any rate, there is an answer in any particular case as to what it takes to prove something (or, at least, in the vast supermajority of cases). That answer depends on the context the problem comes up in. The black-box nature, in and of itself, of the pieces used to build the proof is not the problem when the proof is insufficient. The problem is that, in some sense, “the issue” that needed to be addressed in the proof was hidden in one of those black boxes, and so the argument begs the question. Again, it is philosophically vague just when this sort of thing happens or not; this has been proven to be so mathematically. So no one can unilaterally say what it takes for all cases. But that doesn’t mean it cannot be determined for particular cases, or possibly even (in principle) for all cases on an individual, case-by-case basis.

And in these cases, I think it is clear that what should be asked for in the problem set is not contained in what the students have submitted. It might be contained in some book discussing the Newton-Raphson method, or somewhere else. I guess for known problems you could always just refer to someone else’s book or to the internet; then “proving theorems” is just about looking things up. But clearly the hope is that the student does something more particular and elaborate than that. What these students have done isn’t it.

Adrian: I am definitely not teaching the students that using calculators as oracles is a method of problem solving. This is not even a widespread issue among the work being done by students in the class, although perhaps it should be brought up more carefully in class.

Also, keep in mind the context here: These are sophomores who have had a year of calculus and have minimal, if any, exposure to proof. Everything they can do regarding proofs, they are learning in this class. So if a student turns in work that consists of sound proofs with the two exceptions I’ve noted here, I think they are making pretty good progress. What’s happened here is a philosophical issue that is deep relative to what they understand about proof at this point. Rather than deducting points the first time a student does this sort of thing — thereby sublimating my indignation over an empirical approach to a proof — it makes more pedagogical sense to make a note to the student and use it as a “teachable moment” for discussion in class, then decide what our “community standards” as a class will be and put them in the context of professional standards.

In other words, I think my students are doing a damned good job learning to prove theorems, and I’m reluctant to hit them upside the head with the full force of mathematical purity before they even have a sense of what that is.

To me, Adrian’s line, “At any rate, there is an answer in any particular case of what it takes to prove something (or, at least, the vast super majority of cases). That answer depends on the context it comes up in.” sums it up perfectly, although I think I reach a different conclusion from that.

When I teach Calculus, we’ll “prove” that the derivative of sin(x) is cos(x) from the definition of derivatives. In the course of that proof, we have to calculate the limit (as h->0) of sin(h)/h and also of (cos(h)-1)/h. There are geometric proofs that the first limit is 1 and the second limit is 0, but I’ll typically resort to a graphing calculator and use a table or graph to show what those limits are.
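That calculator table is easy to reproduce; here is a Python sketch of the same numeric evidence (and it is evidence, not proof):

```python
import math

# Numeric evidence for the two limits behind d/dx sin(x) = cos(x):
# sin(h)/h -> 1 and (cos(h) - 1)/h -> 0 as h -> 0.
for h in (0.1, 0.01, 0.001, 0.0001):
    print(f"h={h:<8} sin(h)/h={math.sin(h)/h:.8f} (cos(h)-1)/h={(math.cos(h)-1)/h:.8f}")
```

The table strongly *suggests* the limits are 1 and 0, which is exactly the gap between a table of values and the geometric squeeze-theorem proof.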

Have I failed to prove that the derivative of sine is cosine? In some sense, yes, and I admit to my students that I’m being handwavy and this isn’t truly a proof, just evidence. On the other hand, I feel that spending the time to do the full-on proof would be a distraction at that particular point, when the MAIN thing I’m trying to do is justify a derivative rule and show where these derivative rules come from in the first place.

In the context you wrote about, I might well respond the same way, pointing out which part is true but not really justified, without taking off many points if the gap is minor. For upper-level students I might insist on an actual proof, and certainly I hope our students learn the difference between evidence and proof.

I do see a distinction between technology and a Theorem that someone, but not you, has proven. The technology isn’t really claiming that, say, there is only one root. It’s simply claiming that, using whatever algorithm it does, it can only FIND one root. In particular, graphing calculators plot only a finite number of points and draw conclusions from those — depending on the size of the window, they might miss changes that happen “between” those discrete points. With that in mind, I do think it’s important for students to see examples where technology fails [a favorite simple example: in Excel, type -1 in a cell, and then fill each cell down with “(the previous cell) + 0.1”. The cell that is supposed to contain 0 will actually contain a small positive number.] A Theorem, on the other hand, IS claiming that something is absolutely true, not just that it looks plausible.
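That spreadsheet failure is easy to reproduce anywhere IEEE double-precision arithmetic is used, since 0.1 has no exact binary representation; a quick Python sketch:

```python
# Start at -1 and add 0.1 ten times; the result that "should" be exactly 0
# is a tiny nonzero number, because each addition of the inexact binary 0.1
# accumulates rounding error.
x = -1.0
for _ in range(10):
    x += 0.1
print(x)           # a tiny number on the order of 1e-16, not 0.0
print(x == 0.0)    # False
```

Which is the commenter's point in miniature: the machine reports what its arithmetic can represent, not what is true.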