Tag Archives: Problem Solving

What correlates with problem solving skill?

About a year ago, I started partitioning my Calculus tests into three sections: Concepts, Mechanics, and Problem Solving, with point values of 25, 25, and 50 respectively. The Concepts items involve no calculation at all; instead students answer questions, interpret the meanings of results, and draw conclusions based only on graphs, tables, or verbal descriptions. The Mechanics items are straight-up calculations with no context, like “take the derivative of y = \sqrt{x^2 + 1}”. The Problem Solving items are a mix of conceptual and mechanical tasks and can be either instances of things the students have seen before (e.g. optimization or related rates problems) or novel situations that are related to, but not identical to, the things they’ve done on homework and so on.

I did this to stress to students that the main goal of taking a calculus class is to learn how to solve problems effectively, and that conceptual mastery and mechanical mastery, while different from and to some extent independent of each other, both flow into mastery of problem solving like tributaries into a river. It also helps me identify specific areas for improvement: if the class’s Mechanics average is high but the Concepts average is low, it tells me we need to work more on Concepts.

I just gave my third (of four) tests to my two sections of Calculus, and for the first time I paid close attention to the relationships between the scores on the different sections; it seemed like something interesting was going on there. So I decided not only to do my usual boxplot analysis of the individual parts but also to make three scatter plots, pairing off Mechanics vs. Concepts, Problem Solving vs. Concepts, and Problem Solving vs. Mechanics, and look for trends.

Here’s the plot for Mechanics vs. Concepts:

That r-value of 0.6155 is statistically significant at the 0.01 level. Likewise, here’s Problem Solving vs. Concepts:

The r-value here of 0.5570 is obviously less than the first one, but it’s still statistically significant at the 0.01 level.

But check out the Problem Solving vs. Mechanics plot:

There’s a slight upward trend, but the points are much more scattered; in fact, the r = 0.3911 is significant only at the 0.05 level.
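For the record, these correlations and their significance levels are easy to reproduce. Here’s a minimal sketch using SciPy, with hypothetical score lists standing in for my students’ actual data:

```python
import numpy as np
from scipy import stats

# Hypothetical section scores (percent), stand-ins for the real class data
concepts        = np.array([72, 85, 60, 90, 55, 78, 66, 81, 70, 95, 48, 88])
mechanics       = np.array([68, 80, 58, 92, 60, 75, 70, 77, 65, 90, 52, 85])
problem_solving = np.array([55, 70, 40, 85, 40, 65, 60, 72, 58, 88, 35, 80])

pairs = [("Mechanics vs. Concepts",        concepts,  mechanics),
         ("Problem Solving vs. Concepts",  concepts,  problem_solving),
         ("Problem Solving vs. Mechanics", mechanics, problem_solving)]

for label, x, y in pairs:
    r, p = stats.pearsonr(x, y)   # Pearson r and its two-sided p-value
    print(f"{label}: r = {r:.4f}, p = {p:.4f}")
```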

What all this suggests is that there is a stronger relationship between conceptual knowledge and mechanics, and between conceptual knowledge and problem solving skill, than there is between mechanical mastery and problem solving skill. In other words, there appears to be some positive relationship between the ability simply to calculate and the ability to solve problems that involve calculation (are we clear on the difference between those two things?). But the ability to answer calculus questions involving no calculation at all has a stronger relationship with problem solving, and (more counterintuitively) a stronger relationship with the ability to calculate itself.

If this relationship holds in general — and I think that it does, and I’m not the only one — then clearly the environment most likely to teach calculus students how to be effective problem solvers is not the classroom primarily focused on computation. A healthy, interacting mixture of conceptual and mechanical work — with a primary emphasis on conceptual understanding — would seem to be what we need instead. The fact that this kind of environment stands in stark contrast to the typical calculus experience (both in the way we run our classes and the pedagogy implied in the books we choose) is something well worth considering.



Filed under Calculus, Critical thinking, Education, Higher ed, Math, Peer instruction, Problem Solving, Teaching

Boxplots: Curiouser and curiouser

The calculus class took their third (and last) hour-long assessment yesterday. In the spirit of data analytics à la the previous post here, I made boxplots for the different sections of the test (Conceptual Knowledge (CK), Computation (C), and Problem Solving (PS)) as well as for the overall scores. Here are the boxplots for this assessment, put side-by-side with the boxplots for the same sections on the previous assessment. “A2” and “A3” mean Assessments 2 and 3.
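Plots like these are straightforward to generate. Here’s a minimal matplotlib sketch, again with hypothetical scores standing in for the real data:

```python
import matplotlib.pyplot as plt

# Hypothetical section scores (percent), stand-ins for the real class data
a2 = {"CK": [30, 42, 50, 55, 62, 70], "C": [45, 55, 60, 63, 68, 80],
      "PS": [25, 38, 45, 52, 60, 72]}
a3 = {"CK": [48, 55, 62, 68, 75, 85], "C": [50, 60, 66, 70, 78, 88],
      "PS": [28, 44, 55, 58, 66, 80]}

data, labels = [], []
for section in ("CK", "C", "PS"):
    for name, scores in (("A2", a2), ("A3", a3)):
        data.append(scores[section])
        labels.append(f"{section} {name}")

plt.boxplot(data, showmeans=True)           # showmeans marks each section's average
plt.xticks(range(1, len(data) + 1), labels)
plt.ylabel("Score (%)")
plt.title("Assessment sections, A2 vs. A3")
plt.show()
```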

Obviously there is still a great deal of improvement to be had here — the fact that the class average is still below passing remains unacceptable to me — but there have been some definite gains, particularly in the conceptual knowledge department.

What changed between Assessment 2 and Assessment 3? At least three things:

  • The content changed. Assessment 2 was over derivative rules and applications; Assessment 3 covered integration.
  • The way I treated the content in class changed. Based on the results of Assessment 2, I realized I needed to make conceptual work a much greater part of the class meetings. Previously the class meetings had been about half lecture, with time set aside to work “problems” — meaning exercises, such as “find the critical numbers of y = xe^{-x}”. Those are not really problems that assess conceptual knowledge. So I began to fold in more group work on problems that ask students to reason from something other than a calculation, I stressed these problems from the textbook more in class, and I tried to include more of them in WeBWorK assignments — though there are precious few of them to be had.
  • The amount of lip service I paid to conceptual problems went up hugely. Every day I reminded the students of the low Conceptual Knowledge scores on the last test and that the simplest way to boost their grades in the class would be to improve their conceptual knowledge. I did not let their attention leave this issue.

Somewhere in a combination of these three things we have the real reason those scores went up. I tend to think the first point had little to do with it. Integration doesn’t seem inherently any easier to understand conceptually than differentiation, particularly at this stage in the course, when differentiation is relatively familiar and integration is brand new. So I think that simply doing more conceptual problems in class, and stressing the importance of conceptual knowledge, were the main causes of the improvements.

Quite interestingly, the students’ scores on computation also improved — despite the reduced presence of computation in class because of the ramped-up levels of conceptual problems. We did fewer computational problems on the board and in group work, and yet their performance on raw computation improved! Again, I don’t think integration is easier than differentiation at this stage in the course, so I don’t think this improvement was because the material got easier. Maybe the last test put the fear of God into them and they started working outside of class more. I don’t know. But this does indicate to me that skill in computation is not strictly proportional to the amount of computation I do, or anybody else does, in class.

To overgeneralize for a second: Increased repetition on conceptual problems improves performance on those problems dramatically, while the corresponding reduction in time spent on computational exercises not only does not harm students’ performance on computation but might actually have something to do with improving it. If we math teachers can understand the implications of this possibility (or at least understand the extent to which this statement is true) we might be on to something big.

The scores on problem solving went in two different directions: the median went up, but the mean went down. And the middle 50% didn’t get any better on the top end and got worse on the bottom end. I’m still parsing that out. It could be the content itself this time; most of the actual problems in integration show up near the end of the chapter, after the Fundamental Theorem and u-substitution, so the kinds of problems in this section were less than a week old for these students. But quite possibly the improvement in conceptual knowledge brought the median up on problem solving despite the newness of the problems. Or maybe the differences aren’t even statistically significant.
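That last question is checkable. Since the same students took both assessments, a paired test such as Wilcoxon’s signed-rank is one reasonable choice; here’s a sketch, with hypothetical scores standing in for the real ones:

```python
from scipy import stats

# Hypothetical paired Problem Solving scores for the same students on A2 and A3
ps_a2 = [40, 55, 30, 62, 45, 50, 38, 58, 48, 35]
ps_a3 = [46, 50, 28, 70, 52, 49, 42, 66, 47, 33]

stat, p = stats.wilcoxon(ps_a2, ps_a3)   # paired test, doesn't assume normality
print(f"p = {p:.3f}")                    # a large p-value suggests the change may be noise
```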

What I take away from this is that if you want students to do well on non-routine problems, those problems have to occupy a central place in the class, and they have to be done in class, where a domain expert can guide students through them, not left for outside of class where there is no such expert. Likewise, we need not worry so much that we are “wasting precious class time” on group work on conceptual problems at the expense of individual computation skill. Students might do just fine on that stuff regardless, perhaps even better if they have enhanced conceptual understanding to support their computational skills.

It all goes back to support the inverted classroom model, which I’ve been using in the MATLAB course, and now I’m wondering about its potential in calculus as well.



Filed under Calculus, Critical thinking, Education, Inverted classroom, Math, MATLAB, Problem Solving, Teaching

The case of the curious boxplots

I just graded my second hour-long assessment for the Calculus class (yes, I do teach other courses besides MATLAB). I break these assessments up into three sections: Concept Knowledge, where students have to reason from verbal, graphical, or numerical information (24/100 points); Computation, where students do basic context-free symbol-crunching (26/100 points); and Problem Solving, consisting of problems that combine conceptual knowledge and computation (50/100 points). Here’s the Assessment itself. (There was a problem with the very last item — the function doesn’t have an inflection point — but we fixed it, and students got extra time because of it.)

Unfortunately the students as a whole did quite poorly. The class average was around 51%. As has been my practice this semester, I turn to data analysis whenever things go really badly, to try to find out what might have happened. I made boxplots for each of the three sections and for the overall scores. The red bars inside the boxplots are the averages for each.

I think there’s some very interesting information in here.

The first thing I noticed was how similar the Computation and Problem Solving distributions were. Typically students will do significantly better on Computation than anything else, and the Problem Solving and Concept Knowledge distributions will mirror each other. But this time Computation and Problem Solving appear to be the same.

But then you ask: where’s the median in the boxplots for these two distributions? The median shows up nicely in the first and fourth plots, but doesn’t appear in the middle two. Well, it turns out that for Computation, the median and the 75th percentile are equal, while for Problem Solving, the median and the 25th percentile are equal! The middle half of each distribution is between 40 and 65% on each section, but the Computation middle half is totally top-heavy while the Problem Solving middle half is totally bottom-heavy. Shocking — I guess.
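If that seems impossible, here’s a small sketch (with made-up scores) of a “top-heavy” distribution whose median coincides with its 75th percentile, which is exactly the situation that makes the median bar merge into the top edge of the box:

```python
import numpy as np

# Made-up scores: the upper half of the class piles up at a single value
scores = np.array([40, 42, 48, 65, 65, 65, 65, 68])

q1, median, q3 = np.percentile(scores, [25, 50, 75])
print(q1, median, q3)   # 46.5 65.0 65.0 -- the median equals the third quartile
```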

So, clearly, conceptual knowledge in general — the ability to reason and draw conclusions by non-computational means — is a huge concern. That over 75% of the class is scoring less than 60% on a fairly routine conceptual problem is unacceptable. Issues with conceptual knowledge carry over to problem solving: notice that the average on Conceptual Knowledge is roughly equal to the median on Problem Solving. And problem solving is the main purpose of having students take the course in the first place.

Computation was not as much of an issue for these students because they get tons of repetition with it (although it looks like they could use some more) via WeBWorK problems, which are overwhelmingly oriented toward context-free algebraic calculations. But what kind of repetition and supervised practice do they get with conceptual problems? We do a lot of group work, but it’s not graded. There is still a considerable amount of lecturing going on during the class period, and there is no expectation, when I throw out a conceptual question to the class, that everybody is supposed to answer it. Students do not spend nearly as much time working on conceptual problems and longer-form contextual problems as they do on basic, context-free computation problems.

This has got to change in the class, both for right now — so I don’t end up failing 2/3 of my class — and for the future, so the next class will be better equipped to do calculus taught at a college level. I’m talking with the students tomorrow about the short term. As for the long term, two things come to mind that can help.

  • Clickers. Derek Bruff mentioned this in a Twitter conversation, and I think he’s right — clickers can elicit serious work on conceptual questions and alert me to how students are doing with these kinds of questions before the assessment hits and it’s too late to do anything proactive about it. I’ve been meaning to take the plunge and start using clickers and this might be the right, um, stimulus for it.
  • Inverted classroom. I’m so enthusiastic about how well the inverted classroom model has worked in the MATLAB course that I find myself projecting that model onto everything. But I do think this model would provide students with the repetition and accountability they need on conceptual work, as well as give me the information I need about how they’re doing. Set up some podcasts of the course lectures for students to watch or listen to outside of class; assign WeBWorK to assess the routine computational problems (no change from what we’re doing now); and spend every class period on a graded in-class activity involving conceptual or problem-solving work. That would take some work, and a considerable amount of sales pitching to get students to buy into it, but I think I like what it might become.


Filed under Calculus, Clickers, Critical thinking, Inverted classroom, Math, Teaching, Technology

Wolfram|Alpha as a self-verification tool

Last week, I wrote about structuring class time to get students to self-verify their work. This means using tools, experiences, other people, and their own intelligence to gauge the validity of a solution or answer without uncritical reference to an external authority — and being deliberate about it while teaching, resisting the urge to answer the many “Is this right?” questions that students will ask.

Among the many tools available to students for this purpose is Wolfram|Alpha, which has been blogged about extensively. (See also my YouTube video, “Wolfram|Alpha for Calculus Students”.) W|A’s ability to accept natural-language queries for calculations and other information and produce multiple representations of all information it has that is related to the query — and the fact that it’s free and readily accessible on the web — makes it perhaps the most powerful self-verification tool ever invented.

For example, suppose a student is trying to calculate the derivative of y = \frac{e^x}{x^2 + 1}. Students might forget the Quotient Rule and instead take the derivative of both the top and the bottom of the fraction, giving:

y' = \frac{e^x}{2x}

Then, if they’re conscientious students, they’ll ask “Is this right?” What I suggest is: what does Wolfram|Alpha say? If we type derivative of e^x/(x^2+1) into W|A, we get:

The derivative W|A gets is clearly nowhere near the derivative we got, so one of us is wrong… and it’s probably not W|A. Even if we got the initial derivative right in an unsimplified form, the probability of a simplification error is pretty high here thanks to all the algebra; we can check our work in different ways by looking at the alternate form and at the graphs. (Is my derivative always nonnegative? Does it have a root at x = 1? If I graph my result on a calculator or Winplot, does it look like the plot W|A is giving me? And so on.)
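The same check can be run locally. Here’s a minimal SymPy sketch (just another self-verification tool, not part of the W|A workflow) comparing the correct derivative to the mistaken one:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) / (x**2 + 1)

fprime = sp.diff(f, x)          # what W|A computes via the Quotient Rule
guess = sp.exp(x) / (2 * x)     # the "derivative of top over derivative of bottom" mistake

print(sp.factor(fprime))            # exp(x)*(x - 1)**2/(x**2 + 1)**2, or an equivalent form
print(sp.simplify(fprime - guess))  # not zero, so the guess cannot be right
```

The factored form makes the graphical checks above immediate: the (x - 1)^2 factor shows the derivative is never negative and vanishes only at x = 1.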

But how is this better than just having a very sophisticated “back of the book”, another authority figure whose correctness we don’t question and whose answers we use as the norm? The answer lies in the “Show steps” link at the right corner of the result. Click on it, and we get the sort of disclosure that oracles, including backs of books, don’t usually provide:

Every step is generated in complete detail. Some of the details have to be parsed out (especially the first line about using the Quotient Rule), but nothing is hidden. This makes W|A much more like an interactive solutions manual than just the back of the book, and the ability given to the student to verify the correctness of the computer-generated solution is what makes W|A much more than just an oracle whose results we take on faith.

Using W|A as a self-checking tool also trains students to think in the right sort of way about reading — and preparing — mathematical solutions. Namely, the solution consists of a chain of steps, each of which is verifiable and, above all, simple. “Differentiate the sum term by term”; “The derivative of 1 is zero”. When students use W|A to check a solution, they can sit down with that solution and then go line by line, asking themselves (or having me ask them) “Do you understand THIS step? Do you understand THE NEXT step?” and so on. They begin to see that mathematical solutions may be complex when taken in totality but are ultimately made of simple things when taken down to the atomic level.

The very fact that solutions even have an “atomic level” and consist of irreducible simple steps chained together in a logical flow is a profound idea for a lot of students, and if they learn this and forget all their calculus, I’ll still feel like they had a successful experience in my class. For this reason alone teachers everywhere — particularly at the high school level, where mechanical fluency is perhaps more prominent than at the college level — ought to be making W|A a fixture of their instructional strategies.



Filed under Calculus, Critical thinking, Math, Problem Solving, Teaching, Technology, Textbook-free, Wolfram|Alpha

How to convert a “backwards” proof into a “forwards” proof

Dave Richeson at Division By Zero wrote recently about a “proof technique” for proving equalities or inequalities that is far too common: Starting with the equality to be proven and working backwards to end at a true statement. This is a technique that is almost a valid way to prove things, but it contains — and engenders — serious flaws in logic and the concept of proof that can really get students into trouble later on.

I left a comment there that spells out my feelings about why this technique is bad. What I wanted to focus on here is something I also mentioned in the comments: it’s so easy to take a “backwards” proof and turn it into a “forwards” one that there’s no reason not to do it.

Take the following problem: Prove that, for all natural numbers n,

1 + 2 + 2^2 + \cdots + 2^n = 2^{n+1} - 1

This is a standard exercise in mathematical induction. The base case is trivial; focus on the induction step. Here we assume that 1 + 2 + 2^2 + \cdots + 2^k = 2^{k+1} - 1 for a natural number k and then prove:

1 + 2 + 2^2 + \cdots + 2^{k+1} = 2^{k+2} - 1

Here we have to prove two expressions are equal. Here’s what the typical “backwards” proof would look like:
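(The image of the student proof is missing from this archive; what follows is a plausible reconstruction of such a backwards draft, not the original, numbered for reference below.)

1. 1 + 2 + 2^2 + \cdots + 2^k + 2^{k+1} = 2^{k+2} - 1 (the statement to be proved)
2. (2^{k+1} - 1) + 2^{k+1} = 2^{k+2} - 1 (induction hypothesis)
3. 2 \cdot 2^{k+1} - 1 = 2^{k+2} - 1 (combine like terms)
4. 2 \cdot 2^{k+1} = 2^{k+2} (add 1 to both sides)
5. 2^{k+2} = 2^{k+2} ✓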


A student may well come up with this as his/her proof. It’s not a bad initial draft of a proof. Everything we need to make a totally correct proof is here. But the backwards-ness of it — all stemming from the first line, where we have assumed what we are trying to prove — needs fixing. Here’s how.

First, note that all the important and correct mathematical steps are taking place on the left-hand sides of the equations; the right-hand sides are the problem here. So delete all the right-hand sides of the equations, along with the final equals sign.

Next, since the problem with the original proof was that we started with an “equation” that was not known to be true, eliminate any step that involved doing something to both sides. That would be line 4 in this proof. This might involve some re-working of the steps, in this case the trivial task of re-introducing a -1 in the final step:

You could also reverse these first two steps — eliminate all “both sides” actions first, then get rid of the right-hand sides.

Then, we need to make it look nice. So for n = 1 to the end, move the (n+1)^\mathrm{st} left-hand side and justification to the n^\mathrm{th} right-hand side:
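(The final image is also missing; assembled from the steps above, the finished “forwards” proof reads:)

1 + 2 + 2^2 + \cdots + 2^k + 2^{k+1} = (2^{k+1} - 1) + 2^{k+1} (induction hypothesis)
= 2 \cdot 2^{k+1} - 1 (combine like terms)
= 2^{k+2} - 1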


Now we have a correct proof that does not start by assuming the conclusion. It’s shorter, too. Really the main thing wrong with the “backwards” proof is the repeated — and, notice, unnecessary — assertion that everything is equal to the final expression. Remove that assertion and the correct “forwards” proof is basically right there looking at you.


Filed under Abstract algebra, Geometry, Math, Problem Solving

Resisting the urge to verify

When I am having students work on something, whether it’s homework or something done in class, I’ll get a stream of questions that are variations on:

  • Is this right?
  • Am I on the right track?
  • Can you tell me if I am doing this correctly?

And so on. They want verification. This is perfectly natural and, to some extent, conducive to learning. But I think that we math teachers acquiesce to these kinds of requests far too often, and we continue to verify when we ought to be teaching students how to self-verify.

In the early stages of learning a concept, students need what the machine learning people call training data. They need inputs paired with correct outputs. When asked to calculate the derivative of 5x^4, students need to know, having done what they feel is correct work, that the answer is 20x^3. This heads off any major misconception in the formation of the concept being studied. The more complicated the concept, the more training data is useful in forming it.

But this is in the early stages of learning. Life, on the other hand, does not consist of training data. In real life, students are going to be presented with ambiguous, ill-posed problems that may not even have a single correct answer. Even if there is one, there is no authoritative voice that says definitively that their answer is right or wrong. At least, you’d have to stop and ask how you know that the authority itself is right or wrong.

So as a college professor, working with young men and women most of whom are one step away from being done with formal education, it serves no purpose — and certainly does not help students — to pretend that training, the early stage, goes on forever. At some point I must resist the urge to answer their verifying questions, despite the fact that students take great comfort in having their work verified for them by an external authority, and despite the fact that teachers are usually perceived by students as better the more frequently they verify.

I’ve started making the training stage and the self-verification stage explicitly distinct in my classroom teaching. In a 50-minute class, I’ll usually break down the time as follows:

I’ll spend the first 20 minutes of class focusing on one or two main ideas, along with some simple exercises, a few of which I’ll do (to help students get the flow of working the exercises and to provide training data, not only on the math but also on the notation and explication) and more of which they will do, with full answers provided to the “Is this right?” questions along the way. Then five minutes for further Q&A or to wrap up the work.

But then the training phase is over, and students will get more complicated problems (not just exercises) and are told: I will now answer any question you have that involves clarifying the terms of the problem. But I will not answer any question of the form “Is this right?” or provide any guidance on technology use. What I will do instead, if students persist in asking “Is this right?”, is answer their questions with more questions of my own:

  • Are the units working out correctly? Are you getting cubic feet for volume, meters per second for velocity, etc.?
  • Did you graph the function to see if the roots are really where you say they are?
  • Have you seen a problem like this before in the book, your notes, or your homework?
  • Does that answer make sense in the context of the problem? Did you get a positive derivative value for a function that is visibly decreasing?
  • What did Wolfram|Alpha (or Maple or MATLAB, etc.) say? *
  • What do your group-mates think?

And so on. Many of these are merely ripped from the pages of Polya’s How to Solve It, which ought to be required reading for, well, everybody. In other words, in this post-training phase of the class, students must simulate life in the sense that they are relying only on their wits, their tools, their experiences, and their colleagues, and not the back-of-the-book oracle.
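Several of those questions can even be automated. Here’s a minimal sketch of the “check it numerically” idea, using a hypothetical helper (my invention, not an established routine) to test a claimed derivative against a central difference quotient:

```python
import numpy as np

def check_derivative(f, fprime, a, h=1e-6, tol=1e-4):
    """Spot-check a claimed derivative against a difference quotient at x = a."""
    numeric = (f(a + h) - f(a - h)) / (2 * h)   # central difference approximation
    return abs(numeric - fprime(a)) < tol

# The quotient-rule example from the Wolfram|Alpha post above
f = lambda x: np.exp(x) / (x**2 + 1)
wrong = lambda x: np.exp(x) / (2 * x)                   # top'/bottom' mistake
right = lambda x: np.exp(x) * (x - 1)**2 / (x**2 + 1)**2

print(check_derivative(f, wrong, 2.0))   # False -- fails the spot check
print(check_derivative(f, right, 2.0))   # True
```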

Also, because I tell students up-front that this is how the classes are going to be structured, they get the idea that there is a time for getting verification and another time for learning how to self-verify, and hopefully they learn that the act of self-verifying (or at least the urge to) is something like a goal of the course.

My hope here is to provide training data of a different sort — training on how to be independent of training data. This is the only kind of preparation that makes sense for young adults heading for a world without backs of books.

* You could make a good argument that Wolfram|Alpha used in this way is just a very sophisticated “back of the book” — an oracle that students use as an authority. I think there are at least a couple of reasons why W|A is more than that, and I’ll try to address those later. But you can certainly comment about it.


Filed under Critical thinking, Education, Math, Problem Solving, Teaching, Uncategorized

What are some fatal errors in proofs?

The video post from the other day about handling ungraded homework assignments went so well that I thought I’d let you all have another crack at designing my courses for me! This time, I have a question about really bad mistakes that can be made in a proof.

One correction to the video — the rubric I am developing for proof grading gives scores of 0, 2, 4, 6, 8, or 10. A “0” is a proof that simply isn’t handed in at all. And any proof that shows serious effort and a modicum of correctness will get at least a 4. I am reserving the grade of “2” for proofs that commit any of the “fatal errors” I describe (and solicit) in the video.


Filed under Education, Geometry, Grading, Math, Problem Solving, Teaching