Teachers can adopt better grading strategies to reward students for the knowledge they’ve gained, instead of penalizing them for what they haven’t yet mastered
My son’s first grade teacher told me that he was lucky to have a younger sister. Six-year-olds are not known for their empathy, and part of Mrs. Seabolt’s job was to build that trait in her students. She’d often see children impatient with classmates who couldn’t tie their shoes or recite their addition tables. Students with younger siblings had an easier time understanding why that behavior wasn’t kind. They recognized, after all, that it was inappropriate to question a toddler’s inability to talk. Those students didn’t compare their younger siblings against themselves, but rather praised them for their growth. By extension, those students were more able to see their classmates through an affirmative lens.
As a teacher, I have been part of any number of meetings about common assessments. One of the most frequently asked questions has always been how many points to deduct for various types of mistakes. Is a simple sign error worth a full point deduction? What if the students don’t label their diagrams, or they use the wrong variable? With time, I noticed that we were spending all of our energy thinking about what our students weren’t doing instead of what they were doing.
My colleague Vince Matsko satirized this mindset in a blog post that imagined students as blocks of cheese. The teacher was a meticulous cheesemonger trimming off imperfections. A dropped minus sign may only require a corner shaved off, while a more significant arithmetic error might call for more of a grating. An error as grievous as a misremembered formula would cause a slice to be removed. At the end of the semester, the remnants would be weighed, with full trust by all sides that the scale had ruled justly.
The problem with this scenario is that it misses entirely what students need from teachers. They are not the misshapen products of a negligent fromager. Or, to adopt a more apt metaphor, students are not overgrown trees in need of pruning; they’re saplings that need encouragement to flourish.
So how do we, as teachers, do that? How can we stop seeing students for what they lack, and perhaps start seeing them for what they are?
Matsko offered one solution by changing the way he graded. Instead of using a point-based system, he graded every problem on a three-tiered scale: Completely Correct, Essentially Correct, or No Credit. (Note the lack of an “Incorrect” designation.) An exam was scored based on how many CCs and ECs the student earned, ignoring any NCs. By grading more holistically, Dr. Matsko engaged with the students’ overall depth of understanding rather than listing shortcomings. I got to witness his grading system in action while he and I were teaching together at the Illinois Mathematics and Science Academy, and it was decidedly effective. His students began to see him less as an adversary and more as a partner in their learning.
Inspired by Dr. Matsko’s successes, I looked for a way to make my assessments more reflective and meaningful without abandoning the familiar points system. To that end, my PLC and I designed a brand new precalculus final exam whose scoring was focused on standards. We started by making a list of core learning targets to assess, and then we wrote problems to address them. For example, the targets of writing an equation to model a problem, solving a quadratic equation, and drawing an accurate diagram might all be tested by a single word problem. Students could earn 2, 1, or 0 points on each target, with the point values mirroring Matsko’s CC, EC, or NC trichotomy. To our surprise, the list of key learning targets we generated numbered about 50, making the entire exam worth about 100 points – a perfectly fine total for an exam.
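For readers who like to see the arithmetic laid out, here is a minimal sketch of that scoring scheme. The target names and the grouping of three targets into one word problem are illustrative examples, not taken from any actual exam.

```python
# Standards-based scoring sketch: each learning target earns
# 2 (Completely Correct), 1 (Essentially Correct), or 0 (No Credit).
CC, EC, NC = 2, 1, 0  # point values mirroring Matsko's trichotomy

def exam_score(target_marks):
    """Sum per-target marks; with ~50 targets the maximum is ~100 points."""
    return sum(target_marks)

# A single word problem might assess three targets at once (hypothetical names):
marks = {
    "write an equation to model the problem": CC,
    "solve a quadratic equation": EC,
    "draw an accurate diagram": CC,
}
print(exam_score(marks.values()), "out of", 2 * len(marks))  # 5 out of 6
```

Note that an NC contributes zero rather than subtracting anything: the score can only grow as the student demonstrates more targets.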
We gave the exam in a spirit of experimentation, and we accepted up front that the results could prove disastrous. Was giving half credit for partially understood targets enough? Could it be that a student knows, say, 80 percent of each target, and then fails the exam with ECs across the board? Were we being too punitive? Luckily for us, those concerns didn’t play out. What we found was that students did better under the new system, even if only by a small margin. In fact, it was the traditional system that had been too punitive: it let an error in one facet of a problem unduly drag down credit for the others. I would recommend this system of exam writing and grading to any teacher looking to highlight her students’ strengths rather than their deficits.
For those looking for a less radical change, I’d suggest simply turning around the questions we ask ourselves. When we build our rubrics, rather than asking, “How much should we take off for this mistake?” ask, “How many points should we award for a student demonstrating this knowledge?” Compare the student not to your answer key but to the blank page they started with. How did the student improve that page? Did she write an equation to model the problem? Did she draw a diagram to make sense of it? Did she use her arithmetic tools to solve the problem? All of those are steps that represent learning worthy of credit. And if a student did all of those things but made a numerical error along the way, I’d report her score as a +6 out of 7 rather than a −1.