Notices of the American Mathematical Society
Mastery Assessment: What, Why, and How?
The core idea of mastery assessment is that students are assessed directly on whether they have fully met specific learning objectives rather than using a “bucket of partial credit points” to determine a grade. In most mastery grading systems, if learning objectives have not been met, the students are given further attempts to return to the material and reassess.
Why might this be preferable? First, from a student viewpoint, it provides a clear, learning-focused justification for their grade: having to earn 90% of all possible points for an A feels arbitrary, while earning an A for demonstrating mastery of all course learning outcomes has a clear justification. From an educator viewpoint, providing transparency and opportunities for reassessment can support students in better learning the material by setting clear expectations and creating a strong incentive to return to topics they haven't yet mastered. Taking the time to determine and explicitly articulate the learning objectives of your course (a good pedagogical practice regardless of your assessment structure!) also gives you a framework for thinking carefully about what you are trying to accomplish, which can help you grow as an instructor. Finally, mastery assessments can help make a course more equitable: they tend to be more flexible for students, make assessments lower-stakes and less stressful, and reward learning the material as an end goal, rather than expecting all students to learn at the same speed.
Below, we share three examples that illustrate how each of us has incorporated mastery assessment in our courses. The implementations are quite different: the first course required only fairly minor changes to the exam grading scheme; in the second, mastery grading was central to every part of the course design; and in the third, new mastery assessments were implemented in a large coordinated introductory course.
We hope the three short examples we include below can provide a sense of why mastery assessment is valuable, some encouragement to give it a try, and some contrasting illustrations about how it can be implemented in a variety of courses with different constraints and goals. There are many math educators with experience and expertise in implementing mastery and other alternative assessments—if these three examples pique your interest, we invite you to further explore some of the many resources out there!
A Shift to Mastery Grading on Exams (Beth Wolf)
My first experience with implementing mastery grading was in a differential equations class at a small liberal arts college. I had already taught the course using a standard grading scheme, and I was confident that I could develop a new, mastery-based exam system that could be successful and complementary to other class assessments. Relatedly and importantly, I felt this was a reasonable undertaking in terms of workload given that I was not preparing to teach the course for the first time.
In my revised mastery system, the exam portion of students’ grades, which was 48% of their course grade, was based on their mastery of sixteen major course learning objectives I had delineated. For example, three of the objectives assessed whether students could: use the variation of parameters method; determine the stability of nonlinear systems; and solve a two-point boundary value problem. On each of the three in-class exams, students received a problem for each new objective that had been covered. On later exams, they could also receive a problem for any previous objective they hadn’t yet mastered. Additionally, a few “mini-exam” sessions, each lasting roughly half of a class period, were set aside for reassessment of objectives. Only a student’s highest score on an objective was kept. The final exam included two new objectives, but otherwise was solely an opportunity to reassess earlier objectives. Each problem was graded holistically: the student’s work demonstrated full mastery (M), with few to no minor errors; showed significant progress (P), but with significant errors or misunderstandings; or was unsatisfactory (U). (Looking back on the scheme now, I would reword the third category as (N) for “no progress made yet” to better support a growth mindset!) Each objective was worth 4% of the overall grade; an M earned all 4%, a P earned 2%, and a U did not yet earn credit.
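As a rough illustration of the bookkeeping this scheme requires, the "keep only the highest mark per objective" rule can be sketched in a few lines of Python. This is a hypothetical sketch, not the tooling actually used; the objective names and marks in the example are made up.

```python
# Credit per mark: M (mastery) = 4%, P (progress) = 2%, U (unsatisfactory) = 0%.
CREDIT = {"M": 4.0, "P": 2.0, "U": 0.0}
# Ordering used to pick a student's best mark on an objective.
RANK = {"U": 0, "P": 1, "M": 2}

def exam_grade(attempts):
    """attempts maps each objective to the list of marks earned across
    all exams and mini-exams; returns the percentage of the course
    grade earned from the exam portion."""
    total = 0.0
    for marks in attempts.values():
        best = max(marks, key=RANK.get)  # only the highest mark is kept
        total += CREDIT[best]
    return total

# A student who mastered two objectives after reassessment and has
# partial progress on a third earns 4 + 4 + 2 = 10 percentage points:
grade = exam_grade({
    "variation of parameters": ["P", "M"],
    "stability of nonlinear systems": ["U", "P", "M"],
    "two-point boundary value problems": ["P"],
})
```

Note that a later, weaker attempt can never lower the total: the maximum over all marks is monotone in the number of attempts.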
I took this step into mastery grading with the hope that it would better motivate and support my students in learning the material. Indeed, student comments from the end of the semester indicate the mastery-based exam system had these and several other advantages. For example, students wrote that “it made studying very effective,” that “it took the stress off,” and that “our small mistakes didn’t count against us.” Even better, they felt that “students… earn their grades if they strive for them” and “the chance to retake the topics encourages students to learn” (emphasis theirs).
There were also, unsurprisingly for a first attempt, a few clear drawbacks. Even though problems weren’t labeled with specific objectives, it was often artificially clear which strategy students should use to solve a problem. To address this shortcoming, I would add an explicit objective to assess whether students could choose an appropriate strategy. Also, once students mastered an objective, they didn’t need to return to it, so, while students did have other course assessments, such as homework, labs, and a paper, they may have lost the potential benefit of revisiting material later in the semester. This could be addressed, for example, by requiring students to reassess on core objectives on the final (e.g., see [1]).
Overall, I believe strongly that, while this particular scheme wasn’t perfect, the shift to a mastery grading scheme, even in a limited way, fundamentally changed student attitudes and how they approached the course and their learning, and it was a worthwhile effort to undertake.
A Mastery-Centered Course (Nina White)
In redesigning the assessment for an inquiry-based Euclidean Geometry course aimed at both math majors and future high school math teachers, I had several goals and constraints. I knew that my student body, even in this small course, would vary drastically, and I wanted all students to be able to work at their “learning edge.” For this reason, I wanted more personalization and less uniformity in the assessment structure. After several semesters reading about mastery-based and specifications grading (e.g., see [2]), I was ready to design my own system.
Putting my own spin on ideas inspired by conversations with TJ Hitchman, I first articulated the kinds of skills and processes I wanted students to learn and demonstrate. Some of these might sound like familiar learning goals: “use definitions to justify assertions” or “write arguments involving properties of parallel lines.” But some are a little less typical: “extend ideas to find or create new mathematics” or “make a clear definition to fit a new concept.” I firmly believe that these last two learning goals cannot be fairly measured on a timed exam. So what, instead, can be taken as evidence of meeting such a learning goal?
I decided to use all the data and observations available to me to record when and how students had met the learning goals. Each 80-minute class comprised both groupwork and student presentations. I took copious notes of what I observed in class, helped by an innovative iPad app called LessonNote. These notes helped me to populate a spreadsheet for each student recording when and how I’d seen them meet learning goals (often demonstrated multiple times). Instances of meeting a goal were primarily drawn from classroom observations (in their small group discussions or presentations) and evidence from written assignments (which were individualized to students’ interests and needs). A written midterm gave students additional opportunities to meet unmet learning goals; that is, no points were “lost,” only gained. The course syllabus specified a number of learning outcomes to be met and a number of written assignments to complete to earn each grade.
Some advantages of this assessment system were that students could work at their learning edge and pursue their interests. For example, two honors students in the class completed much more ambitious proofs and even some extra material. Another student had some unusual conjectures and techniques that he was able to follow much further than he could have in a traditional class. At the same time, I could see where certain learning objectives had not yet been met and could strategize with students about what to do next to meet them. Another major advantage was that the midterm was low- or no-stress for students. Finally, I was very happy with how much this design engaged and supported students’ written and oral communication: no student was ever at risk of being “marked down” for sharing their ideas publicly, only rewarded.
It’s important to note that I only had 11 students in this course. I think this design could still work well for up to 20 students. Much more than that and it might become untenable. It’s also important that this course was not a prerequisite for any other course. It was very much okay (and encouraged) in this context for students to build their mathematical practices over a narrow and specific set of content goals.
Some student feedback was effusive: “Customized learning experience. Challenging and rewarding. Best math course I’ve taken [at this institution].” Others saw benefits as well as challenges: “I really liked the structure of the class in that we got to write articles about topics that were interesting to us. I do think that the mastery grading is a bit difficult to get used to—I like the overall idea, but would have liked more reflections/check-ins throughout the semester so that I had a better idea of how I was doing.” This speaks to the difficulty of full transparency with such an unfamiliar system. Lastly, I’ll share a student quotation that speaks to the community in this highly interactive and student-centered class, which I believe was highly related to the flexible and supportive assessment structure: “[This course] made me understand what it means to be a mathematician. This was such a fun course! … I was so impressed with my fellow classmates as well. They were bright, creative and generous with their knowledge. I really appreciated watching them share their proofs and constructions… I gained a lot to be a better mathematics teacher.”
Mastery Assessment in a Coordinated Course (Hanna Bennett)
Math 105: Data, Functions and Graphs is one of three large coordinated courses in the University of Michigan’s Introductory Math Program. Since the early 1990s, these courses have been run in small sections, with extensive use of active learning and groupwork, and an emphasis on conceptual understanding and applications. There is a lot about this program that works very well, but we have had concerns about aspects of the assessment structure, especially with respect to equity and inclusion. In particular, we wanted a grading system that was more transparent, allowed students to feel more agency over earning the grade they wanted, and leveled the playing field for students coming in with different previous knowledge. In 2019 we began making changes toward including more mastery assessment in the course.
There were many logistical challenges in implementing such a system in a course this large (both in terms of students and instructors: Math 105 in the fall typically has around 550 students and 25 instructors). We already had a model we’d been using in a more limited way: gateway assessments, administered through the WeBWorK system. These assessments consist of problems with randomized elements, selected from a larger problem bank. Students take them in proctored labs, though they can practice elsewhere at any time. While an assessment is open, students can make up to two attempts per day that count toward their grade, and only their highest score affects their course grade. Previously, Math 105 had one such assessment, given at the beginning of the semester to ensure that students had (or reviewed) the prerequisite algebraic skills they needed to succeed in the course. In implementing the new mastery-based assessment system, we greatly expanded both the number of these assessments and the types of problems they incorporate, to cover material throughout the course. We have currently settled on five mastery assessments administered in this way. Each assessment has seven problems (many with multiple parts), and students must get at least five completely correct in order to earn credit for the assessment. In this way, we are able to continue to hold students to high standards, while increasing the number of opportunities students have to show us that they have learned the material.
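The credit rule just described is simple to state precisely. Here is a hypothetical Python sketch of it (the actual WeBWorK configuration works differently; the function and variable names below are ours, for illustration only):

```python
# Each mastery assessment has seven problems; an attempt earns credit
# only if at least five problems are completely correct. Across all
# attempts, only the best one matters, so retaking can never hurt.
PROBLEMS = 7
THRESHOLD = 5

def attempt_score(correct_flags):
    """Number of completely correct problems on one attempt."""
    assert len(correct_flags) == PROBLEMS
    return sum(correct_flags)

def earns_credit(all_attempts):
    """True if the student's best attempt meets the five-problem bar."""
    return max(attempt_score(a) for a in all_attempts) >= THRESHOLD

# A student whose first proctored attempt falls short but whose second
# attempt gets six of seven problems completely correct earns credit:
attempts = [
    [True, True, False, True, False, False, True],  # 4 correct: not enough
    [True, True, True, True, False, True, True],    # 6 correct: credit earned
]
credit = earns_credit(attempts)  # True
```

Because partial work on a problem counts for nothing here, the bar is all-or-nothing per problem; the flexibility comes entirely from the repeated attempts.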
We still have other assessments in this course, including written exams, which focus on skills that cannot reasonably be assessed through a computer-graded system. But the addition of mastery assessments has allowed us to substantially decrease the length and weight of these exams, giving students more time to show us what they know and greatly reducing the stress students experience while taking the exams.
This change required a large investment in resources: funding to increase the number of proctored computer labs, a course release for a faculty member in order to begin writing the problem bank, and summer funding to continue to work on problem development. Also, there continue to be challenges. Given that this is a coordinated course, we have to make sure instructors are buying into and clearly communicating the grading system to students. The fact that problems are graded by computer means that students can be frustrated when they understand the basic ideas but make small mistakes when entering answers.
Nevertheless, in a focus group discussion after the second semester of masteries, students overwhelmingly had positive things to say.
- “Exams have almost got an ultimatum of you do well or you fail. It’s like you do a problem and if you don’t get it right the first time, then that hurts you. What I liked about masteries was that I could do a problem and if I didn’t get it perfect the first time, that would be okay. I could learn from that and try it again. I wasn’t penalized for trying.”
- “I really liked the mastery or the way that it was set up because it gave students who are willing to put the work in and try hard a chance to be really successful…”
- “It was a nice destressor to know I could take multiple times, take my time learning it.”
Experienced instructors also generally found the mastery assessments to be a step in the right direction. They did express some concerns: that the system was still inequitable in that it advantaged students who were able to invest more time repeatedly retaking the assessments; that students were overwhelmed by the workload and the number of course components; that the emphasis on these assessments decreased the emphasis on skills that have historically been important in the course, such as tackling complex problems spanning multiple topics; and that students tended to memorize patterns in the problems they encountered rather than learn the underlying concepts. In response to these concerns, we decreased the number and weight of mastery assessments and replaced the final mastery assessment with a written exam. We are also working on expanding the problem bank and improving our messaging to students about how to study for the masteries and how to use past attempts to learn and improve.
Additional Advice
There is certainly no one-size-fits-all implementation of mastery assessment; the details of any assessment system should be carefully chosen to fit your logistical constraints, your student body, and your course goals. However, mastery assessments are a flexible and powerful tool to support student learning in many different contexts. If you’d like to try mastery assessment in your classroom but aren’t sure where to start, you can begin with small changes, such as adjusting the grading of exams as in the first example here. An even lower-investment first step may be as simple as allowing exam revisions, which similarly conveys the importance of learning from mistakes and supports students who learn at different speeds.
It is a good idea to make sure you have the extra time needed to prepare or update your syllabus, learning objectives, and other parts of your grading scheme, and that you have a plan to handle, for example, reassessments in a manageable manner. Particularly for new faculty, we also recommend talking with your chair and other senior faculty about your plans. Having their support is important. For larger changes or changes to larger courses, perhaps department leadership can even help connect you to applications to internal grants or other institutional support.
We will also mention that you should be prepared to deal with student resistance and confusion. Students who have been most successful in nonmastery grading systems may be the most hesitant about a different one, and students who have in the past passed courses by accumulating partial credit may have to adapt, since mastery assessments set a higher bar. Make time to ensure that all your students fully understand the grading system, which is likely very different from what they are used to. We also suggest that you stick with holistic grading marks where possible, rather than giving a numerical score that students may try to interpret using a grading scale they are more familiar with. Solicit student feedback early on and help assuage their concerns as they come up; you can also explain to them transparently the advantages of the grading system for their learning.
We hope you are ready to give mastery assessment a try! We’ve included some reflective questions and a few additional references below.
Reflective Questions:
- What are my specific goals and learning objectives for my students?
- Who are my students? How does that inform my assessment design?
- What are my time constraints as an instructor? What tools could help me more efficiently meet my assessment goals?
- How can I clearly and transparently communicate goals and expectations to my students?
- What am I willing to take as evidence of student competency? Does it have to be a timed, individual test? Or from what other activities or assessments could I infer competency?
References
[1] David Clark, It’s almost final exam time, Grading for Growth (blog), https://gradingforgrowth.com/p/its-almost-final-exam-time.
[2] Linda Nilson, Specifications Grading, Routledge, 2014.
[3] Robert Talbert, Three steps for getting started with alternative grading, Grading for Growth (blog), https://gradingforgrowth.com/p/three-steps-for-getting-started-with.
[4] Implementing Mastery Grading in the Undergraduate Mathematics Classroom, PRIMUS 30 (2020), no. 8–10.
[5] The Grading Conference (website), https://thegradingconference.com/.
Credits
Photo of Hanna Bennett is courtesy of Hanna Bennett.
Photo of Nina White is courtesy of Teresa Stokes.
Photo of Beth Wolf is courtesy of Nina White.