Rubric Design

Initial Publication Date: March 6, 2013

Introduction

An assessment rubric is simply a table that lists evaluation criteria and, for each criterion, describes several levels of performance. While it is possible to assess work with only a list of criteria, students often need more detail to understand what makes some work exemplary while other work is merely passing. As one colleague once shared about her own frustrating early college experience, "I wasn't producing C work on purpose!" Creating a more detailed grading rubric can be an instructive teaching tool (and it often reduces grading time as an extra bonus!).

1. Identify grading criteria based on course goals

If assignments are designed to move students toward achievement of course goals, then assessment criteria should reflect those course goals. Sometimes, the course goals themselves can serve as criteria for grading the assignment. For example, "A working understanding of Ohm's Law" might simultaneously be a course goal and a grading criterion for an assignment. But often course goals are written more broadly, so that they contain multiple assignment grading criteria. For example, the course goal of "Mastery of the model of supply and demand" may show up in grading criteria as "Comprehends connection between marginal cost and supply" and "Completely explores marginal benefit."

2. Choose descriptions of activity over adjectives and adverbs when describing levels of proficiency

Sometimes the difference between one level of proficiency and the next is black and white. For example, an assignment is either free of typos, has one to two typos, or has more than two. Such distinctions are cut and dried. But in most cases the difference between novice, competent, and exemplary performance is seen in the way something is completed. When dealing with such cases, poor rubrics put too much weight on adjectives that can have different meanings to different readers. That leaves students unsure of what exactly distinguishes strong from weak work. When possible, look for descriptors that are clearly distinct. For example, consider distinguishing good and exemplary performance on the criterion "distinguishes between correlation and causation" with the examples below:

Of course, every rule has its exceptions. There will be times when it is all but impossible to complete your rubric without using adjectives and adverbs as distinguishing markers. In such cases, consider showing students weak and strong examples (see above) to give meaning to your rubric.

3. Decide whether you want to give numeric scores to each criterion

Many faculty and students think all rubrics are quantitative: students get a numeric score on each criterion and the final score is the sum of the criterion scores. While that is one way to use a rubric, there are times it doesn't make sense. What grade do you give an abysmal argument filled with QR fallacies which nonetheless includes a strong supply and demand model? Do the F and A grades on the sub-categories average to a C? Or does the holistic sense (it was a miserable argument!) mean the paper is an F?

You can dodge this line of questioning by giving only qualitative scores to each criterion. While numeric scores can make assigning grades a breeze, a qualitative rubric often reveals visual patterns that yield most of the efficiency of numeric grading rubrics. The nature of the assessment criteria and their interaction should dictate whether you give numeric scores to each criterion.

4. Develop generic QR-related rubric content that you can adapt to many assignments

Every assignment has its own unique context, and that context will often show up in specific references in the assessment rubric. But many aspects of QR show up over and over again. By developing a generic rubric, you can quickly copy and paste relevant elements into your grading rubric and then revise them for the specific context. You can look at these two rubrics for inspiration (courtesy of John Bean, Seattle University):

By revising the bullet-pointed sub-criteria in each section (see the left-most column) he can quickly adapt these rubrics to many new assignments.
Note: If you prefer qualitative over quantitative rubrics, replace John's numeric scores of 10 through 8 with a descriptor like "strong," his scores of 7 through 4 with something like "adequate," and his scores of 3 through 0 with something like "inadequate."

Example Assignment Assessment Rubrics

In addition to the ideas above, the teaching activities in the collection have assessment information embedded in them, which can serve as models for creating new rubrics. Here are a few activities with strong assessment components.
