For the first 18 years of my academic career, I ran into the same problem every semester. It
happened at about the 13-week mark: I would share a tearful farewell with my family and
begin serving my sentence in Grading Jail. In that moment, I would look back on a career of
repeat offenses against efficient and timely grading of student work, and see clearly that I had
no one to blame but myself. I was a hopeless recidivist.
Or so it seemed. Remarkably, the hard time I served was enough to rehabilitate me and turn
me into a productive member of grading society. And now, since we're at that point of the
semester, I'm ready to share what I've learned, in hopes of saving others from the academic
clink.
But first (and before I beat the jail metaphor any further into the ground), I ought to disclose
that my own relationship with grades is an ambivalent one. I think too much emphasis is put
on grades by both students and institutions, I don't think a single grade is representative of a
student's academic ability, and I firmly reject the idea that grades reflect intelligence or
potential. That said, I also realize the need to assess student work in a consistent and
understandable manner. In a perfect educational world, there would be individualized
assessments, both formative and summative, and in-depth conferences in which professors
and students could share and discuss these narratives. In our imperfect world, grades are still
a feature of the academic landscape, and we owe it to students to fairly use the tools we have,
no matter how flawed.
Prompt feedback may be a "best practice," but too often in the semester, we honor that
injunction primarily in the breach. Thus, in a paroxysm of equal parts guilt and panic, we
lock ourselves in Grading Jail (hard labor, with no parole) until we've atoned for our
(procrastination) sins. The all-night grading binge is problematic, though. Are we really
giving effective and thoughtful feedback to students at 3 a.m., after we've read 25 (or more)
of their classmates' essays? Are the standards applied to the final paper the same as the ones
used to evaluate the first, so many hours and cups of coffee ago?
Here, then, are the three strategies I've found most helpful in the continuing quest to better
manage my grading workflow and stay out of trouble.
Before classes start, as I'm drafting my syllabi, I print out calendars for every month of the
term and lay them out on my desk. Using different colored markers for each section/course, I
plot out the due dates for every assignment I will give throughout the semester. A cluster of
different colors in a three-day span is a quick visual cue that I ought to reconsider some due
dates. Is there a distinct pedagogical need to collect a stack of exam books from one course,
and a pile of essays from another the next day? Or can I space out those due dates
differently?
I know this sounds head-slappingly simple, but how many of us really do this sort of careful
planning and comparison in advance? Judging from the litany of "I have to grade four
sections of papers" lamentations on my Twitter feed, it's a strategy that more of us should
consider. Sometimes the simple steps pay off exponentially in the long run.
Rubrics done well are your friend. I was a rubric skeptic early in my career, but
with education and experience, I've become a big fan of them for much of my
grading. The initial impetus for me to consider rubrics was the realization that I was
using essentially the same set of comments for much of my feedback across classes
and assignments. How many times do I want to write "use a specific example here" or
"awkward phrasing; please rework"?
My initial solution was to have a Word document with the phrases I used most often open
while I graded, and then cut-and-paste the appropriate comment as needed. I ran into two
problems with that strategy, though: First, I had to be grading student work electronically to
use it, and second, it became patently absurd. If I was writing the same comments over and
over, maybe I needed to revisit just how clear my criteria were to my students. As I wrote out
my criteria for evaluating student work explicitly, I also realized that I often didn't apply
them evenly. I mean, it's easy to be seduced by a beautifully written essay, even if it says
little of substance, and especially if it comes on the heels of four stinkers in a row.
Was I being as fair as I could be? And how would I know if I was? That was where rubrics
came in for me, after I did some research and consulted with colleagues.
Constructing a rubric involves a significant investment of time on the front end, but once
designed, using it to assess student work cuts my grading time by more than half. I'm not
writing the same basic comments over and over, because they're on my rubric, and I can
circle or highlight them there. I use the time I've saved to concentrate on more meaningful
individual feedback. Most important, having specific criteria and clearly defined benchmarks
gives me the assurance that I'm being as consistent as possible in my grading. Indeed, my
assignment design has improved as a result of forcing myself to define specific learning
outcomes, and how I plan to assess them.
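For readers who think in data structures, the mechanics of rubric-based feedback can be sketched in a few lines. Everything below is illustrative: the criteria, levels, and canned comments are invented for the example, not taken from the author's actual rubric.

```python
# A rubric as a mapping: criterion -> {benchmark level: canned comment}.
# Criteria, levels, and comments are hypothetical examples.
RUBRIC = {
    "thesis": {
        "excellent": "Clear, arguable thesis sustained throughout.",
        "developing": "Thesis present but needs sharpening.",
        "missing": "No identifiable thesis; state your argument early.",
    },
    "evidence": {
        "excellent": "Specific, well-chosen examples support each claim.",
        "developing": "Some claims lack support; use a specific example here.",
        "missing": "Assertions are unsupported by evidence.",
    },
}

def feedback(scores):
    """Assemble written feedback from the level chosen for each criterion."""
    return [f"{criterion}: {RUBRIC[criterion][level]}"
            for criterion, level in scores.items()]

for line in feedback({"thesis": "excellent", "evidence": "developing"}):
    print(line)
```

The point of the sketch is the design choice the essay describes: the repeated comments live in the rubric once, so grading becomes selecting a benchmark per criterion rather than rewriting the same feedback on every paper.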
An additional advantage: If students have the rubric in front of them as they work, ambiguity
and guesswork (as well as the anxiety those can produce) are eliminated from the process.
That's no small thing when it comes to a high-stakes assignment like a final research paper,
for example. (Of course, hastily written or vague rubrics don't provide any of those benefits,
and indeed may exacerbate the very problems they were intended to solve.)
A caveat: Advance distribution of a rubric shouldn't be your only conversation with students
about your expectations. A good, detailed rubric promotes transparent criteria, consistently
applied. As the only reference point, though, it becomes easy for students to "write to the
rubric," creating homogeneity and blandness rather than giving them the freedom to achieve
learning outcomes in a creative and genuine way.
I can talk faster than I write. So can you, I imagine. In the last couple of years,
speech-to-text options (Google's Gboard mobile app, for example) have proliferated.
Dictating comments into a Google Doc and using speech-to-text to transcribe them in
real time is one way to provide substantial feedback on a large amount of student
work without developing carpal tunnel syndrome.
However, I've found it even more meaningful to record my comments and then share them
with individual students via an audio file they can listen to on any device. I stumbled into this
method out of desperation several years ago; I was woefully behind on grading student essays
and needed a way to get through them quickly without skimping on feedback, so I decided to
do a virtual "talk-through" of the papers for each student. I used a voice-recorder app on my
tablet, and recorded myself talking through the paper with summary comments at the end,
which took about six to eight minutes for each essay. Then I saved the files in Dropbox
folders and gave students a link to their feedback folder so they could stream or download the
audio as they wished.
I came to that method independently, but subsequent research showed me that audio feedback
has been a practice in some quarters for both face-to-face and online courses. The research
also affirmed both my initial impressions and my students' reactions: My feedback felt more
personal, it balanced specific and global commentary, and students felt like they paid more
attention to my audio comments than they did to standard written feedback.
Since then, I've streamlined my practice a bit: I read through a paper, making cursory notes
in the margins. Then I formulate my overall summary and decide which themes or issues I
want to capture. I record my talk-through on a voice-recorder app (I use Voisi, but there are
scads of free apps out there). I begin with the summary; I tell the student what I think the
paper's strengths are, and what I'd like them to focus on for the next draft or assignment.
Then I do a brief talk-through of the paper, not to point out every specific error or problem,
but to give my general feedback. Because audio files are sometimes too large to attach to an
email, I upload them to Dropbox and send a shareable link to the student.
What I've found is that, especially for large-scale projects and written work, audio
feedback cuts my grading time just about in half, without sacrificing the depth or quality of
feedback.
Those three strategies have transformed grading from something I've always dreaded into
something that I... well, "enjoy" is too strong a word. But I am now able to provide timely and
meaningful assessment without locking myself away for days at a time. Professors are
remarkably like our students in many ways, perhaps most obviously in how we sometimes
flail around trying to manage the end-of-the-semester crush. And just as our students don't do
their best work in all-night cram sessions, neither do we.
For those of you who share my ambivalence about the value of grading itself, there are ways
to turn it into a more meaningful, collaborative project: for example, Cathy Davidson's
Peer-to-Peer Assessment and Contract Grading models, and Linda B. Nilson's Specifications
Grading framework. But they take time to learn and adapt. And in the midst of a semester, we
aren't blessed with a lot of extra time or motivation to do that sort of long-term reflection and
rethinking. You might use the winter or summer breaks to carry out a broad overhaul of your
grading practices.
In the meantime, consider these three strategies the academic equivalent of a "get out of jail
free" card. The more we can ensure consistency and fairness, the less likely we are to be
beset with student complaints, and the better the chances of students actually putting our
feedback to use in their subsequent work, which is the whole point, right?
Kevin Gannon is a professor of history and director of the Center for Excellence in Teaching
and Learning (CETL) at Grand View University in Des Moines, Iowa.
How To Crowdsource Grading
By Cathy Davidson
on July 26, 2009
I loved returning to teaching last year after several years in administration . . . except for the
grading. I can't think of a more meaningless, superficial, cynical way to evaluate learning in
a class on new modes of digital thinking (including rethinking evaluation) than by assigning a
grade. Top-down grading by the prof turns learning (which should be a deep pleasure,
setting up for a lifetime of curiosity) into a crass competition: how do I snag the highest
grade for the least amount of work? how do I give the prof what she wants so I can get the A
that I need for med school? That's the opposite of learning and curiosity, the opposite of
everything I believe as a teacher, and is, quite frankly, a waste of my time and the students'
time. There has to be a better way . . .
So, this year, when I teach "This Is Your Brain on the Internet," I'm trying out a new point
system supplemented, first, by peer review and by my own constant commentary (written and
oral) on student progress, goals, ambitions, and contributions. Grading itself will be by
contract: Do all the work (and there is a lot of work), and you get an A. Don't need an
A? Don't have time to do all the work? No problem. You can aim for and earn a B. There
will be a chart. You do the assignment satisfactorily, you get the points. Add up the points,
there's your grade. Clearcut. No guesswork. No second-guessing 'what the prof wants.' No
gaming the system. Clearcut. Student is responsible.
But what determines meeting the standard required in this point system? What does it mean
to do work "satisfactorily"? And how to judge quality, you ask? Crowdsourcing. Since I
already have structured my seminar (it worked brilliantly last year) so that two students lead
us in every class, they can now also read all the class blogs (as they used to) and pass
judgment on whether the blogs posted by their fellow students are satisfactory. Thumbs up,
thumbs down. If not, any student who wishes can revise. If you revise, you get the
credit. End of story. Or, if you are too busy and want to skip it, no problem. It just means
you'll have fewer ticks on the chart and will probably get the lower grade. No whining. It's
clearcut and everyone knows the system from day one. (btw, every study of peer review
among students shows that students perform at a higher level, and with more care, when they
know they are being evaluated by their peers than when they know only the teacher and the
TA will be grading).
What this teaches my students is responsibility, credibility, judgment, honesty, and how to
offer good criticism to one's peers--and, in turn, how to receive it. The beauty comes in the
fact that those who judge one week are among those who are judged the next. Throughout
the course, "judgment" will be a main subject, as it should be in a course entitled "This Is
Your Brain on the Internet." Contributing to the whole of a group is another
skill this course will emphasize. How is cognition different in a customizing, process-
oriented, collaborative online public environment? How do we learn to contribute and
collaborate well in such an environment? Little in our formal education prepares us to be
responsible participants of the Internet. This course proposes an evaluation system that
matches the purpose of the course, one in which students learn how to be responsible judges
of quality and how to be responsive to feedback as well. I can't imagine better
skills to learn within the safe confines of a class, with a prof on hand to offer constructive
feedback (including to those giving feedback).
Here's the syllabus for the course. Skip to the bottom for the section on grading. I'm happy
for comments. And we (the students and I) will let you know how it works.
----
Subsequent addition (Aug 15): This post has garnered so much attention that I wrote a
longer follow-up which explains the history and theory of evaluation motivating this
experiment: http://www.hastac.org/blogs/cathy-davidson/crowdsourcing-grading-follow
-----------
ISIS 120S-01, English 173S-05: This is Your Brain on the Internet
Spring 2010 IMPS Space, Rm. 230, John Hope Franklin Center
DESCRIPTION:
This is Your Brain on the Internet is open to any student fascinated by how we come to know
the world and how we may or may not know the world differently in the Information Age.
Our quest in this course will be to explore many different, quirky, eccentric, and exceptional
models of mind in order to force ourselves to think, together, about what models best suit our
digital, interactive, collaborative age. Although we are in a great era of neuroscience and are
learning more and more about our mental processing, what we do not know about how our
brain works is infinitely more vast than what we know. Thus we make models to try to
explain ourselves to ourselves. Every era (and the present is no exception) and every culture
imagines its own models of mind. Such a model functions like a hypothesis in the scientific
method: it both shapes experiments and data collection, and experimental findings and the
data collected are used to test, refine, or (sometimes) refute it.
This class advances an argument: We are living in one of the most momentous times of
change in human history. We have changed. Now we need to name the paradigm that has
already shifted. This class will be testing that argument in myriad ways. We will be
thinking together about how we know the world, how we think, and how we think about
thinking as individuals, as groups, as a culture, as subcultures, in a historical moment, as
mediated by and through technology. The readings are intended as provocations. Some are
evocative, some controversial, all have strong points of view, all are polemical in the sense
that they advocate for models of mind, collaboration, interaction, and mediation. All are also
situated, in the sense that they do not look at cognition abstractly, divorced from social
concerns, but as deeply rooted in cultural arrangements. So another focus of the course will
be on new ways that humans interact with one another as friends, business partners, and
members of a global information community. How are collaborations different when they
are face-to-face than when virtual, mediated by technology? Our own classroom will move
back and forth between actual and virtual experiences, including observation of highly
complex collaborative environments (including choreography, improvisation, and other ways
of interacting with and without words), some of which involve technology and some of which
do not.
Readings: Readings will be chosen by our student leaders from among the following, but
other texts, films, experiences, websites, interactive projects, and so forth may be added
or dropped by student leaders as the course unfolds.
Jean-Dominique Bauby, The Diving Bell and the Butterfly
*Norman Doidge, The Brain That Changes Itself
Cathy N. Davidson and David Theo Goldberg, The Future of Learning Institutions in a
Digital Age [online; free download from MIT Press]
*Anna Everett, ed., Learning Race and Ethnicity: Youth and Digital Media (MacArthur
Foundation Digital Media and Learning Series) [selections online]
*Temple Grandin, Animals in Translation
*Christopher Kelty, Two Bits: The Cultural Significance of Free Software [selections
online]
*Daniel Levitin, This Is Your Brain on Music
Mark Haddon, The Curious Incident of the Dog in the Night-Time
*Jeff Hawkins, On Intelligence
*Tara McPherson, ed., Innovative Uses and Unexpected Outcomes (MacArthur
Foundation Digital Media and Learning Series) [selections online]
*Howard Rheingold, Smart Mobs
*Clay Shirky, Here Comes Everybody: The Power of Organizing Without Organizations
This course is student-driven: All classes will be led by pairs of students who will also give
us reading assignments (books, articles, websites, films) and writing/creating assignments
(setting out ways for us to interact with the material prior to or in class, as well as after it). The
student leaders for each session will also evaluate every other student's contributions, a
process that will continue throughout the class, on our class Wordpress site. Discussing the
role and purpose of evaluation to the learning process will be a key feature of "This Is Your
Brain on the Internet" and is part of making all of us more responsible citizens of the
interactive, customizing, crowdsourcing Information Age. One purpose of this course is for
all of us to become used to peer evaluation, peer response, peer collaboration and to use these
collective processes as aids toward our mutual learning goals. Students will be encouraged
to respond to the student leaders' comments and to discuss these responses in class.
There will be no exams and no formal, final research papers required in this class. Any
student who would like to write a final research paper can pitch an idea to the class. If
accepted, the student will be invited to write the paper. In all other cases, students will work
together on a final, collaborative multimedia online project that will be made available on a
public website, probably the HASTAC (www.hastac.org) or the ISIS site.
Final Project: Students may come into the class with a final project in mind and find
collaborative partners, or we can begin discussing ideas from the beginning. A class wiki
will be set up for this purpose.
Student Participation in the Set-Up of the Class: The burden of customizing our WordPress
site for the purposes of individual projects, of proposing other software possibilities or other
technologies, will be shared across the students in the class. We will have laptop-required
and laptop-free days. We will be blogging and twittering and facebooking to a group site.
(Policy warning: I do not friend current students on Facebook.) Students will receive some
grading points (see below) for contributions in this area. Students without tech skills will be
urged to contribute in other ways--creatively, critically, in performance, offering feedback,
and so forth. Collaboration by difference is a theme we will take seriously in the course.
Grading and Evaluation. After returning to teaching after several years as an administrator, I
found grading to be the most outmoded, inconsequential, and irrelevant feature of
teaching. Thus for ISIS 120, S 2010, all students will receive the grade of A if they do all the
work and their peers certify that they have done so in a satisfactory fashion. If you choose
not to do some of the assignments and receive a lower grade, that's permissible. You will be
given a chart at the beginning of the course with every assignment adding up to 100
points. A conventional system will be assigned (95-100 points = A-, etc.). We total the
scores at the end and you get the points you've achieved. If, on any one assignment, peers
rank the work unsatisfactory, you will either not be assigned any points for that assignment or
you can submit a revised assignment in response to the class critique. Revision and
resubmission results in full points. In other words, everyone who chooses to do the work to
the satisfaction of his or her collaborative peers in the course will receive an A, but no one is
required to do all of the work or to earn an A.
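The tally described above is simple arithmetic, so it can be sketched in a few lines. The letter-grade cutoffs below are hypothetical placeholders: the post specifies only the top bracket, and the chart students receive would define the rest.

```python
# Contract-grading tally: assignments judged satisfactory (or revised and
# resubmitted) earn their points; others earn nothing.
# Cutoffs are hypothetical, simplified from the "95-100 points = A-, etc." chart.

def total_points(assignments):
    """Sum points for assignments whose final status is satisfactory."""
    return sum(points for points, satisfactory in assignments if satisfactory)

def letter_grade(points):
    """Map a point total to a letter grade using placeholder cutoffs."""
    cutoffs = [(95, "A"), (85, "B"), (75, "C"), (65, "D")]
    for minimum, grade in cutoffs:
        if points >= minimum:
            return grade
    return "F"

# Example: three assignments; one was judged unsatisfactory and never revised.
work = [(40, True), (35, True), (25, False)]
print(total_points(work), letter_grade(total_points(work)))  # 75 C
```

The sketch captures the system's selling points: no guesswork (the mapping is fixed and public from day one) and full student responsibility (the grade is determined entirely by which assignments you choose to complete satisfactorily).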
In lieu of a final exam, students will write an evaluation of the class (in addition to the
university-required student evaluations). This will emphasize what you learned in the class
and what you feel you accomplished (with "accomplished" self-defined). I will offer feedback
on your self-assessment, amounting to an "evaluation" of your contribution to the experiences
of, in Toffler's phrase, "learning, unlearning, and relearning" that are central to "Your Brain
on the Internet."
Responding to Writing Assignments:
Managing the Paper Load
Writing can be a powerful learning tool. But as class sizes increase and the stacks of
unmarked writing assignments on our desks grow, we need to reconsider how we introduce
writing into our courses. One way to give students the learning benefits of writing without
burying ourselves in paper is to shift from mostly high-stakes writing assignments to more
low-stakes writing assignments. This involves a shift from writing that tends to be formal and
in depth (e.g., essays) to writing that is more informal, usually counts less toward the final
grade, and is generally easier and quicker to mark (e.g., journals, online discussion groups).
Consult the Centre for Teaching Excellence teaching tip Using Writing as a Learning Tool
for more information about these assignments.
Beyond being creative about the types of assignments we create, we need to find efficient
ways to respond to and assess students' writing. Below you will find two sets of strategies:
one to limit the number of assignments that you read, the other to structure how you respond
to and assess the ones that you do read. Consult the concluding section for guidance on how
to choose the strategies that are most appropriate for the assignments you have designed.
What is your purpose for the writing assignment? Because different types of writing
encourage the development of different skills (e.g., critical thinking, creativity,
clarity, elaboration), each type may require a different kind of response from you.
For example, if your purpose was to get students evaluating a theory, you need to
assess how well they did so; conversely, you don't need to comment extensively on
their grammar, spelling, or style. Be sure that the response you choose is closely
related to the assignment purpose.
Is the assignment high stakes or low stakes? Not all writing requires you to respond
with extensive narrative comments. In general, the lower the stakes of the
assignment, the less you need to respond to it. Elbow (1997) writes, "When we
assign a piece of writing and don't comment on it, we are not not-teaching: we are
actively setting up powerful conditions for learning by getting students to do
something they wouldn't do without the force of our teaching" (p. 11). Don't use
Elbow's comment to shirk responsibility, however. Weekly journals (low stakes) may
need only a simple check or minus or general feedback to the class, but a 15-page
essay (high stakes) will require much more, probably a rubric followed by some
specific narrative comments.
How will students use the response you give? There are times when students will not
benefit from extensive comments. For example, when you are marking final essays
submitted at the end of term, assume that most students are interested primarily in
their grade and perhaps the rationale for the grade; few students will actually pay
close attention to the comments you've written throughout the paper. Conversely,
assume that students will pay attention to and use the comments you write on a
draft version of an assignment, so give specific comments to help them improve for
the final submission. Regardless of how much you write, remember always to
balance negative and positive comments, so that students are challenged to improve
but not compelled to give up. And to avoid overwhelming them with a multitude of
negatives, choose two or three of the largest issues to highlight.
What do students want to know about their writing? This question can be difficult to
answer without asking students themselves. When students submit long or complex
pieces of writing that will require a significant level of response from you, consider
having them submit an informal cover letter along with their assignment. You might
ask them to give you a list of their main points, how they wrote the assignment,
which parts they're most and least satisfied with, and what questions they have for
you as a reader. These letters will help you to decide what to comment on.
Resources
Andrade, H.G. (2000). Using rubrics to promote thinking and learning. Educational
Leadership, 57(5).
Bean, J.C. (1996). Engaging Ideas: The Professor's Guide to Integrating Writing,
Critical Thinking, and Active Learning in the Classroom. San Francisco, CA: Jossey-
Bass Publishers.
Elbow, P. (1997). High stakes and low stakes in assigning and responding to writing.
In M.D. Sorcinelli and P. Elbow, eds. Writing to Learn: Strategies for Assigning and
Responding to Writing Across the Disciplines (pp. 5-13). San Francisco, CA: Jossey-
Bass Publishers.
Mertler, C.A. (2001). Designing scoring rubrics for your classroom. Practical
Assessment, Research & Evaluation, 7(25).
http://pareonline.net/getvn.asp?v=7&n=25
Montgomery, K. (2002). Authentic tasks and rubrics: Going beyond traditional
assessment in college teaching. College Teaching, 50 (1): 34-39.
Moskal, B. M. (2000). Scoring rubrics: what, when and how? Practical Assessment,
Research & Evaluation, 7(3). http://pareonline.net/getvn.asp?v=7&n=3
Williams, J.D. (1998). Preparing to Teach Writing: Research, Theory, and Practice,
2nd ed. Mahwah, NJ, and London: Lawrence Erlbaum Associates.
Wright, W.A., Herteis, E.M., and Abernethy, B. (2001). Learning Through Writing: A
Compendium of Assignments and Techniques, Rev. ed. Halifax: Office of Instructional
Development and Technology, Dalhousie University.
Appendix
Sample heuristic:
Does the writer respond to the assigned prompt with appropriate depth and focus?
Comments:
Does the paper have an apparent and easy-to-follow structure?
Comments:
Does the writer interpret key concepts correctly and provide his or her own
reasonable applications as evidence?
Comments:
Does the writer use sentences that are well-formed and appropriately varied in
length and style?
Comments:
Is the paper generally free of spelling, typographical, and grammatical errors?
Comments:
Readers general comments:
A (Excellent) Responds to prompt with appropriate depth and focus. Clear introduction,
smooth transitions between topics, and thoughtful conclusion. Concepts correctly interpreted;
own applications given for each concept discussed; applications are reasonable. Sentences
well-formed and appropriately varied in length and style. Few if any spelling or grammatical
errors
B (Very good) Appropriate focus, although could be in more depth. Introduction, transitions,
and conclusion present, but could be clearer or smoother. Concepts correctly interpreted; own
applications given but may be unreasonable. Most sentences well-formed, with occasional
awkwardness. Some spelling and grammatical errors, but paper is still understandable
C (Fair) Some attempt to focus. Evident which topics are being discussed, but no
introduction, conclusion, or transitions. Some concepts interpreted incorrectly; few
applications given or applications are ill-explained. Some sentences poorly constructed but
generally understandable. Some spelling and grammatical errors, making paper difficult to
understand in places
D (Poor) Not at all focused and/or very superficial; may not follow prompt given. Unclear
which topics are being discussed and when; transitions non-existent. Most concepts
interpreted incorrectly; no applications given. Many sentences poorly constructed,
incomplete, and/or awkward. Many spelling and grammatical errors, which present
significant barrier to understanding
This Creative Commons license lets others remix, tweak, and build upon our
work non-commercially, as long as they credit us and indicate if changes were made. Use this
citation format: Responding to writing assignments: managing the paper load. Centre for
Teaching Excellence, University of Waterloo.