This week’s BlendKit chapter covered how to assess student
learning. Ah, assessment. How I love that word. My introduction to classroom assessment happened not long after I was hired at The School That Shall Not Be Named, when my Chair handed me a CD-ROM labeled “Ass. Docs.”
Yep. Thankfully, it was not a CD of strange MLA porn.
I’ve learned a few things since then. Mostly that when someone hands you a CD-ROM labeled “Ass. Docs.,” you should expect that person not to be around much longer. And he wasn’t.
Then came his esteemed successor, Shriveled Spider, who had our division juke the stats for assessment. Ass. Docs. indeed.
At my new institution, assessment is taken a bit more
seriously. They have standardized tests in place that gauge student learning.
Plug the assessment into your course, and voilà!
Still, these assessments are typically multiple-choice exams
(MCEs). So I stopped and pondered when I came across the following passage in my
chapter reading this week:
“Despite the importance of real life application of
knowledge and skills, perhaps the most common type of assessment is still the
traditional multiple choice exam. Placing such tests (or non-graded
self-assessment versions) online is one of the most popular approaches to
blended assessment of learning.”
Why is this the case? If educators (and presumably the administrators who used to be educators) realize the MCE runs a distant second to assessing students’ application of knowledge and skills, why not chuck it and create and implement assessments that apply directly to what students will be doing in “real life”?
I loathe multiple-choice tests. In college I was diagnosed with test anxiety and had to develop rituals and strategies for completing MCEs. It was a rare occasion when I didn’t have a complete panic attack during one. So I try not to have any MCEs in my
classes. However, in a writing course I just designed and built on our online platform (to be used by all faculty at my College), I had to implement two MCEs. One is to assess SLOs for SACS (so, really, there’s no way around that one; the data has to be solid, unchanging, and not left open to interpretation). The other is to test for grammar, which to me is
ridiculous. Grammar is not learned through rote instruction; it’s learned through trial and error. Then we forget or break the rules. Like misplaced
modifiers. I’m reading a NYT bestseller right now, and on page 24, right there in broad daylight, living without shame, is an incredibly bad misplaced modifier. The author, the agent, the editor, the publisher: they all either missed it or didn’t care that it was there. So why, oh why, should I test my students on
misplaced modifiers? Who cares if they write, “I saw a sculpture next to the
man with a moustache made of ice.” I know the moustache wasn’t made of ice. We’re
not on Pluto. Ice moustaches don’t exist. I’m just impressed they can spell
moustache.
So why did I include the test? Simple. I’m lazy. There are ten assessments in the class: the two MCEs and eight writing assignments. Those eight assignments involve several steps and workshops before the final draft is submitted (drop boxes, peer edits, classroom presentations). I will spend most of
my semester grading the writing assignments using a detailed rubric, giving grammar instruction through trial and error, and writing end notes that say things like, “If you can’t quit using the second-person pronoun, you will fail the course.” I need a wee break, even if it comes in the form of a self-grading MCE.
And I’m not ashamed to admit it.
So I was very happy when the chapter reading directed me to the
webpage by Bobby Hoffman and Denise Lowe. I am going to keep this little
page in my back pocket as I create new, more effective MCEs.
And I’m also going to go polish my ice moustache.
And then I’m going to take my Ass Docs for a little walk.