Continuing my thoughts from the other day about what makes a good course good, but not great, I looked again at Coursera’s website, where they list several questions they’re trying to tackle to take their courses to the next level. So another way to address the issue of good versus great would be through the lens of each of these “challenges.” Let me lay out some preliminary thoughts about each of these questions, and return to each more thoroughly in later posts. I also think the order in which they were posed isn’t really the best way to approach these issues.
How do we design rigorous assessments that can be taken online?
This seems to be the question lots of people are trying to work out, particularly as MOOCs and open-course platforms try to pivot to address the issue of credentialing. That is, assessment when a course is free and students only seek formative feedback for their own improvement seems less fraught (at least in the minds of many people) than designing a system that assesses learning as part of a process of validating that a particular student has mastered certain material in order to receive a certificate or badge of completion (summative assessment). I say “in the minds of many people,” though, because I’m not sure it has to be that different. Indeed, I think many administrators become so obsessed with “security” and “integrity” when it comes to assessment that they undermine the potential value of the assessment itself. Which is not to say that ensuring that people don’t cheat isn’t important. But if paranoia over the identity of the user is what drives the design of the assessment, I think it can lead to very bad things.
It reminds me a little of the conversation I had last year with the chair of the English program at one of the Chicago City Colleges about using portfolio-based assessment in her composition classes, with a self-reflective cover letter as the means of having students demonstrate that they had met the learning outcomes of various courses. She was so worried about “authenticity” (and by that she really meant “integrity,” in the sense of verifying that students had produced the letters without the help of tutors) that she resisted the model, even though she recognized it was a better way to get at the kinds of things they really wanted to stress in their writing courses.
The problem here is that she was still so focused on evaluating the grammar and expression of the cover letter that she treated the letter itself as the assessment. What she didn’t get was that a well-designed assessment, one that is authentic in the sense that it asks students to demonstrate their knowledge by addressing a real problem or set of issues through tasks supported by the design of the class, is hard to fake. (And that reminds me of the line from Annie Hall: “I was thrown out of college for cheating on the metaphysics exam; I looked into the soul of the boy sitting next to me.”)
I’ll blog more about my experience with assessment in “Critical Thinking in Global Challenges” a bit later (it wasn’t great), but again, I think the biggest issue has to do with whether assessment is understood as a mechanism for evaluating what students have retained (the old transmission model of learning), or as an opportunity for students to show what they can do (and the process through which they figured it out).
I think folks in writing studies get this in ways that other fields just haven’t yet, but even so, the number of people I meet in composition who still focus on products (like averaging paper grades to determine whether students have actually learned something over the course of the semester!) is pretty astounding.