In parts 1, 2, 3, and 4 of this series, I laid out a model for a learning platform that is designed to support discussion-centric courses. I emphasized how learning design and platform design have to co-evolve, which means, in part, that a new platform isn’t going to change much if it is not accompanied by pedagogy that fits well with the strengths and limitations of the platform. I also argued that we won’t see widespread changes in pedagogy until we can change faculty relationships with pedagogy (and course ownership), and I proposed a combination of platform, course design, and professional development that might begin to chip away at that problem. All of these ideas are based heavily on lessons learned from social software and from cMOOCs.
In this final post in the series, I’m going to give a few examples of how this model could be extended to other assessment types and related pedagogical approaches, and then I’ll finish up by talking about what it would take for the peer grading system described in part 2 to be accepted by students (potentially) as at least a component of the grading system in a for-credit class.
Competency-Based Education
I started out the series talking about Habitable Worlds, a course out of ASU that I’ve written about before and that we feature in the forthcoming e-Literate TV series on personalized learning. It’s an interesting hybrid design. It has strong elements of competency-based education (CBE) and mastery learning, but the core of it is problem-based learning (PBL). The competency elements are really just building blocks that students need in the service of solving the big problem of the course. Here’s course co-designer and teacher Ariel Anbar talking about the motivation behind the course:
It’s clear that the students are focused on the overarching problem rather than the competencies:
And, as I pointed out in the first post in the series, they end up using the discussion board for the course very much like professionals might use a work-related online community of practice to help them work through their problems when they get stuck:
This is exactly the kind of behavior that we want to see, and the kind that the analytics I described in part 3 are designed to measure. You could attach a grade to students’ online discussion behaviors, but it would really be superfluous. Students get their grade from solving the problem of the course. That said, it would be helpful to students if productive behaviors were highlighted by the system in order to make them easier to learn. And by “learn,” I don’t mean “here are the 11 discussion competencies that you need to display.” I mean, rather, that there are different patterns of productive behavior in a high-functioning group. It would be good for students to see not only the atomic behaviors but also the different patterns, and even how different patterns complement each other within a group. Furthermore, I could imagine that some employers might be interested in knowing the collaboration style that a potential employee would bring to the mix. This would be a good fit for badges.

Notice that, in this model, badges, competencies, and course grades serve distinct purposes. They are not interchangeable. Competencies and badges are closer to each other than either is to a grade. They both indicate that the student has mastered some skill or knowledge that is necessary to solving the central problem. But they are different from each other in ways that I haven’t entirely teased out in my own head yet. And they are not sufficient for a good course grade. To get that, the student must integrate and apply them toward generating a novel solution to a complex problem.
The one aspect of Habitable Worlds that might not fit with the model I’ve outlined in this series is the degree to which it has a mandatory sequence. I don’t know the course well enough to have a clear sense, but I suspect that the lessons are pretty tightly scripted, due in part to the fact that the overarching structure of the course is based on an equation. You can’t really drop one of the variables or change their order willy-nilly in an equation. There’s nothing wrong with that in and of itself, but in order to take full advantage of the system I’ve proposed here, the course design must have a certain amount of play in it so that faculty teaching their individual classes can contribute additions and modifications back. It’s possible to use the discussion analytics elements without the social learning design elements, but then you don’t get the potential “lift” in faculty buy-in that the system offers.
Adding Assignment Types
I’ve written this entire series talking about “discussion-based courses” as if that were a thing, but it’s vastly more common to have courses that combine discussion and writing. One interesting consequence of the work that we did abstracting out the Discourse trust levels is that we created a basic (and somewhat unconventional) generalized peer review system in the process. As long as conversation is the metric, we can measure the conversational activity generated by any student-created artifact. For example, we could create a facility in OAE for students to claim the RSS feeds from their blogs. Remember, any integration represents a potential opportunity to make additional inferences. Once a post is syndicated into the system and associated with the student, it can generate a Discourse thread just like any other document, and that discussion can be included in the trust analytics. With a little more work, you could have students apply direct ratings such as “likes” to the documents themselves. Making the assessment work for these different types isn’t quite as straightforward as I’m making it sound, either from a user experience design perspective or from a technology perspective, but the foundation is there to build on.
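To make the syndication idea a little more concrete, here is a minimal sketch of what the plumbing might look like: pull a student’s claimed RSS feed and open a discussion topic for each post so that the conversation it generates flows into the same trust analytics. This is illustrative only; the OAE “claim your feed” facility is hypothetical, and the endpoint, header, and field names for the Discourse call are assumptions rather than a tested integration.

```python
# Sketch: syndicate a student's blog posts into discussion topics.
# Assumes a hypothetical Discourse instance and API credentials.
import feedparser
import requests

DISCOURSE_URL = "https://discuss.example.edu"   # hypothetical forum instance
API_HEADERS = {
    "Api-Key": "PLACEHOLDER",   # system-level API key (placeholder)
    "Api-Username": "system",   # account that opens the syndicated topics
}

def syndicate_student_blog(student_id: str, feed_url: str) -> list:
    """Create one discussion topic per blog post; return the new topic ids."""
    topic_ids = []
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        # Each syndicated post becomes a document the class can discuss.
        # Tagging the title with the student id lets later analytics
        # credit the author for the conversation the post generates.
        payload = {
            "title": f"[{student_id}] {entry.title}",
            "raw": f"Syndicated from {entry.link}\n\n{entry.get('summary', '')}",
            "category": 5,  # hypothetical "student blogs" category id
        }
        resp = requests.post(
            f"{DISCOURSE_URL}/posts.json", headers=API_HEADERS, json=payload
        )
        resp.raise_for_status()
        topic_ids.append(resp.json().get("topic_id"))
    return topic_ids
```

The design point is simply that the syndicated artifact and its author are linked at the moment of import, so every reply, rating, or trust-level signal downstream can be attributed without any extra bookkeeping.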
One of the commenters on part 1 of the series provided another interesting use case:
I’m the product manager for Wiki Education Foundation, a nonprofit that helps professors run Wikipedia assignments, in which the students write Wikipedia articles in place of traditional term papers. We’re building a system for managing these assignments, from building a week-by-week assignment plan that follows best practices, to keeping track of student activity on Wikipedia, to pulling in view data for the articles students work on, to finding automated ways of helping students work through or avoid the typical stumbling blocks for new Wikipedia editors.
Wikipedia is its own rich medium for conversation and interaction. I could imagine taking that abstracted peer review system and just hooking it up directly to student activity within Wikipedia itself. Once we start down this path, we really need to start talking about IMS Caliper and federated analytics. This has been a real bottom-up analysis, but we quickly reach the point where we want to start abstracting out the particular systems or even system types, and start looking at a general architecture for sharing learning data (safely). I’m not going to elaborate on it here—even I have to stop at some point—but again, if you made it this far, you might find it useful to go back and reread my original post on the IMS Caliper draft standard and the comments I made on its federated nature in my most recent walled garden post. Much of what I have proposed here from an architectural perspective is designed specifically with a Caliper implementation in mind.
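For readers who want a feel for what “federated” means in practice, here is a rough sketch of emitting discussion activity as a Caliper-style event. The general shape (JSON-LD context, actor, action, object, eventTime) follows the Caliper pattern, but the specific values, the simplified envelope, and the collector endpoint are assumptions for illustration, not a conformant implementation.

```python
# Sketch: emit a Caliper-style "student posted a reply" event to a
# hypothetical institution-run collector.
import uuid
from datetime import datetime, timezone
import requests

COLLECTOR_URL = "https://analytics.example.edu/caliper"  # hypothetical endpoint

def emit_reply_event(student_uri: str, thread_uri: str) -> None:
    event = {
        "@context": "http://purl.imsglobal.org/ctx/caliper/v1p1",
        "id": f"urn:uuid:{uuid.uuid4()}",
        "type": "MessageEvent",
        "actor": {"id": student_uri, "type": "Person"},
        "action": "Posted",
        "object": {"id": thread_uri, "type": "Message"},
        "eventTime": datetime.now(timezone.utc).isoformat(),
    }
    # In a federated setup, each tool (Discourse, OAE, a Wikipedia bridge)
    # would send its events to a shared collector the institution controls.
    # Real Caliper envelopes carry more metadata; this is pared down.
    requests.post(COLLECTOR_URL, json={"data": [event]}, timeout=10)
```

The value of a common event format is exactly the abstraction I am gesturing at: the trust analytics don’t need to know whether the conversation happened in Discourse, in OAE, or on a Wikipedia talk page.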
Formal Grading
I suppose my favorite model so far for incorporating the discussion trust system into a graded, for-credit class is the one I described above, where the analytics act more as a coach that helps students learn productive discussion behavior, while the class grade actually comes from their solution to the central problem, project, or riddle of the course. But if we wanted to integrate the trust analytics as part of the formal grading system, we’d have to get over the “Wikipedia objection,” meaning the belief that vetting by a single expert somehow generates accurate results more reliably than crowdsourcing does. Some students will want grades from their teachers and will tend to think that the trust levels are bogus as a grade. (Some teachers will agree.) To address their concerns, we need three things. First, we need objectivity, by which I mean that the scoring criteria are applied the same way to everyone. “Objectivity” is often about as real in student evaluation as it is in journalism (which is to say, it isn’t), but people do want some sense of fairness, which is probably a better goal. Clear rating criteria applied to everyone equally give some sense of fairness. Second, the trust scores themselves must be transparent, by which I mean that students should be able to see how they earned their trust scores. They should also be able to see various paths to improving their scores. And finally, there should be auditability, by which I mean that, in the event a student is given a score by her peers that her teacher genuinely disagrees with (e.g., a group ganging up to give one student thumbs-downs, or a lot of conversation being generated around something that is essentially not helpful to the problem-solving effort), the faculty member has the ability to override that score. This last piece can be a rabbit hole, both in terms of user interface design and in terms of eroding the very sense of a trust network that you’re trying to build, but it is probably necessary to get buy-in. The best thing to do is to pilot the trust system (and the course design that is supposed to inspire ranking-worthy conversation) and refine it to the point where it inspires a high degree of confidence before you start using it for formal grading.
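As a thought experiment, here is a minimal sketch of how those three requirements could show up in the data model: the same criteria applied to everyone (objectivity), a visible breakdown of how the score was earned (transparency), and a logged instructor override that never erases the peer-generated score (auditability). All of the field names are hypothetical; this is a sketch of the idea, not a spec.

```python
# Sketch: a trust score record that supports transparency and auditability.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class TrustScore:
    student_id: str
    # Transparency: the per-criterion breakdown students can inspect,
    # e.g. {"replies_marked_helpful": 12.0, "threads_started": 3.0}.
    components: Dict[str, float] = field(default_factory=dict)
    # Auditability: an instructor override is recorded alongside its reason
    # rather than overwriting the peer-generated components.
    override: Optional[float] = None
    override_reason: Optional[str] = None

    @property
    def value(self) -> float:
        """The score used for grading: the override wins if present."""
        if self.override is not None:
            return self.override
        return sum(self.components.values())
```

Keeping the override as a separate, reasoned entry (rather than silently editing the score) is what makes the system auditable without undermining the peer signal it is built on.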
That’s All
No, really. Even I run out of gas. Eventually.
For a while.
Shaomeng Zhang says
Hi Michael, thanks for sharing your thought experiment. I also had my own thought experiment once about an LMS; heck, I even implemented part of the backend for that LMS. Instead of focusing on encouraging faculty sharing of content and collaboration and on socially authentic metrics, my focus was on the affordances, and specifically the social affordances, of the LMS.
I used to teach with Canvas and was not very satisfied with it, hence my thought experiment. It never went very far, but you just inspired me to publish my draft from about a year ago, for anyone who has the same itch: https://medium.com/learning-technologies/design-of-instructspace-d44c31f78b27 Thanks!