In the first post of this series, I identified four design goals for a learning platform that would be well suited for discussion-based courses:
- Kill the grade book in order to move faculty away from concocting arcane and artificial grading schemes and toward direct measures of student progress.
- Use scale appropriately in order to gain pedagogical and cost/access benefits while still preserving the value of the local cohort guided by an expert faculty member, as well as to propagate exemplary course designs and pedagogical practices more quickly.
- Assess authentically in order to give credit for the higher-order competencies that students display in authentic problem-solving conversations.
- Leverage the socially constructed nature of expertise (and therefore competence) in order to develop new assessment measures based on the students’ abilities to join, facilitate, and get the full benefits from trust networks.
I also argued that platform design and learning design are intertwined. One implication of this is that there is no platform that will magically make education dramatically better if it works against the grain of the teaching practices in which it is embedded. The two need to co-evolve.
This last bit is an exceedingly tough nut to crack. If we were to design a great platform for conversation-based courses but it got adopted for typical lecture/test courses, the odds are that faculty would judge the platform to be “bad.” And indeed it would be, for them, because it wouldn’t have been designed to meet their particular teaching needs. At the same time, one of our goals is to use the platform to propagate exemplary pedagogical practices. We have a chicken-and-egg problem.

On top of that, our goals suggest assessment solutions that differ radically from traditional ones, but we only have a vague idea so far of what they will be or how they will work. We don’t know what it will take to get them to the point where faculty and students generally agree that they are “fair,” and that they measure something meaningful. This is not a problem we can afford to take lightly.

And finally, while one of our goals is to get teachers to share exemplary designs and practices, we will have to overcome significant cultural inhibitions to make this happen. Sometimes systems do improve sharing behavior simply by making sharing trivially easy (we see that with social platforms like Twitter and Facebook, for example), but it is not at all clear that just making it easy to share will improve the kind of sharing we want to encourage among faculty. We need to experiment in order to find out what it takes to help faculty become comfortable or even enthusiastic about sharing their course designs. Any one of these challenges could kill the platform if we fail to take them seriously.
When faced with a hard problem, it’s a good idea to find a simpler one you can solve that will get you partway to your goal. That’s what the use case I’m about to describe is designed to do. The first iteration of any truly new system should be designed as an experiment that can test hypotheses and assumptions. And the first rule of experimental design is to control the variables.
Of the three challenges I just articulated, the easiest one to get around is the assessment trust issue. The right use case should be an open, not-for-credit, not-for-certification course. There will be assessments, but the assessments don’t count. We would therefore be creating a situation somewhat like a beta test of a game. Participants would understand that the points system is still being worked out, and part of the fun of participation is seeing how it works and offering suggestions for improvement. The way to solve the problem of potential mismatches between platform and content is to test the initial release of the platform with content that was designed for it. As for the third problem, we need to pick a domain that is far enough away from the content and designs that faculty feel are “theirs” that the inhibitions regarding sharing are lower.
All of these design elements point toward piloting the platform with a faculty professional development cMOOC. Faculty can experience the platform as students in a low-stakes environment. And I find that even faculty who are resistant to talking about pedagogy in their traditional classes tend to be more open-minded when technology enters the picture, because it’s not an area where they feel they are expected to be experts.

But it can’t be a traditional cMOOC (if that isn’t an oxymoron). We want to model the distributed flip, where there are facilitators of local cohorts in addition to the large-group participation. This suggests a kind of “reading group” or “study group” structure. The body of material for the MOOC is essentially a library of content. Each campus-based group chooses to go through the content in their own way. They may cover all of it or skip some of it. They may add their own content. Each group will have its own space to organize its activities, but this space will be open to other groups. There will be discussions open to everyone, but groups and individual members can participate in those or not, as they choose. Presumably each group would have at least a nominal leader who would take responsibility for organizing the content and activities for the local cohort. This would typically be somebody like the head of a Center for Educational Technology, but it could also be an interested faculty member, or the group could organize its activities by consensus.
To make the use case more concrete, let’s assume that the curriculum will revolve around the forthcoming e-Literate TV series on personalized learning. This is something that I would ideally like to do in the real world, but it also has the right characteristics for the current thought experiment. The heart of the series is five case studies of schools trying different personalized learning approaches:
- Middlebury College, an elite New England liberal arts school in rural Vermont
- Essex County College, a community college in Newark, NJ
- Empire State College, a SUNY school that focuses on non-traditional students and has a substantial distance learning program
- Arizona State University, a large public university with a largely top-down approach to implementing personalized learning
- A large public university with a largely bottom-up approach to implementing personalized learning
These thirty-minute case studies, plus the wrapper content that Phil and I are putting together (including a recorded session at the last ELI conference), cover a number of cross-cutting issues. Here are a few:
- What does “personalized” really mean? When (and how) does technology personalize, and when does it depersonalize?
- How does the idea of “personalized” change based on the needs of different kinds of students in different kinds of institutions?
- How do personalized learning technologies, implemented thoughtfully in these different contexts, change the roles of the teacher, the TA, and the students?
- What kinds of pedagogy seem to work best with self-paced products that are labeled as providing personalized learning?
- What’s hard about using these technologies effectively, and what are the risks?
That’s the content and the context. Since we’re going for something like a problem-based learning (PBL) design, the central problem that each cohort would need to tackle is, “What, if anything, should we be doing with personalized learning tools and pedagogical approaches in our school?” This question can be tackled in a lot of different ways, depending on the local culture. If it is taken seriously, there are likely to be internal discussions about politics, budgets, implementation issues, and so on. Cohorts might also be very interested to have conversations with other cohorts from peer schools to see what they are thinking and what their experiences have been. Not only that, they may also be interested in how their peers are organizing their campus conversations about personalized learning. This is the equivalent of sharing course designs in this model. And of course, there will hopefully also be very productive conversations across all cohorts, pooling expertise, experience, and insight.

This sort of community “sharding” is consistent with the cMOOC design thinking that has come before. We’re simply putting some energy into both learning design and platform design to make that approach work with a facilitation structure that is closer to a traditional classroom setting. We’re grafting a cMOOC-like course design onto a distributed flip facilitation structure in the hopes of coming up with something that still feels like a traditional class in some ways but brings in the benefits of a global conversation (among teachers as well as students).
The primary goal of such a “course” wouldn’t be to certify knowledge or even to impart knowledge but rather to help participants build their intra- and inter-campus expertise networks on personalized learning, so that educators could learn from each other more and re-invent the wheel less. But doing so would entail raising the baseline level of knowledge of the participants (like a course) and could support the design goals. The e-Literate TV series provides us with a concrete example to work with, but any cross-cutting issue or change that academia is grappling with would work as a use case for attacking our design goals in an environment that is lower-risk than for-credit classes. The learning platform necessary to make such a course work would need to support the multi-layered conversations and provide analytics tools to help identify both the best posts and the community experts.
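To make that analytics requirement slightly more concrete, here is a minimal sketch of what “identify the community experts” could mean computationally. Everything in it is assumed for illustration: the post records, the endorsement counts, and the scoring rules are hypothetical stand-ins, not a description of the platform I’ll propose in the coming posts. The underlying idea is simply that replies form a graph, and a PageRank-style recurrence over that graph tends to surface participants whose posts draw engagement from other well-regarded participants.

```python
from collections import defaultdict

# Hypothetical discussion data: (post_id, author, parent_post_id, endorsements).
# These records and field choices are illustrative stand-ins, not an actual
# platform's data model.
posts = [
    ("p1", "alice", None, 4),
    ("p2", "bob", "p1", 1),
    ("p3", "carol", "p1", 6),
    ("p4", "alice", "p3", 2),
    ("p5", "dave", "p3", 0),
]

def expertise_scores(posts, damping=0.85, iters=50):
    """Score participants by how often their posts attract replies, weighting
    replies from well-regarded participants more heavily (a PageRank-style
    recurrence over the who-replied-to-whom graph)."""
    author_of = {pid: author for pid, author, _, _ in posts}
    out_links = defaultdict(set)  # replier -> set of people they replied to
    for pid, author, parent, _ in posts:
        if parent is not None and author_of[parent] != author:
            out_links[author].add(author_of[parent])
    people = sorted({author for _, author, _, _ in posts})
    score = {p: 1.0 / len(people) for p in people}
    for _ in range(iters):
        nxt = {p: (1.0 - damping) / len(people) for p in people}
        # Participants who replied to no one spread their weight uniformly.
        dangling = sum(score[p] for p in people if p not in out_links)
        for p in people:
            nxt[p] += damping * dangling / len(people)
        for src, targets in out_links.items():
            share = damping * score[src] / len(targets)
            for dst in targets:
                nxt[dst] += share
        score = nxt
    return score

def best_posts(posts):
    """Rank posts by a simple blend of endorsements and replies received;
    the 2x reply weight is an arbitrary illustrative choice."""
    reply_count = defaultdict(int)
    for _, _, parent, _ in posts:
        if parent is not None:
            reply_count[parent] += 1
    return sorted(posts, key=lambda p: p[3] + 2 * reply_count[p[0]], reverse=True)

if __name__ == "__main__":
    for person, s in sorted(expertise_scores(posts).items(), key=lambda kv: -kv[1]):
        print(f"{person}: {s:.3f}")
    print("top posts:", [pid for pid, *_ in best_posts(posts)])
```

In a real deployment the inputs would come from whatever the discussion platform records, and the weights would need tuning against what faculty actually recognize as expertise; the point here is only that the raw material for such analytics is already present in conversational data.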
In the next two posts, I will lay out the basic design of the system I have in mind. Then, in the final post of the series, I will discuss ways of extending the model to make it more directly suitable for traditional for-credit class usage.
tjhunt says
How well do you know the history of Moodle?
What you are describing does not seem that fundamentally different to me from what Martin Dougiamas tried 15 years ago. The first release of Moodle in 2003 had forums, but no quizzes. It was built around social constructivist principles (https://docs.moodle.org/28/en/Pedagogy – the contents of that page have not changed fundamentally in the 10 years I have been working on Moodle). It seems version 1.0 did have a gradebook, but it was pretty primitive. (I actually thought it didn’t until I dug into the history while writing this comment.)
Of course, what happened next is as described in your “Dammit, the LMS” post: people got what they asked for. The quiz was added in Moodle 1.1, the first feature that someone paid Martin to develop. Then came lots of features for managing content and more features for different sorts of grading.
It’s not that your ideas don’t have merit. They do. I just don’t see anything much different about the world today that suggests things would go any differently, though it would be nice if they did.
I also don’t really see the need to build a whole new LMS. You could use a Moodle course with forums. Hide all the grading stuff completely. Add a bit of custom code to get the analytics you want. Of course, that is just one way you could build it.
Anyway, I look forward to the rest of this series.
John Norman says
Like Tim, I am fascinated by where this might go. I am also a bit uneasy. I worry that you are focusing too quickly on the ‘platform design’ as though that were the most important problem.
In some ways, it seems to me that your learning goals could be supported by this blog as a platform. It is easy to use and you are already working up a stimulus-response set of conversations. Tracking and assessment may be a difficulty, but I thought you were setting ‘grading’ aside. The biggest difficulty with using /this/ blog might be the readership. How many faculty follow you, and what are the chances that you will get replies from Faculty vs. proxies for Faculty like Tim and myself – with plenty of experience and opinions, but not actively engaged in teaching? Which in turn makes me think – if not /this blog/, then what?
Thus my thoughts are: if you want this conversation to happen online in a distributed way, there will already be challenges, because not all Faculty will be accustomed to this mode of discourse, and we should certainly try to find a forum for the conversation (platform) with which Faculty are comfortable (including its openness or closedness). The challenge is: what would it take to get Faculty to engage with the Goals and the approach? Are we really so short of options that we need to create something new? Perhaps you already have the answers to some of these questions, but a platform (however good) is not sufficient to make a useful conversation happen.
What if your experimental design were to define the goals, and then run a series of attempts to achieve those goals with different existing platforms and observe/report on the ways in which the technology fails to help with the goals? I’m fairly confident that our students try to solve a lot of similar problems with Facebook groups. Not because it is the best ‘platform’ but because it is the most familiar and, as such, least gets in the way of the conversation…
Luke Fernandez says
“I’ve emphasized the software design aspects in the title of the post, mainly because people will seemingly read anything we write about LMSs, particularly if it involves the prospect of killing them….”
I guess if one wants to drive up readership it makes sense to proclaim the death of the LMS in a blog post title. But hopefully that eschatological trope is intended just as a publicity thing to drive readership rather than as a way of recruiting academics to a cause. The trope smacks too much of disruptive innovation and of what David Noble once called the “religion of technology.” That might attract technological zealots but it isn’t a constructive way to reach out to those who gravitate to a more deliberative and conservative form of technological change in the university. I’ll submit that a good portion of academics fall into this latter category. If we want to gain their support we probably want to avoid the eschatological trope altogether.
Michael Feldstein says
Luke, you’ll note that the title refers to “a post-LMS” but not to “killing” the LMS. My point was really more that people like to talk about systems more than they like to talk about the systemic. I think I’ve been pretty careful here.
Luke Fernandez says
Agreed. Your title doesn’t technically proclaim the death of anything. But “post” connotes (at least for me) a fairly radical break with the past. Is that what the post-LMS will really be? Or will it be a refinement built on top of the labors of those who have come before?
Michael Feldstein says
Luke, I’ll let you judge yourself at the end of the series. Personally, I think what I’m proposing is a pretty significant break with the past.
Michael Feldstein says
Tim, I do know a bit about the history of Moodle. I entirely agree that there is a kind of “stone soup” phenomenon that happens with simple learning platforms. “Oh, this is great! I love its simplicity. If only it had a grade book, it would be something I could use.” As you point out by referring to the “Dammit, the LMS” post, it’s a cultural problem. Just creating a new platform won’t solve it. One of the reasons why I chose a use case of faculty professional development is to give faculty a new pedagogical experience, which may inspire them to try it themselves in their own classes. That’s (part of) the way out of the cultural trap. I’ve emphasized the software design aspects in the title of the post, mainly because people will seemingly read anything we write about LMSs, particularly if it involves the prospect of killing them, but this is as much a thought experiment in cultural engineering as it is in software engineering.

Also, another way out of the “stone soup” trap is to vow never to build another general-purpose LMS. If you want one of those, there are a number of mature platforms already. But they don’t do a particularly good job of CBE. And they don’t do a particularly good job with conversation-based classes. We don’t need another LMS, but we do need better tools built for specific pedagogical purposes. There is no need to create another Swiss Army knife, particularly when all you need at the moment is a better Phillips-head screwdriver.

The stone soup problem hits because the people building the platforms have outsized adoption targets (usually linked to outsized revenue targets). Martin Dougiamas runs a company. He wanted that company to grow and thrive. That entailed making compromises on his platform to drive adoption. At the time, you could argue that those compromises were reasonable trade-offs given that the LMS market was still immature and in need of more and better options. I don’t think the same argument would hold water today. What you need is a platform developer with the conviction to say “no” to feature requests and a sustainability model that allows it. That’s a hard problem too, because of the economics of education, but it’s an orthogonal problem to the design discussion at hand.
I also think that we need to adjust our notion of what it means to “build” or even to be a “platform” these days. Tim, what I’m going to propose in the next post of the series is essentially your idea to take an existing discussion board and add some analytics. The details matter, but at a high level, one thing I will not be proposing is to build another discussion board. The technical part of this series is about creating a “platform” by tying together a couple of existing pieces. John, if you’ll look in your email, you’ll see a message I sent earlier asking for some information that will give you some inkling of where I will be taking the series in the post after next.
I will say this: It is easy to become a little jaded after participating in the building of one of these platforms, only to see the same old things happen again. Normally, good software design means implementing features that help users do what they want to do. That’s hard enough. Even harder is building features that will change users’ incentives, giving them a reason to want to do something they didn’t previously have a desire to do (or had only a latent desire to do), like, say, sharing the boring details of their daily lives with their 357 “friends,” or writing restaurant reviews (complete with photos!) for strangers, or sharing their course designs and materials. Again, I think we have evidence that platform design isn’t sufficient to drive the change we want to see in faculty behavior. There has to be a cultural component. But I do believe that platform design is probably necessary or, at the very least, extremely influential. Whenever you’re talking about mass behavior changes, it’s usually not a binary thing. “Oh, if only we could show them X or do Y, everybody will do this thing we want them to do.” Much of the time, it’s about a lot of little changes and seemingly small features that lower the inhibitions and raise the rewards for the behavior we want. That is one of the main lessons I take away from social software design.
Anyway, I hope that by the end of the series I will have persuaded both of you that the path forward I am outlining avoids the worst of your respective concerns.