- Kill the grade book in order to move faculty away from concocting arcane and artificial grading schemes and toward direct measures of student progress.
- Use scale appropriately in order to gain pedagogical and cost/access benefits while still preserving the value of the local cohort guided by an expert faculty member, as well as to propagate exemplary course designs and pedagogical practices more quickly.
- Assess authentically through authentic conversations in order to give credit for the higher order competencies that students display in authentic problem-solving conversations.
- Leverage the socially constructed nature of expertise (and therefore competence) in order to develop new assessment measures based on the students’ abilities to join, facilitate, and get the full benefits from trust networks.
I also argued that platform design and learning design are intertwined. One implication of this is that there is no platform that will magically make education dramatically better if it works against the grain of the teaching practices in which it is embedded. The two need to co-evolve.
This last bit is an exceedingly tough nut to crack. If we were to design a great platform for conversation-based courses but it got adopted for typical lecture/test courses, the odds are that faculty would judge the platform to be “bad.” And indeed it would be, for them, because it wouldn’t have been designed to meet their particular teaching needs. At the same time, one of our goals is to use the platform to propagate exemplary pedagogical practices. We have a chicken-and-egg problem.

On top of that, our goals suggest assessment solutions that differ radically from traditional ones, but so far we have only a vague idea of what they will be or how they will work. We don’t know what it will take to get them to the point where faculty and students generally agree that they are “fair” and that they measure something meaningful. This is not a problem we can afford to take lightly.

And finally, while one of our goals is to get teachers to share exemplary designs and practices, we will have to overcome significant cultural inhibitions to make this happen. Sometimes systems do improve sharing behavior simply by making sharing trivially easy—we see that with social platforms like Twitter and Facebook, for example—but it is not at all clear that just making it easy to share will improve the kind of sharing we want to encourage among faculty. We need to experiment to find out what it takes to help faculty become comfortable, or even enthusiastic, about sharing their course designs. Any one of these challenges could kill the platform if we fail to take it seriously.
When faced with a hard problem, it’s a good idea to find a simpler one you can solve that will get you partway to your goal. That’s what the use case I’m about to describe is designed to do. The first iteration of any truly new system should be designed as an experiment that can test hypotheses and assumptions. And the first rule of experimental design is to control the variables.
Of the three challenges I just articulated, the easiest one to get around is the assessment trust issue. The right use case should be an open, not-for-credit, not-for-certification course. There will be assessments, but the assessments don’t count. We would therefore be creating a situation somewhat like a beta test of a game. Participants would understand that the points system is still being worked out, and part of the fun of participation is seeing how it works and offering suggestions for improvement. The way to solve the problem of potential mismatches between platform and content is to test the initial release of the platform with content that was designed for it. As for the third problem, we need to pick a domain that is far enough away from the content and designs that faculty feel are “theirs” that the inhibitions regarding sharing are lower.
All of these design elements point toward piloting the platform with a faculty professional development cMOOC. Faculty can experience the platform as students in a low-stakes environment. And I find that even faculty who are resistant to talking about pedagogy in their traditional classes tend to be more open-minded when technology enters the picture, because it’s not an area where they feel they are expected to be experts. But it can’t be a traditional cMOOC (if that isn’t an oxymoron). We want to model the distributed flip, where there are facilitators of local cohorts in addition to large-group participation. This suggests a kind of “reading group” or “study group” structure. The body of material for the MOOC is essentially a library of content. Each campus-based group chooses to go through the content in its own way. It may cover all of it or skip some of it. It may add its own content. Each group will have its own space to organize its activities, but this space will be open to other groups. There will be discussions open to everyone, but groups and individual members can participate in those or not, as they choose. Presumably each group would have at least a nominal leader who would take the lead on organizing the content and activities for the local cohort. This would typically be somebody like the head of a Center for Educational Technology, but it could also be an interested faculty member, or the group could organize its activities by consensus.
To make the use case more concrete, let’s assume that the curriculum will revolve around the forthcoming e-Literate TV series on personalized learning. This is something that I would ideally like to do in the real world, but it also has the right characteristics for the current thought experiment. The heart of the series is five case studies of schools trying different personalized learning approaches:
- Middlebury College, an elite New England liberal arts school in rural Vermont
- Essex County College, a community college in Newark, NJ
- Empire State College, a SUNY school that focuses on non-traditional students and has a heavy distance learning program
- Arizona State University, a large public university with a largely top-down approach to implementing personalized learning
- A large public university with a largely bottom-up approach to implementing personalized learning
These thirty-minute case studies, plus the wrapper content that Phil and I are putting together (including a recorded session at the last ELI conference), cover a number of cross-cutting issues. Here are a few:
- What does “personalized” really mean? When (and how) does technology personalize, and when does it depersonalize?
- How does the idea of “personalized” change based on the needs of different kinds of students in different kinds of institutions?
- How do personalized learning technologies, implemented thoughtfully in these different contexts, change the roles of the teacher, the TA, and the students?
- What kinds of pedagogy seem to work best with self-paced products that are labeled as providing personalized learning?
- What’s hard about using these technologies effectively, and what are the risks?
That’s the content and the context. Since we’re going for something like a PBL (problem-based learning) design, the central problem that each cohort would need to tackle is, “What, if anything, should we be doing with personalized learning tools and pedagogical approaches in our school?” This question can be tackled in a lot of different ways, depending on the local culture. If it is taken seriously, there are likely to be internal discussions about politics, budgets, implementation issues, and so on. Cohorts might also be very interested in having conversations with other cohorts from peer schools to see what they are thinking and what their experiences have been. Not only that, they may also be interested in how their peers are organizing their campus conversations about personalized learning. This is the equivalent of sharing course designs in this model. And of course, there will hopefully also be very productive conversations across all cohorts, pooling expertise, experience, and insight. This sort of community “sharding” is consistent with the cMOOC design thinking that has come before. We’re simply putting some energy into both learning design and platform design to make that approach work with a facilitation structure that is closer to a traditional classroom setting. We’re grafting a cMOOC-like course design onto a distributed flip facilitation structure in the hopes of coming up with something that still feels like a traditional class in some ways but brings in the benefits of a global conversation (among teachers as well as students).
The primary goal of such a “course” wouldn’t be to certify knowledge or even to impart knowledge but rather to help participants build their intra- and inter-campus expertise networks on personalized learning, so that educators could learn from each other more and reinvent the wheel less. But doing so would entail raising the baseline level of knowledge of the participants (like a course) and could support the design goals. The e-Literate TV series provides us with a concrete example to work with, but any cross-cutting issue or change that academia is grappling with would work as a use case for attacking our design goals in a lower-risk environment than for-credit classes. The learning platform necessary to make such a course work would need to both support the multi-layered conversations and provide analytics tools to help identify the best posts and the community experts.
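As a toy illustration of the kind of analytics gestured at here, consider the simplest possible signal: participants marking each other's posts as helpful. Everything in this sketch is hypothetical; a real platform would draw on much richer signals than raw endorsement counts, but even this crude version shows how "best posts" and "community experts" can fall out of the same data.

```python
from collections import Counter

# Hypothetical endorsement log: (endorser, post, post_author).
endorsements = [
    ("ana", "post-1", "bob"),   # ana found bob's post-1 helpful
    ("cho", "post-1", "bob"),
    ("bob", "post-2", "ana"),
    ("dee", "post-3", "bob"),
]

# Rank posts by how many distinct endorsements they received,
# and authors by total endorsements across all of their posts.
post_scores = Counter(post for _, post, _ in endorsements)
author_scores = Counter(author for _, _, author in endorsements)

best_post = post_scores.most_common(1)[0][0]
top_expert = author_scores.most_common(1)[0][0]
print(best_post, top_expert)  # post-1 bob
```

The interesting design questions start where this sketch stops: weighting endorsements by the endorser's own standing, decaying old signals, and surfacing experts per topic rather than globally.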
In the next two posts, I will lay out the basic design of the system I have in mind. Then, in the final post of the series, I will discuss ways of extending the model to make it more directly suitable for traditional for-credit class usage.