David Wiley has a really interesting post up about Lumen Learning’s new personalized learning platform. Here’s an excerpt:
A typical high-level approach to personalization might include:
- building up an internal model of what a student knows and can do,
- algorithmically interrogating that model, and
- providing the learner with a unique set of learning experiences based on the system’s analysis of the student model
Our thinking about personalization started here. But as we spoke to faculty and students, and pondered what we heard from them and what we have read in the literature, we began to see several problems with this approach. One in particular stood out:
There is no active role for the learner in this “personalized” experience. These systems reduce all the richness and complexity of deciding what a learner should be doing to – sometimes literally – a “Next” button. As these systems painstakingly work to learn how each student learns, the individual students lose out on the opportunity to learn this for themselves. Continued use of a system like this seems likely to create dependency in learners, as they stop stretching their metacognitive muscles and defer all decisions about what, when, and how long to study to The Machine.
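The three-step loop quoted above, and the "Next button" critique, can be made concrete with a minimal sketch. All names and the mastery-update rule here are hypothetical illustrations, not any vendor's actual algorithm; the point is that the learner's only visible action is asking for the next item, while the modeling and reasoning stay internal.

```python
class NaiveAdaptiveSystem:
    """A toy version of the three-step personalization loop."""

    def __init__(self, skills):
        # Step 1: internal model of what the student knows (skill -> mastery 0..1)
        self.model = {skill: 0.0 for skill in skills}

    def update_model(self, skill, correct):
        # Crude mastery update from an observed answer
        delta = 0.2 if correct else -0.1
        self.model[skill] = min(1.0, max(0.0, self.model[skill] + delta))

    def next_activity(self):
        # Steps 2 and 3: interrogate the model and prescribe the next experience.
        # The rationale never surfaces to the learner -- just a "Next" button.
        weakest = min(self.model, key=self.model.get)
        return f"practice:{weakest}"

system = NaiveAdaptiveSystem(["fractions", "decimals", "percents"])
system.update_model("fractions", correct=True)
system.update_model("decimals", correct=False)
print(system.next_activity())  # the learner sees only this, not the reasoning
```

Nothing in the loop asks the learner what they want or shows them why the choice was made, which is exactly the dependency problem Wiley describes.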
Instructure’s Jared Stein really likes Lumen’s approach, writing,
So much work in predictive analytics and adaptive learning seeks to relieve people from the time-consuming work of individual diagnosis and remediation — that’s a two-edged sword: Using technology to increase efficiency can too easily sacrifice humanness — if you’re not deliberate in the design and usage of the technology. This topic came up quickly amongst the #DigPedNetwork group when Jim Groom and I chatted about closed/open learning environments earlier this month, suggesting that we haven’t fully explored this dilemma as educators or educational technologists.
I would add that I have seen very little evidence that either instructors or students place a high value on the adaptivity of these products. Phil and I have talked to a wide range of folks using these products, both in our work on the e-Literate TV case studies and in our general work as analysts. There is a lot of interest in the kind of meta-cognitive dashboarding that David is describing. There is little interest in, and in some cases active hostility toward, adaptivity. For example, Essex County College is using McGraw-Hill’s ALEKS, which has one of the more sophisticated adaptive learning approaches on the market. But when we talked to faculty and staff there, the aspects of the program that they highlighted as most useful were a lot more mundane, e.g.,
It’s important for students to spend the time, right? I mean learning takes time, and it’s hard work. Asking students to keep time diaries is a very difficult ask, but when they’re working in an online platform, the platform keeps track of their time. So, on the first class day of the week, that’s goal-setting day. How many hours are you going to spend working on your math? How many topics are you planning to master? How many classes are you not going to be absent from?
I mean these are pretty simple goals, and then we give them a couple goals that they can just write whatever they feel like. And I’ve had students write, “I want to come to class with more energy,” and other such goals. And then, because we’ve got technology as our content delivery system, at the end of the week I can tell them, in a very efficient fashion that doesn’t take up a lot of my time, “You met your time goal, you met your topic goal,” or, “You approached it,” or, “You didn’t.”
So one of the most valuable functions of this system in this context is to reflect back to the students what they have done in terms that make sense to them and are relevant to the students’ self-selected learning goals. The measures are fairly crude—time on task, number of topics covered, and so on—and there is no adaptivity necessary at all.
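The weekly check-in described above needs only those crude measures. Here is a sketch, with hypothetical goal names and an arbitrary 75% threshold for "approached," of how a platform could reflect logged activity back against a student's self-selected goals:

```python
def goal_status(goal, actual, approach_ratio=0.75):
    """Return 'met', 'approached', or 'missed' for one weekly goal."""
    if actual >= goal:
        return "met"
    if actual >= approach_ratio * goal:
        return "approached"
    return "missed"

# Hypothetical self-selected goals and platform-logged actuals for one week
weekly_goals = {"hours": 6, "topics": 8, "classes_attended": 3}
logged = {"hours": 6.5, "topics": 7, "classes_attended": 2}

report = {name: goal_status(weekly_goals[name], logged[name])
          for name in weekly_goals}
print(report)
```

No student model or adaptive algorithm is involved; the instructor's "you met it, you approached it, you didn't" feedback is a straight comparison of goals to logged totals.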
But I also think that David’s post hints at some of the complexity of the design challenges with these products.
You can think of the family of personalized learning products as having potentially two components: diagnostic and prescriptive. Everybody who likes personalized learning products in any form likes the diagnostic component. The foundational value proposition for personalization (which should not in any way be confused with “personal”) is having the system provide feedback to students and teachers about what the student does well and where the student is struggling. Furthermore, the perceived value of the product is directly related to the confidence that students and teachers have that the product is rendering an accurate diagnosis. That’s why I think products that provide black box diagnoses are doomed to market failure in the long term. As the market matures, students and teachers are going to want to know not only what the diagnosis is but also what the basis of the diagnosis is, so that they can judge for themselves whether they think the machine is correct.
Once the system has diagnosed the student’s knowledge or skill gaps—and it is worth calling out that many of these personalized learning systems work on a deficit model, where the goal is to get students to fill in gaps—the next step is to prescribe actions that will help students to address those gaps. Here again we get into the issue of transparency. As David points out, some vendors hide the rationale for their prescriptions, even going so far as to remove user choice and just hide the adaptivity behind the “next” button. Note that the problem isn’t so much with providing a prescription as it is with the way in which it is provided. The other end of the spectrum, as David argues, is to make recommendations. The full set of statements from a well-behaved personalized learning product to a student or teacher might be something like the following:
- This is where I think you have skill or knowledge gaps.
- This is the evidence and reasoning for my diagnosis.
- This is my suggestion for what you might want to do next.
- This is my reasoning for why I think it might help you.
It sounds verbose, but it can be done in fairly compact ways. Netflix’s “based on your liking Movie X and Movie Y, we think you would give Movie Z 3.5 stars” is one example of a compact explanation that provides at least some of this information. There are lots of ways that a thoughtful user interface designer can think about progressively revealing some of this information and providing “nudges” that encourage students on certain paths while still giving them the knowledge and freedom they need to make choices for themselves. The degree to which the system should be heavy-handed in its prescription probably depends in part on the pedagogical model. I can see something closer to “here, do this next” feeling appropriate in a self-paced CBE course than in a typical instructor-facilitated course. But even there, I think the Lumen folks are 100% right that the first responsibility of the adaptive learning system should be to help the learner understand what the system is suggesting and why so that the learner can gain better meta-cognitive understanding.
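The four statements can travel together as one simple data structure. This is only a sketch with hypothetical field names and sample content, not any product's schema; it shows diagnosis, evidence, suggestion, and rationale being surfaced together rather than hidden behind a "Next" button.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    diagnosis: str       # where the system thinks the gap is
    evidence: list       # the observations behind the diagnosis
    suggestion: str      # what to do next -- a suggestion, not a command
    rationale: str       # why the system thinks it might help

    def explain(self):
        # Compact, Netflix-style rendering of the full explanation
        return (f"{self.diagnosis} (based on: {', '.join(self.evidence)}). "
                f"Suggestion: {self.suggestion}, because {self.rationale}.")

rec = Recommendation(
    diagnosis="You may have a gap in adding fractions",
    evidence=["2 of 5 practice items missed", "hints used on each item"],
    suggestion="review the common-denominator tutorial",
    rationale="it targets the step where your errors occurred",
)
print(rec.explain())
```

A user interface could then reveal these fields progressively, leading with the suggestion and keeping the evidence and rationale one tap away.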
None of which is to say that the fancy adaptive learning algorithms themselves are useless. To the contrary. In an ideal world, the system will be looking at a wide range of evidence to provide more sophisticated evidence-based suggestions to the students. But the key word here is “suggestions.” Both because a critical part of any education is teaching students to be more self-aware of their learning processes and because faulty prescriptions in an educational setting can have serious consequences, personalized learning products need to evolve out of the black box phase as quickly as possible.
Fred M Beshears says
The third step of the three-step, high-level model of a personalized learning environment fails to mention two important ingredients. For review, here are the three steps:
1. building up an internal model of what a student knows and can do,
2. algorithmically interrogating that model, and
3. providing the learner with a unique set of learning experiences based on the system’s analysis of the student model
Notice that the third step fails to specify where the system’s goals come from. Presumably, it knows ‘what the student knows and can do [and will do under various conditions?]’. This is the status quo ante.
However, how does the system know what the student wants to become? In other words, how does the system determine what changes the student wants to make to the status quo?
In a non-personalized learning environment, the problem can be addressed by having the faculty of the institution specify a set of courses the student can choose to take. Each course comes with a course description, prerequisites, learning activities, and learning objectives.
However, in a personalized learning environment, one critical question is the extent to which the burden of specifying learning activities and objectives should fall on the student.
Perhaps the personalized learning environments of the future will be semi-automated career and life-coach counsellors. Today, some career and life-coach counsellors use personality profile questionnaires and aptitude tests to help determine the set of careers and life plans that would best suit their clients.
At the extreme end of the personality profile spectrum, there are very large personality profiles consisting of more than 100,000 questions. Obviously, almost no one would want to manually fill out one of these questionnaires. However, programmers could use all of one’s Facebook posts, email, blog posts, and other online writing as inputs to an algorithm that simulates answers to one or more of these extensive personality profile questionnaires that have been constructed by psychologists.
Such a personality profile could be used to create a personalized learning environment. And it could be used for a variety of other purposes. For example, Martine Rothblatt describes how we might create mind clones of ourselves to act as our agents and to fill in for us. Our mind clone(s) could perform many of the routine tasks we currently perform in the various roles we play at school, at work, or in life.
Here’s how Rothblatt defines the three parts of a mind clone and its environment in her book Virtually Human: the promise – and peril – of digital immortality.
1. a Mindfile – “A set of stored digital information about a person, such as the totality of one’s social media posts, saved media and other data relating to one’s life, intended to be used for the creation of a mindclone.”
2. Mindware – “Software that functions as an operating system for an artificial consciousness, including the capability to extract from a mind file the personality of the individual who is the subject of the mindfile and to replicate that personality via operating-system settings.”
3. Mindclone – “A humanly cyberconscious being designed to replicate the consciousness immanent in a mindfile of another person. A digital dopperganger and extended identity of another person.”
For more on Rothblatt’s book, see my review at:
Virtually Human: the promise – and peril – of digital immortality
IMO, the design of future personalized learning environments and the extensive learner information profile(s) required to make them work could be important components of the mind clone industry Rothblatt has in mind. So, perhaps educators should start thinking about how they would (or whether they would) want to be a part of such an enterprise.