The New Yorker published an article yesterday titled “A MOOC Mystery: Where Do Online Students Go?” which tries to explain low MOOC completion rates by comparing the situation to the General Educational Development (GED) exam. Right off the bat, the article conflates MOOCs with “online students”. MOOCs are but one form of online education, and a very recent one at that. Worse, however, is that the entire basis for the article is quite flawed – GED results give little insight into MOOC student patterns, and it turns out there is not much of a mystery in the first place.
The hook in this article seems to be the coincidence of two numbers [emphasis added]:
When the Times declared 2012 the “Year of the MOOC,” it seemed, in the words of the paper, that “everyone wants in,” with schools, students, and investors eager to participate. But, as can happen in academia, early ambition faded when the first few assessments were returned, and, since then the open-online model appears to have earned an incomplete, at best. An average of only four per cent of registered users finished their MOOCs in a recent University of Pennsylvania study, and half of those enrolled did not view even a single lecture. EdX, a MOOC collaboration between Harvard and the Massachusetts Institute of Technology, has shown results that are a little more encouraging, but not much. And a celebrated partnership between San Jose State and Udacity, the company co-founded by Sebastian Thrun, a Stanford professor turned MOOC magnate, also failed, when students in the online pilot courses consistently fared worse than their counterparts in the equivalent courses on campus.
Some of the problems encountered by MOOCs echo those of an earlier model of alternative learning. Last month, the General Educational Development exam, or G.E.D., was replaced by a more challenging computer version. Like MOOCs, the G.E.D., which has been around since 1942, is partially an attempt to save time and money in education, and to extend opportunity to students outside the traditional classroom. As a marker of high-school equivalence, it holds the promise that an entire academic career can be distilled into the knowledge required to pass a five-part exam.
But according to a September, 2013, American RadioWorks report, of the forty per cent of G.E.D.-holders who go on to college, fewer than half complete more than a year, and only about four per cent earn a four-year degree. The additional rigor of the redesigned exam might not be the solution. The military tried a similar approach when, in the nineteen-seventies, it raised the G.E.D. scores required for entry. Even then, G.E.D. applicants quit or were thrown out of the service at a higher rate than enlistees with high-school degrees.
Get it? Oh, the possible conclusions we can draw now that we’ve established this remarkable insight!
There might be just a few problems with this analogy, however.
- The GED is targeted at high school students who did not or could not complete their high school education and graduate; MOOCs appeal for the most part to working adult professionals who already have at least a bachelor’s degree (according to the same U Penn study cited by the New Yorker, an “overwhelming 83 percent already have a two-year or four-year degree” and “44 percent have advanced degrees”).
- The “half” and “4%” numbers in the GED study are based on whether or not GED-holders earned a four-year college degree; other than several pilot programs, MOOCs offer no credit towards a degree.
- The GED is an official government program to grant a credential; MOOCs are based on open education in that anyone can sign up, and for the most part the learners do not care about certificates or any acknowledgement of completion.
- The GED is a test – passing the test is the point, not learning; MOOCs are learning opportunities – for the majority of learners, access to educational content is the point, not testing.
- The SJSU / Udacity courses were not MOOCs – they were non-massive, controlled access online courses.
Before we go on, let me point out that I am not making an argument that MOOC completion rates are a non-issue, nor am I arguing that MOOCs are solving higher education problems. What I am pointing out is that the New Yorker is basing its whole article on a faulty analogy.
Not only is the analogy flawed, but this simplistic focus on MOOC course completion is itself misguided, as was pointed out in the #1 takeaway of the HarvardX / MITx study linked by the New Yorker article:
Takeaway 1: Course completion rates, often seen as a bellwether for MOOCs, can be misleading and may at times be counterproductive indicators of the impact and potential of open online courses.
The researchers found evidence of large numbers of registrants who may not have completed a course, but who still accessed substantial amounts of course content. Across the 17 MITx and HarvardX courses covered in the reports, 43,196 registrants earned certificates of completion. Additionally, another 35,937 registrants explored half or more of the units in a course without achieving certification.
Later in the article, the author acknowledges this exact flaw in the premise of his own piece:
But students may go into an online course knowing that a completion certificate, even offered under the imprimatur of Harvard or UPenn, doesn’t have the same worth. A recent study by a team of researchers from Coursera found that, for many MOOC students, the credential isn’t the goal at all. Students may treat the MOOC as a resource or a text rather than as a course, jumping in to learn new code or view an enticing lecture and back out whenever they want, just as they would while skimming the wider Web.
But by this point, the author has already drawn several conclusions from his pithy insight, so who cares about context?
If the New Yorker wants to explore the MOOC mystery, it turns out that what is happening with MOOC students is not such a mystery at all – or at least there is a fair amount of recent and ongoing research into the subject. Here is a graphic that captures some of the MOOC student patterns and is in alignment with more formal studies at Stanford and MIT.
But even better, it turns out that the U Penn study was actually presented at the MOOC Research Initiative Conference. That’s right – an entire conference based on real research into MOOC student patterns. From e-Literate TV, we have a YouTube channel populated with interviews with the MRI conference grantees – there’s a ton of insight available there. Here’s one in particular where Michael interviews Martin Weller from the Open University about their research data:
Unfortunately, I’m sure the New Yorker article will get plenty of airplay. I just hope more people ask some tough questions before jumping into the resulting debates.
Update: I should point out that several of the author’s conclusions about MOOCs have real merit, especially the need for more social interaction and the observation that MOOCs are incomplete but have potential. These points can be lost, however, amid the faulty analogy and setup.