Michael and I have been at the MOOC Research Initiative conference in Arlington, TX (#mri13) for the past three days. Actually, thanks to the ice storm it turns out MRI is the Hotel California of conferences.
[Image credit: Bailey Carter, assignment for Laura Gibbs’ class]
While I’m waiting to find out which fine Texas hotel dinner I might enjoy tonight, I thought it would be worthwhile to share more information from the University of Pennsylvania research that seems to be the focus of media reports on the conference (see Chronicle, Inside Higher Ed, and eCampusNews, for example). Penn has tracked approximately one million students through its 17 first-generation MOOCs on Coursera, and that data set provided the foundation for this research.
Per IHE:
“Emerging data … show that massive open online courses (MOOCs) have relatively few active users, that user ‘engagement’ falls off dramatically especially after the first 1-2 weeks of a course, and that few users persist to the course end,” a summary of the study reads.
For anyone who has paid even the slightest bit of attention to the MOOC space over the past year, those conclusions hardly qualify as revelations. Yet some presenters said they felt the first day of the conference served as an opportunity to confirm some of those commonly held beliefs about MOOCs.
While it is accurate that these basic observations have been made in the past, there was some additional information from U Penn worth considering. The following slide images are courtesy of Laura Perna, a member of the research team.
The research team (but apparently not the faculty members) classified only two of the courses studied as targeted at college students (Single-variable Calculus and Principles of Microeconomics). There were seven courses targeted at “occupational” students (Cardiac Arrest, Gamification, Networked Life, Intro to Ops Management, Fundamentals of Pharmacology, Scarce Medical Resources, and Vaccines) and eight for “enrichment” (ADHD, Artifacts in Society, Health Policy and ACA, Genome Science, Modern American Poetry, Greek and Roman Mythology, Listening to World Music, and Growing Old). Update: I have changed the language in this paragraph based on commentary from one of the MOOC faculty; see the clarification at the end of the article.
As the Chronicle pointed out, there was a wide variation in these courses.
The courses varied widely in topic, length, intended audience, amount of work expected, and other details. The largest, “Introduction to Operations Management,” enrolled more than 110,000 students, of whom about 2 percent completed the course. The course with the highest completion rate, “Cardiac Arrest, Resuscitation Science, and Hypothermia,” enrolled just over 40,000 students, of whom 13 percent stuck with it to the end.
This variation included the use of teaching assistants.
The research sorted the student population into several categories:
- Users – all students who registered for the course, regardless of time frame.
- Registrants – the subset of Users who registered any time from before the course began through its last week. The difference is interesting, as quite a few Users registered well after the course was over, essentially opting for a self-paced experience. We have seen very little analysis of this difference.
- Starters – students who logged into the course and showed some basic course activity.
- Active Users – students who watched at least one video (I’m not 100% sure this is accurate, but it is close).
- Persisters – students who were still active within the last week of the course.
Given these categories, the Penn team showed percentages across all of the courses in question. The completion rate (the % of Registrants who were Persisters) varied from 2% to 13%. More useful, in my opinion, was the view of all categories across all courses.
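To make these definitions concrete, here is a minimal sketch of how the funnel might be computed from platform event logs. To be clear, this is my own illustration: the record fields, names, and thresholds are assumptions, not the Penn team’s actual schema or code.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, Optional

# Hypothetical per-student record; Coursera's real export schema is not
# public, so these field names are invented for illustration.
@dataclass
class StudentRecord:
    registered_at: datetime
    logged_in: bool                     # ever entered the course site
    videos_watched: int                 # count of video views
    last_active_at: Optional[datetime]  # None if never active

def classify(s: StudentRecord, course_end: datetime,
             final_week_start: datetime) -> set[str]:
    """Assign the Penn-style categories to one student."""
    cats = {"User"}                     # registered, regardless of when
    if s.registered_at <= course_end:
        cats.add("Registrant")          # registered before the course ended
    if s.logged_in:
        cats.add("Starter")
    if s.videos_watched >= 1:
        cats.add("Active User")         # watched at least one video
    if s.last_active_at is not None and s.last_active_at >= final_week_start:
        cats.add("Persister")           # still active in the last week
    return cats

def completion_rate(students: Iterable[StudentRecord], course_end: datetime,
                    final_week_start: datetime) -> float:
    """Completion rate as defined above: % of Registrants who were Persisters."""
    cats = [classify(s, course_end, final_week_start) for s in students]
    registrants = [c for c in cats if "Registrant" in c]
    persisters = [c for c in registrants if "Persister" in c]
    return 100.0 * len(persisters) / len(registrants) if registrants else 0.0
```

The division at the end mirrors the completion figures quoted above: Persisters as a share of Registrants, per course.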
And finally, they showed the pattern of MOOC activity over time, as illustrated by this view of quiz activity in one course: a steep drop-off in week one, followed by a slower decline over the remaining weeks.
Notes
1) Which Categories – I think the team missed an opportunity to build on the work of the Stanford team, which identified different student patterns with more precision (see Stanford report here and my graphical mash-up here).
2) Self-Paced – As mentioned above, the separation between students who registered during the course’s official time frame (Registrants) and those who registered after the course was over is interesting. This latter group ranged from 2% to 23% of all Users, which is significant (a sketch of this calculation follows these notes). Thousands and even tens of thousands of students are choosing to register and access course material when the course is not even “running”. They would have access to open materials, quizzes, and presumably assignments on a self-paced basis, but would likely have no interaction with other students or the faculty.
3) Learner Goals – As was discussed frequently at the conference (but not in the news articles about it), when you open up enrollment in a course, one result is that you get a variety of student types with different goals. Not everyone wants to “complete” a course, and it is a mistake to focus solely on “course completion” when evaluating MOOCs. For future research, I would hope that Penn and others find a way to determine learner goals near the beginning of a course and then measure whether students met those goals, whether they finished or dropped out.
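The self-paced share mentioned in note 2 falls out of the same hypothetical record structure: it is simply the fraction of all Users whose registration timestamp lands after the course end date. A sketch, reusing the invented StudentRecord type from the earlier example:

```python
def self_paced_share(students: list[StudentRecord],
                     course_end: datetime) -> float:
    """Percent of all Users who registered only after the course ended
    (reported above as ranging from 2% to 23% across courses)."""
    late = [s for s in students if s.registered_at > course_end]
    return 100.0 * len(late) / len(students) if students else 0.0
```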
Update (12/7): From the comments, Kevin Werbach, a Penn professor who taught one of the MOOCs in the study (Gamification), has provided some clarifications that I feel are important enough to include within the article.
I’m glad to see the Penn research getting so much attention, but it seems it primarily confirms what all other studies have shown.
As far as I know, the researchers didn’t have any contact with the faculty teaching the courses, so some of their statements are generalizations. E.g., I’m not sure what it means for a course to be “targeted at college students”: I teach the in-person version of my course (Gamification) to college students, and I would think most of the people who study modern poetry do so in college.
Also, I wouldn’t take the TA numbers too seriously. There’s a big difference between an undergrad and a PhD student in the field, for example, and those numbers don’t indicate how much time they worked or whether they were paid. And it looks like they confused the two sessions of my course. The first one (which seems to be what they looked at) had 1 TA. In the second session, I experimented with using two MBA students supervising 4 undergrads (hence the 6), which worked poorly.
Finally, including people who signed up after the course ended seems very odd, especially when one of the metrics is what percentage were in the course at the time it ended. Plus Coursera implemented their Watchlist feature somewhere in the middle of this process, which I think would significantly change the post-course registration behavior.
Full disclosure: Coursera has been a client of MindWires Consulting.
Maha Bali (@Bali_Maha) says
Thanks for the deeper and slightly more critical insight. I think generalizations about MOOCs and MOOC learners are very unhelpful to understanding their impact and usefulness, so am glad ppl are starting to say that more explicitly (obvious though it is!)
Maha Bali (@Bali_Maha) says
I would actually be interested to know about differences in design of these UPenn MOOCs. The one I completed was very cognitive behaviorist and i completed it because assessment was just quizzes that tested recall. I.e. i completed it because it was easy, not because i was engaged (also, it was my first MOOC so i still hadn’t become comfortable dropping out, as i do now with less interesting MOOCs). It is hard to get dropouts to respond to surveys explaining why they dropped out, isn’t it, though? Many won’t respond, even though i have seen questions directed at them in end-of-MOOC surveys
Phil Hill says
Maha, you’re right that it’s hard to get dropouts to respond to surveys, but there could be periodic pop-up survey-style questions or even multiple-choice attitudinal questions. When someone drops out, you could go back to their last answers and get a sense of whether they were meeting their goals. I’m not suggesting that it’s trivial, but there could be some attempt to gather this information without using the blunt-force survey method. But even with surveys based on email registration info, how many times have we seen an attempt to ask drop-outs for their feedback? Not enough, IMO.
And great quote – “i completed it because it was easy, not because i was engaged”.
Kesiena Okooboh says
Hi Phil
The findings are very interesting. I had registered for three MOOCs but was unable to complete any of them (a pity on my part). The main reason was lack of time, as I am currently studying for my doctorate and working full time as well. I have registered for two short MOOCs with FutureLearn commencing in January and March 2014 (3 weeks each). From my perspective, the longer the course, the harder it is to complete. I would recommend keeping the timeframe for courses short, to facilitate completion.
Kesiena Okooboh says
Another important finding (based on anecdotal evidence) is that a lot of ‘learners’ register for MOOC courses that they have no interest in completing. They just register for the sake of it, and this could be responsible for some conflicting results.
Phil Hill says
Kesiena, I agree about registrants who have no intention of completing. This is analogous to Netflix users putting movies in their instant queue with no definite plans to watch them, or (as I heard at MRI13) a student circling or bookmarking a course in the course catalog without deciding to take it. These registrations are not that relevant to “course completion” statistics, at least compared to traditional f2f courses.
But MOOC marketers have made a point of hyping these registration numbers, so it’s a bit of a ‘hoist with your own petard’ situation. It’s not actually helpful, but it is somewhat self-inflicted.
Nicole Wang says
Hi Phil,
Thanks for the great comments about our presentation. Several things I’d like to point out from my personal perspective:
1. ‘Active User’ is correct. One of my colleagues dug into the video viewer data file and generated the result you saw.
2. We are still in the stage of cleaning up messy data, and we have many open questions. In addition, unlike edX, Coursera does not provide users’ personal information (age, gender, education level, etc.) to us. It is difficult to cluster users when we do not have those control variables.
3. TA count is just one factor we pulled from the syllabi; there are other factors we noted as well. We will certainly share our research with the instructors once we have more findings. We are truly grateful to them for their great help.
4. Self-paced students can still interact with faculty or other students on the discussion boards. Also, Penn’s Coursera courses close three months after the courses end. But your perspective is really interesting; I will put more thought into it.
Thanks again for the comments – I really appreciate them!
Regards,
Nicole
Phil Hill says
Nicole, thanks for comments – it’s helpful to hear directly from a member of the research team.
I think the question raised by Kevin (or my reading of Kevin’s comment) is not just about ‘sharing research with instructors’ but more about getting their input and verifying the data context ahead of time. Do you plan on interviewing the MOOC profs or getting their input before the final report? And yes, I realize that is a leading question.