Update (3/10): Patterns and descriptions have been updated based on feedback in a new post.
As discussed in my last post, the focus on “completion rates” in MOOCs is somewhat misplaced, as open education is not simply an extension of traditional education. As several others have noted, not every student is attempting to complete a course, and in fact different students have different goals while participating in the same open course. This holds true for both cMOOCs and xMOOCs.
Does this mean that we should throw out the completion rate data? No. As Katy Jordan described quite well in the comments:
A lot of people have asked whether completion rates are the right way of framing the success of a MOOC; I agree that there is much more to the potential positive impacts of MOOCs for students than completion rate but, at the moment, completion rate is what the providers are measuring most consistently.
In my mind, we should augment the models we use to evaluate MOOCs rather than throw the baby out with the bathwater. The challenge, therefore, is to move beyond the simplistic view of one type of student with one type of goal (course completion), and find patterns of student behavior that will give additional insight into the different goals and therefore different measures we should have in evaluating whether MOOCs are effective.
Study Based on Change11
In 2011-2012, as part of the Change11 course (a connectivist course, or cMOOC, facilitated by George Siemens, Dave Cormier and Stephen Downes), the Caledonian Academy, a research group in Scotland, was given access to conduct surveys and follow-up interviews as part of a research study to help understand the student population.
The first component of the study was to ask participants to complete an SRL profile instrument* we had developed for the study. The instrument was adapted from a number of pre-existing SRL self-report instruments (full details, and a copy of the instrument are here), most notably the Motivated Strategies for Learning Questionnaire (Pintrich et al., 1991) and a more recent Self-directed Learning Orientation scale developed by Raemdonck (Gijbels et al., 2010). [snip]
We saw different patterns of engagement. In addition to an expected cluster of lurkers who purposefully did not engage with other course participants, we identified two further groups: one group of passive participants, who expected ‘to be taught’, and viewed the course as a source of information, attempting to capture all the ideas being exchanged within the Change 11 community; and a final group, more active participants, who set their own goals, established connections with other learners and linked these connections with their existing personal learning network. [emphasis added]
Based on various first-hand descriptions of MOOCs over the past year, I would propose a fourth pattern – the Drop-In – in which students direct most of their active participation toward a particular topic within the course or a particular discussion thread.
The Four Student Archetypes
This leaves us with four student archetypes to consider (note that these are emerging patterns based on partial information, and these descriptions may need to change as we get more data):
- Lurkers – This is the majority of students within xMOOCs, where people enroll but just observe or sample a few items at the most. Many of these students do not even get beyond registering for the MOOC or maybe watching part of a video.
- Passive Participants – These are students who most closely align with traditional education students, viewing a course as content to consume. These students typically watch videos, perhaps take quizzes, but tend to not participate in activities or class discussions.
- Active Participants – These are the students who fully intend to participate in the MOOC, including consuming content, taking quizzes and exams, taking part in activities such as writing assignments and peer grading, and actively participating in discussions via discussion forums, blogs, Twitter, Google+, or other forms of social media.
- Drop-Ins – These are students who become partially or fully active participants for a select topic within the course, but do not attempt to complete the entire course.
These are not static patterns, in that students may move from one archetype to another. Lurkers may decide that they should spend more time in the course and become passive participants. Passive participants may become more engaged and become active participants over time. Of course, any of these students may also drop out and leave the course.
These student archetypes generally have different goals. Lurkers may not have specific goals beyond finding out what the course is about or doing a "drive-by" evaluation of whether the course merits more time and attention. Passive participants, as discovered in the Change11 MOOC, may desire just to experience the MOOC platform or course design.
One problem with our study which we hadn’t anticipated (but perhaps should have) was that individual participants might have quite different (conflicting?) reasons for signing up. While some participants signed up for the content of the course, others (the majority) were primarily or exclusively interested in experiencing the Change 11 MOOC as a learning environment, often because they wanted to implement some of the features of a MOOC in their own practice.
I should also note that while the student archetypes are somewhat based on the general goals for taking a course, there are also important, but largely unexplored, questions on why students leave a MOOC. As Laura Gibbs described in several Google+ discussions, leaving a course because you got what you wanted is very different than leaving due to abusive discussion forums.
Whither Completion Rates
How would our understanding change if we understood the different student archetypes and goals for enrolling in MOOCs? I believe we would end up with better feedback to improve the MOOC models, and a more realistic discussion about the impact of MOOCs. Katy's data curation and visualization, based on the data available, is invaluable, and I think her links to the source data might give us the insight to build on the prevailing model and more closely understand student goal completion.
Completion rate should really be measured for active participants. For those students who planned to complete the course and participate in all or most activities, how many ended up achieving that goal and completing the course?
Let’s consider Internet History, Technology and Security taught by Charles Severance in 2012. By traditional measures (as captured by Katy) there were roughly 46k students enrolled with 4.6k students who received a certificate, leading to a completion rate of 10%.
But look a little closer at the data using Katy’s links:
There were 11.6k students who completed the first week of activities – a rough measure of active participants. Using the four student archetypes, the completion rate was closer to 40%. The true rate was likely higher, since the 11.6k figure includes Drop-Ins who never intended to complete the full course. But for now, we don't have the data to accurately separate out this group.
To me, 11.6k students actively participating with 40% completing the course is a more meaningful measure than 46k students enrolled with a 10% completion rate. Clearly the majority of the 46k never intended to participate in the whole course. In a traditional face-to-face course, would we include everyone who checked out the syllabus, or students auditing the course, in the completion rate measurements? No, we would only count students who indicate through the add/drop period that they intend to fully take the course.
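To make the arithmetic concrete, here is a minimal sketch of the two calculations using the approximate figures above (an illustration in Python; the round numbers are the ones cited, not exact platform counts):

```python
# Rough completion-rate arithmetic for Internet History, Technology and Security (2012),
# using the approximate figures cited above (round numbers, not exact platform counts).
enrolled = 46_000            # students who registered for the course
first_week_active = 11_600   # students who completed the first week of activities
certificates = 4_600         # students who earned a certificate

# Traditional measure: certificates as a share of everyone who enrolled.
traditional_rate = certificates / enrolled          # ~0.10, i.e. about 10%

# Archetype-aware measure: certificates as a share of (roughly) active participants,
# approximated by first-week activity. Still conservative, since the denominator
# includes Drop-Ins who never intended to finish the whole course.
active_rate = certificates / first_week_active      # ~0.40, i.e. about 40%

print(f"Traditional completion rate: {traditional_rate:.0%}")
print(f"Active-participant completion rate: {active_rate:.0%}")
```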
I’d appreciate feedback on these patterns – feel free to comment below or in the Google+ post.
Polly says
“Passive Participants – These are students who most closely align with traditional education students, viewing a course as content to consume. These students typically watch videos, perhaps take quizzes, but tend to not participate in activities or class discussions.”
Can you point to the evidence for this statement about ‘traditional education students’?
Phil Hill says
Polly, good question and as I read it there certainly is an editorial slant to the statement.
It comes from the Caledonian Academy study and interviews, and their comment regarding the passive participants: "those more traditional learners in the middle group really wanted more guidance than they were given". The study's origins within a connectivist MOOC probably explain the comparison between traditional students and those fully participating in knowledge creation as active learners.
I might need to reconsider whether this description is appropriate across all MOOCs. Your thoughts?
Charles says
There’s definitely a bias that’s misleading. Take, for example, the phrase “a final group, more active participants, who set their own goals.” This phrase suggests that the so-called “passive participants” didn’t set their own goals when it’s just as likely that their goals were simply different. It’s also a privileging of one type of behavior as “active” learning, when in fact, someone who doesn’t physically participate may be actively learning just as much if not more than the “active participants” from the perspective of thinking more deeply about the course material.
Michael Feldstein says
To be honest, Charles, I have limited patience for radical relativism regarding student behavior. Yes, yes, sure, we don't know for certain what the goals of the individual "passive learners" were. But the existence of a separate category of "lurkers" suggests that passive learners take a particular attitude toward education which is often taught in our school systems and which leads to less effective education. I don't think there's anything disrespectful or morally suspect about asserting the possibility that some students are doing themselves a disservice by incorrectly assuming that education is something that happens to them. To the contrary, I believe we have a moral obligation to address this problem head-on and help these students become more effective learners by taking responsibility for their own education.
VanessaVaile says
Recently, in another comment, I described myself as an active community and higher ed blogger and a serial MOOCer (a chronic non-completer too) who participates moderately (sometimes more, sometimes less). I am hugely interested in education in general, and in this evolving model in particular, and often act as a go-between, translating and explaining to academic and local connections.
New models probably call for new categories and vocabularies. For now, we are making do with existing terms.
Charles says
Michael, I can imagine that for some this would be an example of radical relativism, but I can use myself as an example. Sometimes, I prefer to listen. Usually that's when I feel that others know a lot more than I do (although sometimes I just don't have the time to "engage" as expected). I also note that in many discussions, people don't really contribute much of substance. They may simply agree with what's being said. In my own case, if I don't have a contribution beyond what's already been stated, I see no reason to babble uselessly. My goal is to learn: I take notes and think about what's being discussed.
Besides, anyone who teaches or can remember being a student knows that there are always a few students who "hog" the conversation, and they usually weren't the ones who understood the topic best. They simply liked to talk.
So, I agree that students need to take responsibility for their learning, but that doesn’t mean that they need to conform to others’ expectations for learning behavior. What is needed is the research that finds out why people behave the way they do in a learning situation, rather than assuming why they’re behaving in a particular way.
Michael Feldstein says
That’s a fair point.
Debbie Morrison says
I agree with Phil’s position stated here that the research presented is a good starting point – to begin to analyze learner behavior patterns and motivations for enrolling, engaging, not-engaging, etc. within a MOOC. We may be missing the bigger picture about the value, function and potential of MOOCs if we focus on the completion rate (or conversely the drop-out rate) as a metric for evaluating the effectiveness. Though as a starting point for sure — however the scope must be larger by asking more questions to really be able to uncover the motivations of learners.
I suggest that rather than asking why learners fail to complete the MOOC [drop out], we ask questions to determine learners' motivations for signing up: i.e. planned learning path, personal learning goals, planned participation levels [or not]. We may not need to ask 'why didn't you finish the MOOC?' as the answer will be inherent in the answers to these questions. On the other hand, some learners may not finish the MOOC even though they fully intended to do so (as indicated by the student's stated intended learning path); these are the students we would want to ask, 'what held you back from completing the course?' Qualitative, rather than quantitative, data will be more useful at this point when exploring student behaviours.
Thanks Phil for the post!
Colin Milligan says
Hello Phil (and all). Thanks for this post and for building on our work. I thought it would be helpful to add my thoughts on drop-ins and also address some of the issues which have arisen in the comments regarding the study and the nomenclature we chose.
We certainly recorded the drop-in behaviour you describe, but in our classification we categorised these learners as part of a broader ‘lurkers’ category – as they were participating on their own terms*. I’d also agree with your next assertion that the patterns aren’t static – indeed I would say that we didn’t necessarily pick up anyone who went into Change11 intending to be a drop-in, though once they recognised the format, some realised that this was a sensible participation strategy to adopt.
We then get to the issue of ‘passive participants’. In the original blog post I drew the distinction between ‘active participants’, who ‘got’ the connectivist nature of Change11 and ‘passive participants’ who wanted a more traditional learning experience and ‘expected to be taught’.
To address Polly’s comment first: the evidence comes from our observations based on the interviews, and as Phil highlights, we drew the comparison between connectivist MOOCs and traditional learners. while we have not studied xMOOCs, I would expect this distinction to be less pronounced, but still there in some form.
Finally, on to Charles' comment. Charles states: 'This phrase suggests that the so-called "passive participants" didn't set their own goals when it's just as likely that their goals were simply different.' But that's exactly our point – the active participants were setting goals, but the passive participants expected someone else to set their goals for them. Our lurker* class includes those who were actively engaging with the course but not the other participants – not dissimilar to the behaviour Charles describes in his second comment. It is clear that the nomenclature we used isn't perfect. Hopefully the full papers emerging from this study will be clearer than a second-hand blog post.
So, sorry for the long comment, but I hope I have managed to clarify some issues and questions. I agree with Debbie that learner motivations are important. I'm just about to start analysing the data we collected on this in preparation for blogging and paper writing in the next month or so.
*in this respect, our definition of lurkers is perhaps more broad than the one you arrive at. We defined lurkers as follows: ‘These participants were actively following the course but did not actively engage with other learners within it. These participants were by no means disengaged with the course, or unhappy with their position. Instead, lurking was an active choice for them’
Kevin Kelly (@KevinKelly0) says
Hi Phil and all,
Following threads in this conversation, I suggest adding another archetype: "focused participants" ("hyperfocused participants" might be a more fun name).
As we're seeing MOOCs used formally as sources of content for flipped classes in higher education (and beyond), I suspect that some students have used MOOCs informally to find content that helps them meet course goals elsewhere. Those students wouldn't need the material from the entire course, nor would they intend to complete it, as they have existing courses and course assignments to complete. Even life-long learners often seek discrete information to answer a specific question, and would not feel the need to finish an entire course.
To determine if I'm right, I'd be interested in seeing the data/analytics around consumption of individual learning objects in MOOCs, as well as students' abilities to demonstrate specific (individual) competencies. This might help us understand whether MOOCs are really better at facilitating "long tail learning" – i.e., the students who complete an entire MOOC are represented on the tall part of the long-tail curve, while students who only want or need a particular portion of the MOOC stretch out across the long part of the curve.
“Focused participants” would have Venn diagram overlap with active participants, who engage with the learning community, and passive participants, who keep their learning to themselves.
Thanks for the post and discussion!
Phil Hill says
Colin, thanks for the clarifications – I’ve also shared your comments via Twitter and Google+.
Kevin, interesting point. I would include “focused participants” as a form of “Drop-In” based on your description. I agree that this type makes sense, but would love to see some analytics to back it up.
Debbie Morrison says
Hi Colin, your comment helped flesh out the details of your research and archetypes for MOOCs [thanks]. These profiles are helpful in categorizing the types of participants, which will certainly support research with MOOCs and education. I realized from reading your comment that your choice of the term lurker was not in a negative context – however, the term does have a negative connotation: to 'lurk', according to Webster's dictionary, is "to lie in wait in a place of concealment especially for an evil purpose", though I realize a modern definition has been added recently, "to read messages on an Internet discussion forum (as a newsgroup or chat room) without contributing".
However, given the history of the word and its original meaning, 'to lie in wait', the connotations, as I mentioned, are critical. For this reason, perhaps another word to describe this group may be more fitting, as the term 'to lurk' is in contrast to the principles of a MOOC, which are to participate as one chooses; there are no rules for participation.
Some suggestions for other terms might be: “observer”, “spectator”, or “bystander”.
Just some thoughts. 🙂
Jaime Metcher says
Thanks, Phil, for a thought provoking post.
The four archetypes are a valuable description of what I’d call post facto categories – that is, once the collision of student and course has taken place, we examine the by-products and perhaps draw some conclusions. The limitation of these categories, considered in isolation, is that it’s difficult to tease out which of the inputs is doing what. I don’t really care if all the habitual lurkers continue to lurk. However, if my active students are driven to passivity by poor course design, I want alarm bells to ring. Just counting lurkers or drop-outs or whatever doesn’t really do that for me.
So, to me, these archetypes are useful as ways of understanding the outcomes of the interaction between known categories of students and known characteristics of course design. It's this a priori modeling that brings the post facto categorization alive. Dividing students into "active" vs "other" participants based on first-week activities is a step along this path. Ditto for the Change11 profiling and for Debbie's call for understanding motivation. My centre (University of Queensland CIPL) has some models to propose for both course design characteristics and student characteristics at time of intake, which we hope soon to publish at least informally.
The other observation that has been made many times before is that completion is really a pretty poor proxy for impact. I’d expect this to be even more markedly so as the formalities around course entry and exit are minimized, as in MOOCs. While measuring ultimate impact remains hard, we believe we can do better with choosing proxy measures. Again we hope to publish on this shortly.
Charles says
The nomenclature is a problem, so some hasty thoughts. In this particular case, “participant” seems too broad. It seems that part of what is being considered active and passive is goal-setting. Then talk about active and passive goal setters. Another part of the active-passive distinction is the participation in the discussion forums, the “networking.” So terms like rich and poor networkers or connectors might work. As much as possible, the terms should represent the actions participants are engaging in, and also they should represent the theoretical framework of connectivism, in which case “connectors” might work.
Colin Milligan says
Hello Phil, Debbie (and all). The use of the term ‘lurker’ reflects the academic literature, e.g. Rovai (2000) who describes lurkers as “… learners who are bystanders to course discussions, lack commitment to the community, and receive benefits without giving anything back”. This is exactly what we observed, so while I thought about whether to adopt a different term, I eventually concluded that Rovai’s definition described what we were observing. All the alternatives Debbie suggests, observer, spectator and bystander imply (to me) a lack of engagement with the course as well as with the community.
I’ve put some further thoughts on ‘lurking’ at http://worklearn.wordpress.com/2013/03/06/lurkers-lurking-and-labels/ which I hope will be useful.
Rovai, A. P. (2000). Building and sustaining community in asynchronous learning networks. The Internet and Higher Education, 3(4), 285-297.
Phil Hill says
FWIW, I’ve kept the term ‘lurker’ for my second post, as well as usage of ‘participant’. No definition is perfect, but I’m trying to err on the side of simplicity for now. I appreciate all the feedback – very helpful.
Colin – curious to get your take on Dr. Chuck's comment on the graphics post. Essentially he recommends separating 'no-shows' out of 'lurkers'.
Debbie Morrison says
Hi Colin, Thanks for this literature and for the link to your post. After reading Rovai’s paper, there are some differences in the contexts for the lurking behaviour as described – Rovai refers to closed, online courses – courses that are for credit, where participation is graded and required. In these classes, participation is required in order for the course to function. These courses require structured learning activities to bring about learning that meets the objectives for a given course. Rovai speaks specifically to discussion forum participation, as he points out here:
"Admittedly, there is a measure of learning in such situations, but the low level of participation itself is insufficient to provide sustained benefit to onlookers. Additionally, active members of the community mistrust those who do not participate, thereby affecting overall sense of community. To encourage all learners to access and participate in online discussions on a regular basis, learners should understand that course participation is not only a course requirement, but is also a graded component of the course. Members of the learning community should be graded on quantity, quality, and timeliness of their contributions."
Having developed numerous online courses with faculty, I know the challenge of prompting students to participate in discussions; thus the grading is required. All of the points Rovai mentions above are true and applicable to a closed, online, for-credit course with a small number of students (30 or so), in comparison to a MOOC. Some students willingly participate, and some do not, which is frustrating for the professor and students. In this situation it may appear students are lurking [as data from the LMS platform may show students logging on and viewing the discussion forums, yet not actually participating], though in some cases students simply do not log on and read the forums in the first place. An alternative explanation I discovered in speaking with students about their lack of participation within forums is that students [those new to the online environment] are intimidated by the forums, feel that everyone else seems knowledgeable and smart, and believe they don't have much to contribute. Essentially these students don't have the confidence [yet], or the skills, to engage effectively. This is where the grading helps: they have to participate, and an instructor can hopefully draw them out.
My point here is that there are many types of participants in an xMOOC, with different motivations, who may appear passive or might be considered lurkers by your definition, but who contribute in other ways, or who may not contribute within that particular MOOC for whatever reason but will apply the knowledge later. I know two individuals who participated in the Digital Cultures MOOC who may fit your descriptions of passive and/or lurker – they consumed content, but will apply it at a later time. One is a PhD professor and the other works in a non-profit health organization. I am sure there are many more students who fit this prototype.
All that to say, I respect your position and the research you have done, though I do think the terms may be better suited to different names :). There are interesting discussions to be had; as stated, there is much more research to be done given the newness of this format and the tremendous learning being facilitated around the world through it.
Thank you for the opportunity to engage in discussion. Debbie
Rene Kizilcec says
Thanks for the great post, Phil.
You and other readers might be very interested in this paper that I wrote with my two collaborators here at Stanford: http://rene.kizilcec.com/publications/
While the number of enrolled students is a good indicator of interest in the course, it is a skewed measure of the number of course participants. Note that a considerable number of people enroll before the course starts but never enter the course page after it opens. We would not want to count those as participants.
Colin Milligan says
Debbie, thanks for continuing this discussion – your comments have been helpful in identifying that the term 'lurker' is proving problematic. The definition of lurkers was indeed coined for closed courses, and I acknowledge the risk of using it in this different context. Although it has negative connotations – in Change 11, we saw 'active participants' who mistrusted lurkers just as Rovai describes – lurking is a clearly defined behaviour irrespective of this. In our study, most (but not all) lurkers self-identified as such.
As I said in the post on my own blog, lurking is entirely compatible with participation in a cMOOC, but (although I haven’t carried out research in xMOOCs) I can envisage this type of learner might cause problems in an xMOOC whose design relies on peer-grading or (too-) small groups which may not ‘function’ when lurkers don’t connect with other learners.
Debbie Morrison says
Hi Colin, Thanks for your reply and for clarifying. Yes, I see how this type of participant could be disruptive, and it helps me see the potential for this type of student to exist within an xMOOC. It does appear that within an xMOOC, lurking behaviours within peer-grading and small groups are addressed by the structure itself – with peer-grading the system forces students to participate, thus minimizing [if not eliminating] passive and lurking participants: one has to grade at least three other students' artifacts [essay or other] to be eligible to receive one's own grade. It is anonymous, so even if a student did not take the grading seriously, was not engaged, and was only seeking his or her own grade, the system averages the grade based on the number of scores. Though I do see your point now – the lurker could sabotage the process.
From what I've experienced in xMOOCs, groups are self-forming, usually large, and focused on course activities, studying for exams, discussions of readings, etc. How is a lurker identified in this type of group, I wonder? And it would be interesting to determine how he or she is identified by group members within an xMOOC. This would be an interesting area to research.
Our discussion has led me to think of some questions about participant behaviour within a MOOC.
What percentage of participants are lurkers within a MOOC? And of those considered lurkers, what behaviours were associated with this? If someone self-identified as a lurker, what are the reasons for this? These questions would provide interesting answers — I hope further research can be done with xMOOCs.
Perhaps within MOOC environments, organizers could outline a set of participant guidelines that address the types of student participation. For example, including a list of statements such as, "if you have joined this MOOC to expand your knowledge, but don't plan to contribute and participate actively, here are some guidelines… i.e. joining a small group is not recommended unless you plan to contribute…, etc." This may be a proactive way of dealing with lurkers. However, I realize your research is exploratory in nature, focusing on past participation – I am thinking aloud here :).
Again, thank you Colin for the discussion and for your time [and patience] in explaining your research. I have learned much from your perspective and knowledge. Best, Debbie
Phil Hill says
Debbie / Colin – enjoying your dialogue and learning (hmm, I was a lurker, but just became active by this comment?).
Rene – very interesting article. Given your different methodology, there seems to be some overlap here. It seems there are strong similarities in grouping definitions:
Auditing = Passive Participants
On-time = Active Participants
Out = Lurkers or drop-outs
Thoughts?
Phil Hill says
Debbie / Colin – what if we looked at the definitional problem based on known data available (assuming we can't always rely on solid research and self-identification)? A rough sketch of how these definitions might map onto platform data follows the list.
No-Shows: Registered for course but never logged in while course was active
Lurkers: Logged into course, read content or browsed discussions, but did not take any form of assessment beyond pop-up quizzes embedded in videos
Drop-Ins: Logged in, performed some activity (videos, discussions) for a defined set of topics / weeks
Passive Participants: Watched videos, took quizzes, but did not perform remaining assignments
Active Participants: Took part in majority of assignments and all quizzes / assessments
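As a rough illustration only – a sketch under assumed data, not anyone's actual implementation – here is one way such rules might be coded against platform event data. The record fields (logged_in, weeks_active, quizzes_taken, assignments_done, confined_to_subset, all_quizzes_taken) are hypothetical, real platforms log different events, and the ordering of the checks is just one way to resolve the overlap between categories:

```python
# Hypothetical sketch: classifying a student record into the categories above.
# Field names are invented for illustration; real MOOC platforms expose different logs.

def classify(student: dict, total_weeks: int, total_assignments: int) -> str:
    if not student["logged_in"]:
        return "No-Show"            # registered but never entered the live course
    # Drop-Ins: some activity, but confined to a defined subset of topics / weeks.
    if 0 < student["weeks_active"] < total_weeks and student["confined_to_subset"]:
        return "Drop-In"
    # Lurkers: browsed content / discussions, no assessment beyond in-video pop-up quizzes
    # (quizzes_taken and assignments_done here exclude those embedded quizzes).
    if student["quizzes_taken"] == 0 and student["assignments_done"] == 0:
        return "Lurker"
    # Active Participants: majority of assignments plus all quizzes / assessments.
    if (student["assignments_done"] > total_assignments / 2
            and student["all_quizzes_taken"]):
        return "Active Participant"
    # Passive Participants: watched videos, took quizzes, skipped most assignments.
    return "Passive Participant"
```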
Debbie Morrison says
Hi Phil, I like this! BUT, I suggest changing the term lurkers (surprise, surprise 🙂 ) to 'observer'. Observer is a non-biased term. Lurker seems to have stemmed from other participants feeling uncomfortable with non-participating group members, based on what Colin mentioned in his posts.
Perhaps there is value in having an additional category for disruptive participants – those who appear to be sabotaging the learning process, identifiable through the types of comments they post on MOOC discussion boards, or through grading or contributing work that does not follow guidelines in order to receive personal benefit. As mentioned, with these classifications I suggest we classify participants in terms of behaviours that are observable through platform analytics or through posted content.
I also suggest these categories be flexible, in that there may be additional ones to add as we learn more about the motivations of participants.
Thanks Phil for starting this great discussion – and Colin for contributing. 🙂
Debbie 🙂
MichaelB says
A couple of things I've encountered in my MOOC experiences, both open-source and within my organization, that I would be interested to see accounted for in additional research:
1) Technological issues affect participation greatly. I nearly dropped a MOOC because videos wouldn't load on my desktop and replayed slowly on my tablet. I explored the course for content that did work, but that wasn't hand-picking content, just looking for something that worked. Frustration with data inputs in a math course didn't help either. Same with time expectations: did the course take students the time they'd been told to expect?
2) Pacing for its own sake. I am a perfect 8-for-8 on self-paced courses and 1-for-3 on paced courses. The paced courses set arbitrary dates for homework, etc., won't let you move ahead, and build no new habit. (I understand that they may literally be making the materials one week at a time, but not always.) Some weeks, I have tons of time. Others, I just don't. If it's intended to be a computer-graded course, let me start when I want, press on when I want, etc.
3) Class size as known to enrollees. One class told us from the start that there were 40,000 students. Another made it seem like the prof wanted more to join, as though there weren't that many. Guess which one I jumped into the forums on? If possible, account for what students are told about the class… expected hours, size, etc.
4) Live video… if I see the prof on screen, I zone out less. If it's just PowerPoint, well, I can read a book at my own pace, so I find less value/reason to stay.
5) I'd be interested in seeing enrollment and participation rates for similar courses based on school… see whether a HarvardX course gets more hits than State U. There is a curiosity factor around the brand-name schools, and I suspect interesting results here.
6) Consider whether there are adjustable end goals. Did they say: we know you have different goals – some just want to learn the basics, some want to understand, some are trying to build a marketable skill set – and we've personalized this for you? Someone who wants to just get a taste can have 30% be a passing score. If they want comprehension, it's 70%. Besides, you allegedly are offering a top-flight college course to folks who might not be high school grads. Telling them they have to make a 65% just for a certificate is a good way to scare off the folks that MOOCs claim to try and reach.
Dangergirl hope (@DangergirlHope) says
You might consider the possibility that people who were previously watching television or playing online games, or otherwise engaging their brains in pre-work relaxation, where one desires to engage the mind in a relaxing but worthwhile way, might be switching to MOOCs.
We may well find that engagement will intensify as MOOCs become better known and become a regular alternative "spare time" activity. I could see MOOCs becoming more popular than television.
Consider that an online appearance of passivity should not be taken as a measure of actual real-life engagement and improvement – which may in fact be an upward cycle of personal growth.
The engagement of worldwide students together in each class will also have benefits for the world: tolerance, knowledge and cultural exchange, understanding, working together for common goals, and more.