Part 2 of 3: SO WHAT?
Overview
In Part 1 of this series, I compared seven online course design rubrics that are used by multiple institutions to improve the quality, accessibility, and consistency of individual courses. The institutions do this with an eye toward offering online degree programs, credentials, and certificates. Rubric comparisons are all well and good, but why is this an important topic now?
Demand > Persistence > Success
First, it’s about the numbers. By now, most people watching distance education have heard the statistics—while overall college enrollment is static or declining, enrollment in online courses and programs is growing dramatically. In Fall 2015, 5.95 million students—almost a third (29.8%) of higher education students in the United States—enrolled in at least one distance education course (NCES, 2018).
Similarly, enrollment in online courses at a CCC district I work with doubled in five years, from roughly 9% to 18%. However, that growth is tempered by the district’s rates of student retention (roughly 75% of online students complete a course) and student success (roughly 67% pass a course). Until the district can improve those rates (e.g., by improving course quality and supporting online learners), it is reluctant to add more online course offerings.
In the February 12 article referenced in Part 1, Phil Hill noted that the improvement in online student success rates throughout the CCC system can be only partially attributed to the Online Education Initiative. Looking at that data further, success rates increased by 13% over a ten-year span. With only two in three students passing online courses, though, there is room for more improvement. (The fact that face-to-face success rates did not change over those ten years is a topic for another blog post.)
Student Success = $
It’s also (or soon going to be) about the money. As more students choose online courses and higher education funding models begin to stress successful completion as much as or more than enrollment, institutions cannot just leave it up to chance that online learners will persist and succeed on their own. If one in five course enrollments is online, and one in three students will take online courses, and the numbers keep growing, the rubrics and related efforts will play an even larger role.
Moreover, while the rubrics operate at the course level, successful completion of online courses contributes to completion of degrees. The CSU system’s Graduation Initiative 2025, the CCC system’s Vision for Success, and the SUNY Completion agenda all point toward system-wide efforts to increase degree completion and eliminate equity and achievement gaps. It should be no surprise, then, that all three of these systems, which are among the largest higher education systems in the country, have launched online course quality projects featuring rubrics.
Course design rubrics and related professional development are becoming as important to the institutions themselves as to the students they serve, and the large-scale initiatives that support them all still have work to do. The Public Policy Institute of California summed up the situation fairly well:
Our research suggests that a more data-driven, integrated, and systematic approach is needed to improve online learning. It is critical to move away from the isolated, faculty-driven model toward a more systematic approach that supports faculty with course development and course delivery. A systematic approach better ensures quality by creating teams of experts with a range of skills that a single instructor is unlikely to have completely. (Johnson, Cuellar Mejia, & Cook, 2015, p. 3)
Multiple achievement gaps exist
As colleges and universities address rapidly increasing distance education enrollments, they must also address two achievement gaps that often appear in the research:
- Overall, online learners have lower retention and success rates than learners in face-to-face courses (Xu & Jaggars, 2014).
- Achievement gaps are larger for some subpopulations of online learners, e.g., students who are male, who are academically underprepared, or who belong to specific ethnic groups (Jaggars, 2014).
Evidence of scale
This comparison has become important for another reason: scale. Quality Matters and Blackboard serve international networks of systems and individual institutions with their rubrics, related professional development, and awards. Meanwhile, the California Community College (CCC) system, the California State University (CSU) system, the State University of New York (SUNY) system, and the Illinois Online Network (ION) have all led far-reaching efforts with their own internally created rubrics and professional development. Within the CCC system alone, the Online Education Initiative’s Consortium serves 56 campuses¹, all of which have committed to reviewing and redesigning, with the rubric, 20% of their online course offerings over two years.
Evidence of impact (and the need for much more)
A number of the rubric providers have made an effort to evaluate their respective rubrics’ impact. Quality Matters currently publishes “What We’re Learning” reports that synthesize the research on the impact of its rubric (e.g., Shattuck, 2015). Quality Matters also shares the largest number of studies focused on its impact, through the Quality Matters Research Library (searchable by standard or keyword) and a set of Curated Resources. Of the 25 curated studies, four appear to address the highest level of Kirkpatrick’s Four Levels of Evaluation; that is, they examine end results, or to what extent redesigning a course based on the rubric affects whether students complete and/or pass it. An equal number investigate Kirkpatrick’s third level, changes in faculty behavior as a result of training and exposure to the rubric. The largest subset of the curated studies focuses on learner or teacher perceptions, motivation, and satisfaction, but I have not reviewed the entire library!
In its 2018 grant proposal, the Foothill De Anza CCD team stated that “the OEI courses that are aligned to the [OEI Course Design] rubric, checked for accessibility and fully resourced have an average student success rate of 67.4%, which is 4.9 percentage points higher than the statewide average online success rate of 62.5%” (Nguyen, as cited in Foothill De Anza CCD, 2018, p. 4). Anecdotally, interviews with executive CCC stakeholders indicate that some online faculty also improved the quality of their face-to-face courses based on what they learned from the rubric.
That said, the evaluation of OEI’s impact on student success is just a start. While that evaluation showed that rubric-reviewed and redesigned courses had better success rates than other online courses, it did not identify a) which learners performed well or b) which aspects of course design and/or facilitation helped the specific subgroups of students who typically persist and succeed at lower rates in online courses.
Overall, however, we still know very little about the impact of these rubrics. With access to learner analytics capabilities in online learning environments, institutions, programs, and individual instructors should be able to track online course activity and results in real time. In a recent email exchange with a colleague who works with big data, I proposed tags for learner analytics data within a learning management system. Here are three sets of tags, corresponding to the three rubric comparison categories with the most criteria: Instructional Design & Course Materials (Content), Collaboration and Interaction (Interactivity), and Assessment. (A rough sketch of how these tags might be recorded follows the list.)
- find/create/share/review/reflect on content
  - content + access: student downloads files, accesses online media via links, or visits an LMS content page
  - content + share: student shares a new, unique, external resource related to course topics that he or she has found or created, e.g., via a discussion post or a shared page (Google Doc)
  - content + review: student plays media; e.g., while you cannot guarantee the student is actually watching a video, you can tell that it has played for X minutes and/or that it has been played Y times
  - content + engage: student has used digital tools to highlight, annotate, or leave questions about text or media
- engage in an individual activity or interactivity
  - activity + engage: student starts an individual learning activity
  - activity + complete: student completes an individual learning activity, such as a simulation
  - activity + share: student shares a new, unique, external learning activity related to course topics
  - interactivity + initiate: a) student initiates contact with other students, e.g., sends a message to a group or posts a new discussion thread in a group space (or in a general forum for the entire class); and/or b) student creates an activity or environment for working with other students, e.g., creates a Facebook group or a group homepage in Canvas
  - interactivity + support: student helps a peer who has identified a personal obstacle or challenge
  - interactivity + contribute: student completes a task as part of a whole-class activity, group activity, or group project, e.g., replies to a peer in a discussion or submits a file in a group area for others to review
  - interactivity + summarize: student creates a summary of a group discussion, virtual meeting, or project
- complete assessment (or self-assessment) activities
  - assessment + self: student completes a prescribed self-assessment activity (e.g., a practice quiz)
  - assessment + complete: student completes a low-stakes assessment (e.g., a quiz) or a high-stakes assessment (e.g., an essay)
  - assessment + peer: student provides feedback to another student, e.g., via a Turnitin PeerMark assignment
  - assessment + course: student submits feedback about the course, e.g., completes a mid-semester evaluation survey or a student evaluation of teaching effectiveness survey, or posts feedback for the instructor in a general forum
  - assessment + reflect: student submits a reflection in an assessment context, e.g., posts a reflection with an ePortfolio artifact
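To make the idea concrete, here is a minimal sketch in Python of how those category + action tags might be attached to raw LMS event records and rolled up per student. The event names, field names, and the mapping below are illustrative assumptions for this sketch, not part of any actual LMS API or of the proposal itself.

```python
from collections import Counter
from typing import Dict, List, Tuple

# Hypothetical mapping from raw LMS event types to the proposed
# (category, action) tags. Event names are illustrative only; a real
# implementation would map the LMS's actual event stream.
EVENT_TAGS: Dict[str, Tuple[str, str]] = {
    "file_download":         ("content", "access"),
    "page_view":             ("content", "access"),
    "media_play":            ("content", "review"),
    "annotation_created":    ("content", "engage"),
    "resource_posted":       ("content", "share"),
    "activity_started":      ("activity", "engage"),
    "activity_submitted":    ("activity", "complete"),
    "discussion_thread_new": ("interactivity", "initiate"),
    "discussion_reply":      ("interactivity", "contribute"),
    "practice_quiz_done":    ("assessment", "self"),
    "quiz_submitted":        ("assessment", "complete"),
    "peer_review_done":      ("assessment", "peer"),
    "course_survey_done":    ("assessment", "course"),
    "portfolio_reflection":  ("assessment", "reflect"),
}


def tag_events(events: List[Dict]) -> Dict[str, Counter]:
    """Roll up raw LMS events into per-student counts of (category, action) tags.

    Each event is assumed to be a dict with at least 'student_id' and
    'event_type' keys; unrecognized event types are skipped.
    """
    per_student: Dict[str, Counter] = {}
    for event in events:
        tag = EVENT_TAGS.get(event["event_type"])
        if tag is None:
            continue  # event type not covered by the proposed tag set
        per_student.setdefault(event["student_id"], Counter())[tag] += 1
    return per_student


# Example: three events for one student, producing two distinct tags.
sample = [
    {"student_id": "s1", "event_type": "media_play"},
    {"student_id": "s1", "event_type": "media_play"},
    {"student_id": "s1", "event_type": "discussion_reply"},
]
print(tag_events(sample))
# {'s1': Counter({('content', 'review'): 2, ('interactivity', 'contribute'): 1})}
```

Once events are tagged this way, per-student tag counts could be joined with course completion and success data to ask which patterns of content, interactivity, and assessment activity are associated with persistence for particular student subpopulations.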
Of course, using LMS analytics data is not the only way to evaluate the effectiveness of rubric-guided course redesign (and universal design) efforts, but it is an avenue that holds promise, especially if we can determine what leads to success for individual students and for different student subpopulations. In my own online class, I emailed a student who earned an A the semester after failing the course to congratulate him on his efforts. He told me that one of the biggest factors in his success the second time around was how I had redesigned the instructions for everything; it had made everything so much clearer to him. It turns out that between the two semesters I had redone ALL of my content review prompts and my discussion and assignment instructions after learning about the Transparent Assignment Template by Mary-Ann Winkelmes of the University of Nevada, Las Vegas. It made me wonder: how many times do those types of changes make an impact without the instructor knowing? It’s time we started finding out.
In Part 3 of this three-part series, NOW WHAT?, I point out some new or upcoming research, as well as call for much, much more evidence about the impact of these rubrics and their individual criteria a) at the highest levels of Kirkpatrick’s model, b) over time, and c) on specific populations of students.
¹ Disclosure: OEI is a client of MindWires.