Over the past few years, the team at e-Literate has reported on online course quality and common challenges online learners face. In late 2016, Phil Hill described and shared “explainer videos” that outlined how system-wide online course exchanges and their shared social infrastructure can help increase access and improve quality. Almost a year ago, Phil conducted an informal, internal review of two online courses at Rio Salado College, in which he assessed online course quality in light of the Making Digital Learning Work report by Arizona State University and the Boston Consulting Group. In a more recent (February 12) article, Phil discussed online education challenges and potential solutions—specifically, that 1) community college students face a number of challenges in online course environments and 2) institutions can address some of those challenges through online course design efforts at scale.
Currently, the primary method to scale online course quality is through the use of rubrics that inform online course (re)design. Understanding the rubric field is critical for educational institutions that a) want to offer quality online programs, credentials, and/or certificates, and b) want to increase online student success rates. To support these institutions—especially those that do not yet use a rubric at scale—as well as the rubric providers themselves, I conducted a review of the most widely used online course design rubrics, which I have turned into a three-part series:
- Part 1: WHAT? A comparison of the seven most widely used online course design rubrics, along with their collective strengths and limitations
- Part 2: SO WHAT? A discussion of why using these rubrics has become so important, and some early evidence of impact
- Part 3: NOW WHAT? Recommendations for what the rubric providers and adopters should do next to increase online student success further
Part 1 of 3: WHAT?
Overview
My investigation of online course design rubrics began with exposure to two rubrics at the same time:
- To make it possible for all California State University (CSU) students to take my own online course, a colleague and I began reviewing it using the rubric required by the CSU system’s Quality Learning & Teaching (QLT) Program.
- When I began working with a California Community College (CCC) district’s distance education committee, we designed faculty training based on the CCC Online Education Initiative’s Course Design Standards.
I also knew about Quality Matters and wanted to explore the full range of rubrics used to support quality at scale. Curiosity did not kill any cats, but it led to search results listing several online course design rubrics*, each of which is used by multiple institutions (see Table 1).
TABLE 1. Online course design rubrics used by multiple institutions
Rubric | Last Revised | Rubric license | Rubric provider | Rubric provider type |
---|---|---|---|---|
Online Education Initiative – Course Design Rubric (OEI CDR) | 2018 | CC BY | CCC California Virtual Campus-Online Education Initiative | Higher ed institution |
Blackboard Exemplary Course Program Rubric (Bb ECPR) | 2017 | CC BY-NC-SA | Blackboard | Commercial |
Open SUNY Course Quality Review Rubric (SUNY OSCQR) | 2016 | CC BY | Open SUNY (State University of New York) | Higher ed institution |
California State University – Quality Learning & Teaching Rubric (CSU QLT) | 2016 | CC BY-NC-SA | California State University system | Higher ed institution |
Quality Matters – Higher Ed Course Design Rubric (QM HE CDR) | 2018 | Subscription fee | Quality Matters | Non-profit org |
Illinois Online Network – Quality Online Course Initiative Rubric (ION – QOCI) | 2018 | CC BY-NC-SA | Illinois Online Network | Higher ed institution |
UW-La Crosse Online Course Evaluation Guidelines (UWL OCE) | 2014 | License not stated | University of Wisconsin-La Crosse | Higher ed institution |
* NOTE: New Mexico State University also created a rubric that other schools referenced at one time, but the university has since switched to Quality Matters.
Speed of change
While some of these online course design rubrics date back twenty years, their promotion and use at different levels—institution, district, and system—has really ramped up in the last five years. To emphasize this point, consider the following: Baldwin, Ching, and Hsu (2018) compared six of the seven rubrics listed in Table 1 above. Five of those six rubrics have been updated since those authors completed their study in July 2017. [NOTE: Ironically, the rubric comparison study did not appear in any of my search results. I only found it via a link from the OSCQR Resources.]
Depending on what milestone you choose as a starting point, people have been completing online courses for thirty years or so. For those of you wondering why we have not figured out online course quality by now, keep in mind that face-to-face courses in higher education began over 1,000 years ago and to date do not undergo as much evaluation.
Influences
Most of the seven rubric providers cite the Rubric for Online Instruction (created by California State University, Chico) and some credit the Quality Matters framework as influences used to create their own rubrics. Some also cite Universal Design for Learning principles, the Community of Inquiry framework, and the National Survey of Student Engagement. Interestingly, more than one rubric cites as an influence the Seven Principles for Good Practice in Undergraduate Education outlined by Chickering and Gamson (1987), which was published just before the first online courses launched. Almost thirty years later, Crews, Wilkinson, and Neill (2015) applied these seven principles to online courses.
What the rubrics evaluate
Each rubric is broken into broad categories—ranging from three to ten, depending on the rubric. Across all seven rubrics there are fourteen comparison categories in all, one of which I created. For this comparison, I reorganized each rubric to put comparable criteria in the same categories for consistency, and repeated criteria that measure more than one aspect of an online course. Table 2 shows how many criteria each rubric includes in each comparison category. (Bold numbers signify that a rubric has 7 or more criteria in that category; italicized numbers signify that a rubric has 2 or fewer criteria in that category.)
TABLE 2. Number of criteria each rubric includes per comparison category
Rubric Comparison Category | OEI CDR | Bb ECPR | SUNY OSCQR | CSU QLT | QM HE CDR | ION QOCI | UWL OCE |
---|---|---|---|---|---|---|---|
Course Overview and Information | 5 | 8 | 10 | 9 | 9 | 10 | 4 |
Learning Objectives | 4 | 3 | 1 | 1 | 5 | 2 | 2 |
Instructional Design & Course Materials | 4 | 9 | 5 | 7 | 6 | 12 | 10 |
Individual Learning Activities | 1 | 2 | 2 | 0 | 0 | 1 | 0 |
Collaboration and Interaction | 3 | 10 | 4 | 6 | 4 | 13 | 7 |
Facilitation | 1 | 0 | 0 | 7 | 0 | 0 | 12 |
Assessment | 9 | 12 | 9 | 7 | 7 | 23 | 7 |
Learner Support | 3 | 1 | 1 | 5 | 3 | 3 | 2 |
Accessibility, Usability, Universal Design, & Inclusivity | 17 | 9 | 13 | 6 | 6 | 6 | 5 |
Course Summary | 0 | 1 | 0 | 3 | 0 | 0 | 4 |
Course Evaluation | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
Course Technology | 2 | 6 | 4 | 4 | 4 | 7 | 4 |
Web Design or Course Layout | 0 | 0 | 4 | 0 | 0 | 7 | 1 |
Mobile Platform Readiness | 0 | 0 | 0 | 4 | 0 | 0 | 0 |
Quantitatively, this allows us to see what aspects of online teaching each rubric emphasizes, which categories are well covered by every rubric, and which categories are not emphasized enough overall. For example, all seven rubrics have several criteria related to assessment, but only two rubrics—QLT by the CSU system and OCE by UW-La Crosse—address an online instructor’s facilitation strategies. Similarly, all seven rubrics have several criteria related to accessibility, but on the whole they pay little attention to individual learning activities.
To what extent does research support rubric criteria?
Beyond how many criteria each rubric offers in a specific category, though, how well does learning design research and learning science support usage of these rubrics? Since MindWires leads the Empirical Educator Project—an effort to promote broader adoption of evidence-based teaching practices—I went through a number of research articles and research literature reviews. My goal was to identify correlations between a) the aggregate set of rubric criteria and b) online course design factors (or online teaching techniques) that have increased student engagement, motivation, persistence, success, etc. Below is a representative sample of what I found.
- The CCC Chancellor’s Office (2013, p. 23) identified factors that affect student persistence in online courses, which I have ranked by how much influence an instructor has over each (from most to least):
- Increased communication with instructor
- Sense of belonging to learning community
- Student satisfaction with online learning
- Time management skills
- Peer/family support
- A number of research articles (CCCCO, 2013; Crews, Wilkinson, & Neill, 2015; Hart, 2012; Nash, 2009; Orso & Doolittle, 2012; Ragan, n.d.; Savery, 2005) show that the online instructor plays one of the biggest roles in student retention and/or success. Two examples include:
- The CCC Chancellor’s Office (2013) identified faculty-student interaction and increased communication with the instructor as key factors in student success.
- Communication/availability was rated as the top characteristic of an outstanding online teacher, followed by compassion, organization, and feedback (Orso & Doolittle, 2012).
- In addition to interaction/communication with the online instructor, Lister (2014) shared the results of a comprehensive literature review identifying key factors that affect online learner success:
- Course organization and structure
- Content presentation
- Opportunities for collaboration and interaction
- Timely and effective instructor feedback
In Table 3 below, I’ve started the process of aligning research articles with rubric criteria in common rubric categories. While the research supports the concepts, though, not every rubric actually addresses them to the extent that the research dictates. For example, four of the seven rubrics have a criterion related to timely instructor feedback, but two of the four stop short, measuring only whether a course shares expectations for timely feedback. Those rubrics do not measure whether or not faculty actually give timely feedback.
TABLE 3. Research literature supporting rubric criteria
Common rubric categories | Research literature supporting rubric criteria in those categories |
---|---|
Instructional Design & Course Materials | Ragan, Orso & Doolittle, 8 studies cited by Lister |
Interaction & Collaboration | CCCCO, Crews et al., 9 studies cited by Lister |
Assessment | Feedback: CCCCO, Lister, Crews et al., Orso & Doolittle, Hart, & more |
Learner support | CCCCO, Crawley, & more |
Some rubric providers also allude to or directly reference other literature that supports the inclusion of rubric criteria. For example:
- While the OSCQR website shares research supporting OSCQR standards (4 articles), the site does not draw correlations between the articles and specific rubric categories or criteria.
- Quality Matters states that “the QM standards have been examined for consistency with the conclusions of the educational research literature regarding factors that improve student learning and retention rates, as well as activities that increase learning and engagement” (https://www.qualitymatters.org/qa-resources/rubric-standards/higher-ed-rubric).
On the flip side, the research identifies practices that the rubrics do not address much or at all. For example, several studies show increased success when institutions require students to participate in a course-level orientation or module-based introduction to online learning (Cintrón & Lang, 2012; Lorenzi, MacKeogh & Fox, 2004; Lynch, 2001). However, there are very few criteria across the rubrics that address helping students assess and address their online learning readiness.
Limitations and strengths
As far as limitations go, all of the rubrics focus heavily on reviewing courses before they begin—i.e., before the students show up. While all seven rubrics also have criteria related to interaction, those criteria largely address activity setup, clarity of activity instructions, and other factors that can be reviewed without the students being in the course. Only two of the seven rubrics—CSU QLT and UWL OCE—really look at instructor facilitation.
Despite these limitations, the rubrics provide great value. Ultimately, these rubrics represent the current thinking about improving the quality of online courses. Some of the rubrics have evolved over twenty years and will continue to do so. The rubrics’ strengths manifest at different levels—for individual instructors, a rubric acts as a course design guide; for institutions and systems, a rubric creates a common vocabulary, an aspirational worldview, a mechanism for consistency and accountability, and the basis for a social infrastructure that runs parallel to the technological infrastructure.
In Part 2 of this series, SO WHAT?, I will look at why these rubrics are important for improving online course quality at scale, and how well rubric providers have evaluated the effectiveness of their instruments.
03/20/2019 UPDATE – Table 1: Changed Rubric Provider Type for Quality Matters to Non-profit organization.