An edited version of this post was first published at The Chronicle of Higher Education
Let’s admit it: there can be real tension when a college is faced with choosing a new learning-management system, or any software used by more than one department.
Since the decision involves the administrators who will support the system — commonly called an LMS — and professors who will use it, who should lead the process? Should staff members just get input from faculty members, or should professors vote on the final decision? Or should professors run the process?
This is when distrust between the two sides can emerge. IT administrators may fear that professors will make the pace unbearably slow with an overly deliberative approach. And professors may assume that IT has already determined what system to buy, and may not take their input seriously. Or they may worry that the meetings will be dominated by those needy power users from the psychology department.
In one sense, the LMS has been a huge success. In just a few years starting in the late 1990s, adoption of these systems by colleges went from zero to some 90 percent of all American institutions.
Actual adoption by professors, however, has been a different, slower story. Just over half of students reported using an LMS in most or all of their courses as recently as two years ago, two decades after the creation of modern learning-management software. In short, professors aren’t as sold on using an LMS as administrators are.
For the selection of software used by professors, evaluations and decisions should be anchored by an understanding of the problems to be solved and not just the solutions. A lot of the tension in decision making on technology in higher education comes from jumping too quickly into discussions of specific features. Michael Feldstein described the nature of a better process that focuses on needs rather than features:
“Higher education needs to get better at academic needs assessment. That requires an entirely different and deeper set of questions than which features are important to put on a checklist. It requires an in-depth exploration of how teaching and learning happens in various corners of the campus community and what capabilities would be most helpful to support those efforts.”
To understand academic needs, it helps to gain a better understanding of the people making the decisions on whether and how to use an LMS. For a majority of institutions, this means professors. They are seen as the primary users, more so even than students. But one mistake to avoid is lumping all faculty members into an amorphous mass that can be measured with simple adoption metrics: How many professors use these features? How many faculty members are satisfied with the system?
In tech circles there is a popular notion of a “technology-adoption curve,” first proposed by Everett Rogers and extended by Geoffrey Moore. The curve sorts adopters over time into categories: innovators, early adopters, early majority, late majority, and laggards. (I prefer to use “holdouts” instead of “laggards” to remove the implicit assumption that new technology should be adopted in all cases.)
Moore described the enormous difference between the innovators and early adopters on one side and the holdouts on the other. Early adopters tend to be risk takers with the technology in question, willing to experiment and willing to fill in the gaps of cool ideas that are not complete solutions. The early majority tend to be pragmatic and risk-averse, wanting complete and proven solutions. This is the “chasm” that traps many technologies and prevents them from being widely used.
Mr. Moore’s prescription was “Crossing the Chasm”: as a market matures, technology providers should pick which side of the chasm to serve, with the riches available in the mainstream, majority case.
We don’t have it that easy in education, and we should think of adoption somewhat differently than a consumer tool like LinkedIn. Ed tech should not be a market to be conquered but rather a continuous process of improving student learning and meeting institutional goals. Faculty members are not just end users to be converted and trained. We will always have a subset of professors who are ed-tech enthusiasts and often drive the exploration of different innovations, and we will always have a larger subset of faculty members who may or may not be interested in technology in the classroom and don’t have the time or inclination to be proactive in figuring it out.
Whether a technology such as an LMS should be used, or used more deeply, depends on the teaching and learning context: discipline, lower or upper division, the type of students enrolled, personal experience of the instructor, etc. And when you look at the broad range of faculty members to be supported, the difficult reality is more one of “Straddling the Chasm” than “Crossing the Chasm.”
Rather than going to faculty members to create monochromatic lists of desired features and attributes, a stronger process is to acknowledge variation in professors and rely on them in different roles as part of the technology-selection process.
You might have one approach for dealing with the ed-tech enthusiasts. In most cases, a system seems easy to use once you know how to use it. Therefore, a system with which you are intimately familiar will probably look easier to use than one with which you are unfamiliar. That is not a good test of usability. These professors are quite good at pointing out what a system can and cannot do, however.
The best way to use your ed-tech enthusiasts is to have them sit down with well-informed and passionate teachers who use and advocate for other platforms. These peer-to-peer conversations will help them develop perspective on the guts of the platform alternatives that will be very valuable to you. It may also help them come to terms with the inevitable grieving process they will experience at the prospect of giving up the system they have invested so much in mastering. Unfortunately, the migration process is probably harder on your ed-tech enthusiast faculty member than on anyone else, even including the support staff.
Outside of specialized programs or colleges using competency-based education, it is likely that many of your ed-tech enthusiasts are using or will want to use many tools in addition to the LMS. It is also likely that many will do so whether or not their use is officially supported. Support staff members should consider how to support this type of “unofficial” adoption.
Then there are your mainstream professors. Somebody who has taught with more than one LMS could be a good judge of usability: Faculty members who have taught with two or three (or more) systems generally have some sense of what differences between platforms really matter and what differences don’t in a practical sense. If you have such faculty members on your campus, then you really need their input.
Somebody who has never taught with any LMS but would be open to doing so in the future could be a good judge of usability: You don’t want somebody who still can’t do an email attachment, but you do want somebody who is not a technology fetishist and has no preconceptions. Talk through with this person a small handful of tasks or activities that she might want to try in her first or second attempt to web-enhance her class. Then ask her to look at the candidate platforms just from the perspective of learning how to do those particular tasks. You’ll learn a lot about how easy each platform will make it to expand your faculty commitment.
Understanding the needs of faculty members before jumping into features and solutions is an important way to improve the process, and doing so requires an understanding of different types of professors.
Fred M Beshears says
Phil,
Thanks for the analysis of LMS user types.
Back in 2001, when UC Berkeley was considering an enterprise-wide LMS, we had a campus-wide committee for instructional technology. Its acronym was a real mouthful – the CCCPB-IT committee, which stood for the Chancellor’s Computing & Communications Policy Board – Instructional Technology.
It was co-chaired by Jack McCredie (CIO and head of Information Systems and Technology) and Professor Alice Agogino (Mechanical Engineering). It was composed of faculty members, including one from the School of Education, the head of the Library, the Registrar, and various support staff units. My unit, the Instructional Technology Program (ITP), staffed this committee.
As support staff to this committee, I drafted an LMS Evaluation Framework back in 2001 to help the committee evaluate different enterprise-wide LMSs. The two main options at the time were Blackboard and WebCT.
Also at this time, ITP was supporting two low-end LMSs (one from Bb, the other from WebCT), and we had purchased and evaluated a number of other low-end systems on the market at that time (including one from a company called MadDuck!).
In any event, Berkeley initially decided to join Sakai in an effort to collaboratively develop an open source LMS. Recently, however, the campus decided to drop out of Sakai and buy Canvas.
In any event, one part of my LMS evaluation framework included an analysis of different types of users, which included some of the ideas from your piece. For example, it discusses early adopter faculty, and the extent to which we should weight their needs relative to those of other, more conservative faculty.
It also covers issues not included in your analysis. For example, it distinguishes between faculty from well-funded departments and those from less well-funded departments (i.e., those who are more dependent on central campus services).
Here are the major criteria it lists for evaluating an LMS, and the vendor selling the LMS:
Known Requirements
Ability of the package to meet the university’s current academic and administrative requirements, and future requirements that are currently known to exist.
Unknown Future Requirements
Ability to modify the package to meet the university’s new requirements as they become known.
Implementability
Ability to implement the package easily.
Supportability
Ability of the vendor to support both the package and the University in the future.
Cost
Total cost to purchase and implement the package, as well as ongoing maintenance and support costs.
If anyone’s interested in a framework from way, way back, I have it up on my blog.
Learning Management System Evaluation Framework
http://innovationmemes.blogspot.com/2012/11/lms-evaluation-framework.html
Cheers,
Fred
Phil Hill says
Fred, Interesting framework. Thanks for sharing.