A new LMS market analysis whose leader list includes a company that is retiring its LMS and omits the company with a 5x lead in new implementations in its core market? Sign me up.
Over the next week or two, we plan several posts at e-Literate based on the various LMS users conferences we recently attended. We have already covered Sakai and Blackboard, and we will have commentary on Moodle, Schoology, D2L, and Canvas soon. But before we do so, I wanted to call out the recent analysis by MarketsandMarkets, which requires some commentary, as it is generating media coverage that could misinform those trying to understand the LMS market. The headline of their recent market report:
Learning Management System (LMS) Market Worth 15.72 Billion USD by 2021
According to a new market research report, “Learning Management System Market by Application, Delivery Mode (Distance Learning and Instructor-Led Training), Deployment (On-Premises and Cloud), User Type (Academic and Corporate), Vertical, and Region – Global Forecast to 2021”, published by MarketsandMarkets, the LMS market size is expected to grow from USD 5.22 Billion in 2016 to USD 15.72 Billion by 2021, at a CAGR of 24.7%.
This analysis combines the academic LMS market and the corporate training markets. Later in the press release (and also included in media coverage):
Blackboard, Inc. (Washington, U.S.), Cornerstone OnDemand, Inc. (California, U.S.), Xerox Corporation (Connecticut, U.S.), IBM Corporation (New York, U.S.), NetDimensions Ltd. (Hong Kong), SAP SE (Walldorf, Germany), Saba Software (California, U.S.), McGraw-Hill Companies (New York, U.S.), Pearson PLC (London, U.K.), and D2L Corporation (Ontario, Canada) are identified as leaders in the LMS market.
If you’re looking at the future market size, it’s vital to understand the market dynamics involved. For the academic LMS market, this list is simply wrong.
- In early February we broke the story at e-Literate: “LearningStudio and OpenClass End-Of-Life: Pearson is getting out of LMS market”. We interviewed Pearson executive Curtiss Barnes, who confirmed that Pearson was getting out of this business and described why they were making this move. Just two days ago, Pearson further described the financial “impact of retiring LearningStudio” in their quarterly financial conference call.
https://twitter.com/RadHertz/status/759199936454025218
- Furthermore, the MarketsandMarkets report lists Blackboard and D2L as market leaders yet ignores Instructure and its Canvas LMS. To ignore Canvas is to misunderstand the dynamics of the academic LMS market. Within its highest-revenue core market, Canvas dominates new implementations (cases where an institution switches LMS solutions). Based on our latest LMS market analysis, for the half year January through June 2016, Canvas accounted for 77% of all new implementations of primary LMS solutions in US and Canadian higher education.
- At the recent ISTE Conference in Denver, the largest conference for K-12 edtech in the US, there was a near-unanimous view among the people I asked about the dominant LMS vendors: Instructure’s Canvas and Schoology. Google Classroom generates a lot of interest, as does Microsoft’s cleverly-named Classroom, but for paid district or school adoption (remember, we’re talking about revenue-based market analysis), it’s Canvas and Schoology. D2L has a few very significant wins, such as Florida Virtual Schools and the state of New Mexico, and ItsLearning and CypherLearning (NEO) have interesting products as well. I would argue that if you’re doing a five-year projection, PowerSchool should be in the mix based on its acquisition of Haiku Learning and introduction of its “Learning” product. Out of these, only D2L is mentioned in the report.
Treat this post as our nudge for MarketsandMarkets to get the basics right when analyzing a market and forecasting where it will be in 3-5 years.
Update: Bumping Michael’s comment into the post:
If anything, Phil understates the case here. Lumping higher ed, K12, and corporate LMSs into the same category is a little bit like lumping railroad cars together with automobiles because they are both called cars, have wheels, and carry things and/or people from one place to another. On top of that, nobody has decent data on the size of the global market, never mind the growth of it. MarketsandMarkets’ “analysis” effectively gives us made-up numbers about a mythical automobile/train car market.
Michael Feldstein says
If anything, Phil understates the case here. Lumping higher ed, K12, and corporate LMSs into the same category is a little bit like lumping railroad cars together with automobiles because they are both called cars, have wheels, and carry things and/or people from one place to another. On top of that, nobody has decent data on the size of the global market, never mind the growth of it. MarketsandMarkets’ “analysis” effectively gives us made-up numbers about a mythical automobile/train car market.
John Fritz says
Phil: what do you make of the CHE’s recent article, “Which Ed-Tech Tools Truly Work? New Project Aims to Tell Why No One Seems Eager to Find Out,” by Goldie Blumenstyk at http://chronicle.com/article/Which-Ed-Tech-Tools-Truly/237196
Is higher ed ceding control of the “what works” question by relying on vendor market share as a proxy for effectiveness? I think a larger question is what works in teaching and learning generally, with or without technology. Teasing out answers to both questions is challenging and complicated, particularly if we want solutions that scale. But has market share become the sine qua non of our method of assessing ed tech?
John Fritz
Asst. VP, Instructional Technology
University of Maryland, Baltimore County (UMBC)
Phil Hill says
John: that was an interesting article and an initiative that Michael and I are following (and might even support). As you allude to in your comment about ceding control, however, there is a risk that even if the project succeeds in producing better research and reporting, the edtech community might continue to rely on proxy data such as market share.
There is a counter-argument if you watch our recent interviews with Oregon State University. See second video in this EdSurge piece: https://www.edsurge.com/news/2016-07-19-what-do-academics-really-think-of-adaptive-learning
The biggest risk the educators listed, in terms of this category of edtech being used effectively, is poor research and reporting. That anecdote at least suggests that research matters to practitioners and that effectiveness is not judged by proxy alone.
Interested in your thoughts on that video. And be forewarned that this might turn into a blog post.
John Fritz says
Hey Phil: forewarning received and welcomed. This is important, and I’ll be interested in e-Literate’s take on the CHE article generally, or maybe Jefferson Education at the University of Virginia specifically.
Thanks also for sharing the EdSurge piece and video. A few thoughts on the 2nd one in particular:
1. Kudos to OSU for their intentionality. Same for the vendors.
2. I’d love to have seen a 3rd video interviewing the vendors about what they were hearing from higher ed that informed the products they developed and showcased at OSU. No need to name names, but what are the themes they’ve come across? On the other hand, if they can’t actually name names, then that’s probably an indictment that they aren’t seeking or listening to feedback from current or potential clients.
3. I love the call for “partnership” between higher ed and vendors echoed in the 2nd half of the video. Couldn’t agree more, which is one reason we’ve stayed with Bb as our LMS. Despite their own issues with turnover in leadership and market share, there are some good folks with whom we’ve worked well, yes, to solve problems (sometimes self-inflicted by them and us), but also to create new learning opportunities, particularly in analytics (see doit.umbc.edu/analytics). That may not be trendy, but it’s worked out pretty well for us. We’ve not gone through the many LMS review, selection, and implementation cycles typical at other schools, but I think that’s allowed us to perhaps go a little deeper with the tool itself. But as the saying goes, “your mileage may vary.” 😉
4. It’s interesting that some of the folks you interviewed specifically called out the vendors for a lack of evidence. But how would the partnership they desire actually approach this? If I read Goldie Blumenstyk’s article correctly, don’t vendors and institutions both have some responsibility for assessment of Ed-Tech? We might lament or conjecture why the vendors don’t do it, but why don’t we in higher ed do so, too? The 2014 ECAR Study of Faculty and IT concluded there were three primary motives for why faculty use IT in teaching & learning: a) clear benefit to students, b) release time to do it well, and c) confidence the technology can work. I think it takes a village to do the first; provosts, deans, and chairs for the second; and IT support for the third.
5. Several speakers wanted to pick and choose elements of vendor solutions, but I’d say the same might be said for a research methodology to evaluate the resulting “platform,” as one person called it. On the faculty development side, there is a growing body of work broadly described as the Scholarship of Teaching and Learning (SoTL) that some rightly attribute to Ernest Boyer. There are excellent SoTL articles, journals, and books, but I just don’t see a similar emerging practice or even theory for evaluating the Ed-Tech tools that support teaching & learning. To be honest, I think we could do worse than to simply appropriate and adapt SoTL theory and practice. But it takes some practice. There are some decent links for more info on Wikipedia: https://en.wikipedia.org/wiki/Scholarship_of_Teaching_and_Learning.
Not sure if this warrants more discussion, but I do appreciate what you and Michael have been doing, particularly on the personalized or adaptive learning fronts. One key issue I see is that the culture of teaching & learning and the culture of IT are fundamentally different when it comes to time. Faculty rarely change, let alone assess, their teaching or course design in less than 3-4 years, but a vendor or even a campus IT support unit might go through two iterations of one product in that time frame. If so, how do we resolve this tension and pursue the partnership that results in sustainable continuous improvement?
Back at you,
John
Phil Hill says
John – thanks for additional thoughts. Need to ponder.
Michael Feldstein says
And by “might even support,” Phil means “are already participating in.” I am on one of the working groups for that project.
John Fritz says
Glad to hear it, Michael, and also not surprised. I hope the JEUVA project has methodological “legs” for evaluating ed tech and you’ll share what you learn. Best, John