During yesterday’s K-20 learning platform panel at IMS Global’s Learning Impact Leadership Institute (the panel that replaced the LMS Smackdown of years past), Scott Jaschik started the discussion off by asking “what is the LMS?”. As I have recently complained about our Saturn Vue that replaced a Chrysler Town & Country, the answer I provided was that the LMS is the minivan of education. Everyone has one and needs one, but there’s a certain shame in having one in the driveway.
The Car Committee
It’s popular to gripe about minivans, but in reality they reflect what we (the family set with kids still at home) actually are and what we do. Sure, the minivan encourages us to throw everything in the car and continue our soccer mom lives, but it does offer great seating, storage, and a smooth ride (on boring roads, at least). Likewise, the typical LMS is in actuality still a Course Management System (CMS), which in large part reflects how courses are organized and managed.
We’re done with the boring minivan and have moved on to SUVs, but the SUV has morphed into a minivan with bad gas mileage and poor seating. It feels so nice to call it a different name, but it’s still a CMS minivan at its core.
There are new innovations in the car market, like the Tesla. The risk we face in education is falling back on our RFP-driven habits. Great car demo, but the committee is using a family-driven process. Item #142 includes having more than 5 seats, with a place for little Kenny’s sippy cup in each. You know what, let’s just make it taller and add a hatch in the back. Item #275 requires ethanol percentages (and we read an article that batteries are risky), so could you add in a standard engine? Two years later . . . “dammit, the LMS”.
Put it together, and the LMS is important and ubiquitous, but we all know we need better options. Despite this, take away the LMS and see how students like using a different method to submit assignments or check grades for every class.
Pork Belly Futures?
The metaphor has limitations, of course, as the LMS market has matured over the past few years with new options, better usability and reliability, and the beginnings of true interoperability (largely thanks to LTI).
I also do not think that the LMS is a commodity.
Is the LMS a commodity? Do you have NO opinion which you use and is price your ONLY decision criteria? That’s defines a commodity. #LILI15
— Jeremy Auger (@JeremyAuger) May 6, 2015
My reaction to the observation of the 80/20 rule (the LMS has too many features, with most getting little usage) is that we need a system that does fewer things but does them very well. Then take advantage of LTI and Caliper (more on that later) to allow multiple learning tools to be used while still offering a consistent user experience in system access, navigation, and course administration.
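To make the interoperability piece concrete, LTI’s core mechanism is a signed launch from the LMS to an external tool. Below is a simplified, illustrative sketch in Python: the parameter names come from the LTI 1.1 spec, but the URL, key, and secret are hypothetical, and the signing shown here is a simplification of the full OAuth 1.0a base-string procedure a real launch requires.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote, urlencode

def sign_launch(params: dict, url: str, secret: str) -> dict:
    """Attach a simplified HMAC-SHA1 signature to an LTI-style launch.

    NOTE: real LTI 1.1 signing follows the full OAuth 1.0a rules
    (nonce, timestamp, per-parameter percent-encoding); this sketch
    only illustrates the general shape.
    """
    base = ("POST&" + quote(url, safe="") + "&"
            + quote(urlencode(sorted(params.items())), safe=""))
    sig = hmac.new((secret + "&").encode(), base.encode(), hashlib.sha1).digest()
    return {**params, "oauth_signature": base64.b64encode(sig).decode()}

# Standard LTI 1.1 launch parameters (values are hypothetical)
launch = sign_launch(
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "course101-quiz7",
        "user_id": "554433",
        "roles": "Learner",
        "oauth_consumer_key": "example-key",
    },
    url="https://tool.example.com/launch",
    secret="example-secret",
)
```

The LMS would POST these parameters to the tool’s launch URL; the shared secret lets the tool verify the launch came from a trusted consumer.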
I answered another question by saying that the LMS, with multiple billions invested over 17+ years, has not “moved the needle” on improving educational results. I see the value in providing a necessary academic infrastructure that can enable real gains in select programs or with new tools (e.g. adaptive software for remedial math, competency-based education for working adults), but the best the LMS itself can do is get out of the way – do its job quietly, freeing up faculty time, giving students anytime access to course materials and feedback. In aggregate, I have not seen real academic improvements directly tied to the LMS.
- The LMS has enabled blended and fully online courses, where you can see real improvements in access, etc.
- John Baker from D2L disagreed on this subject, and he listed off internal data of 25% or more (I can’t remember detail) improved retention when clients “pick the right LMS”. John clarified after the panel the whole correlation / causation issue, but I’d love to see that data backing up this and other claims.
The biggest news out of the conference is the surprisingly fast movement on Caliper. From the press release:
Caliper has progressed through successful alpha and beta specification and software releases, providing code to enable data collection, known as Sensors (or the Sensor API) and data models (known as metric profiles). A developer community web site has been set up for IMS Members while the Caliper v1 work is offered as a candidate final release.
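To give a feel for what a Sensor emits, here is an illustrative sketch of a Caliper-style event in Python. The actor/action/object JSON-LD shape follows the v1 design described above, but the specific IRIs, IDs, and values here are assumptions for illustration, not copied from the specification.

```python
import json
from datetime import datetime, timezone

# Illustrative Caliper v1-style event: a student navigating to a course
# document. Field names follow the JSON-LD actor/action/object structure;
# the IDs and IRIs below are hypothetical examples.
event = {
    "@context": "http://purl.imsglobal.org/ctx/caliper/v1/Context",
    "@type": "http://purl.imsglobal.org/caliper/v1/NavigationEvent",
    "actor": {
        "@id": "https://example.edu/users/554433",
        "@type": "lis:Person",
    },
    "action": "http://purl.imsglobal.org/vocab/caliper/v1/action#NavigatedTo",
    "object": {
        "@id": "https://example.edu/courses/101/syllabus",
        "@type": "Document",
    },
    "eventTime": datetime.now(timezone.utc).isoformat(),
}

# What a Sensor would serialize and POST to an event store endpoint
payload = json.dumps(event)
```

The point of the metric profiles is that many tools emitting events in this shared shape can feed one analytics store, rather than each tool inventing its own logging format.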
Michael has written about the importance of Caliper here.
We live in an appy world now. The LMS is not going away, but neither is it going to be the whole of the online learning experience anymore. It is one learning space among many now. What we need is a way to tie those spaces together into a coherent learning experience. Just because you have your Tuesday class session in the lecture hall and your Friday class session in the lab doesn’t mean that what happens in one is disjointed from what happens in the other. However diverse our learning spaces may be, we need a more unified learning experience. Caliper has the potential to provide that.
The agile approach that the Caliper team, led by Intellify Learning, is using involves creating code first, iterating multiple times, and writing documentation in parallel. Several proofs of concept were shown at the conference of companies implementing Caliper sensors and applications.
For now, Caliper appeals to the engineer in me, where I see the novel architecture and possibilities. But that will need to change, as the community needs to see real-world applications and descriptions in educational terms. But this should not diminish the real progress being made, including proofs of concept by vendors and institutions.
Can someone tell me why Freeman Hrabowski is not running for state or national office? Great work as president of UMBC, but he would make a great politician with national impact.
Phil Hill says
Adding John Fritz comment to WordPress, so it doesn’t get lost in G+ read more land:
Hey Phil, don’t take Freeman away from us at UMBC, but you raise a good question. He’s great, isn’t he? 😉
But that’s not why I am writing. I’ve been an e-Literate follower for about two years now. With all due respect, I don’t quite understand yours and Michael’s hand-wringing about the LMS. The problem is not the features of the minivan or even the new Tesla, whatever you think that may be in ed-tech form. The problem is defining what we want to get out of ed-tech generally. For lack of a better term, I’m going to say “evidence of impact” on student learning and/or faculty teaching.
Unless you have a clear, shining example that suggests otherwise, isn’t the jury still out on that one? The late David Noble would certainly say so. In a way, so did Michael in his insightful reviews of the Pearson efficacy initiative: what works, why and how can it scale in a cost efficient way? He rightly focused on Pearson’s research into teaching & learning generally, but it’s hard not to add technology as a variable given Pearson’s (and e-Literate’s) ed-tech interests.
Whatever it may or may not do, the ubiquity of the LMS minivan at least makes it a worthwhile object of study for “impact,” if only because of the “scaling efficacy” Holy Grail we’re all seeking. In a small way, my UMBC colleagues and I have been trying to do so since 2007, first with our homegrown LMS analytics approach, and then in 2012, using Blackboard Analytics (we’ve written some related articles you can find at http://doit.umbc.edu/analytics). Is UMBC’s or Bb’s (or Purdue’s) approach perfect? Of course not. If this was easy, everyone would be doing it.
But the real, still untapped potential of the LMS is what we might learn about how faculty design their courses, how students use them and how said students perform academically, at least during a semester, if not over their academic career. I’m willing to cede that we can’t fully know if or how technology impacts student learning using LMS analytics alone, but I’m pretty sure I can now use the data trail of both faculty and student usage to winnow down the number of fruitful possibilities where I can do follow up interviews with each to learn more. In fact, we’ve been doing it, and I’m writing my dissertation about our “lessons learned” to try and document that. Alas, I wish I were further along to bring it out for a test drive here — and also to answer Freeman’s always polite, but persistent “Well?” when I run into him on campus. 😉
My own take is that unless or until any ed-tech is shown to “make a difference” in student success, I don’t see faculty significantly changing their practice, let alone the policies around promotion & tenure that cement them into the status quo. However, the “proof” I think I seek can’t easily be found, let alone scale, without a few intrepid faculty changing their practice, so we can see if and how technology supports them and their students’ learning. If you haven’t seen it, the 2014 ECAR Study of Faculty and their use of IT identified three key factors that motivate faculty to adopt any technology:
1. Clear indication/evidence that students would benefit
2. Release time to design/redesign courses
3. Confidence that the technology would work as planned
This is a much different pace of change and culture than the constant parade of new ed-tech you folks cover so well. In the end, though, I don’t think technology turns bad teachers or bad students into good ones, but I think it can amplify, accentuate and shine light on good or bad practices and tendencies that exist already.
I know analytics is becoming a term that is “oversold and underused,” as Larry Cuban wrote about ed-tech 15 years ago, but I still think we need to focus more on identifying, supporting, evaluating and celebrating evidence of effectiveness. If we do, then suspect teaching & learning practices (and related ed-tech), will suffer by comparison, as they should. If enough faculty become aware of this, then the potential for transformation occurs when their change of heart and practice begins to scale.
You’re right, just bolting on new chrome to the LMS minivan isn’t the answer. But is just changing rides going to get us any closer to understanding where we want to go, if we’ve arrived and how we got there? My apologies for overly stretching your good minivan analogy, but if everyone has one, doesn’t the LMS still afford us the opportunity to understand how to “scale efficacy” that e-Literate has repeatedly called for from ed-tech vendors and consumers alike?
Asst. VP, Instructional Technology
UMBC Division of Information Technology
Phil Hill says
Thanks for the thorough comment, but I’m actually not sure where we disagree. I certainly agree with your descriptions of need for focusing on effectiveness, that the “jury is still out”, etc.
I’m guessing the issue is what you’re reading into my (and Michael’s) “hand-wringing about the LMS”. I am a firm believer (based in large part on what I’ve seen on campuses as well as ed theory) that most real improvements come not from ed tech itself but from the proper application of ed tech; new systems along with improved pedagogy or student support have real possibility. My hand-wringing is not based on “we don’t need the LMS” or “it doesn’t work”, but more on:
– The core designs of the CMS/LMS have helped administrative usage but not learning improvement;
– The weaknesses of LMS in large part come from market demands, often by faculty asking for what they’re already using through a flawed RFP process; and
– Vendors and promoters advance bogus claims about benefits of pure ed tech (not combined with pedagogy, support) and they need to be called out.
Take that last point. It does not mean that LMS usage should not be studied and there is no potential of effectiveness. It does mean that I have seen no credible evidence in aggregate that the LMS has “moved the needle” on effectiveness and educational outcomes. More on that one in a post coming out early next week.
My intention, particularly in this post, is not pure hand-wringing about the LMS. It’s calling out what I see the LMS can do. See the paragraph with this sentence: _I see the value in providing a necessary academic infrastructure that can enable real gains in select programs or with new tools_. To get to the “three key factors” we both want to occur, I don’t think we’ll get that purely from the LMS (except in very specific cases, not in aggregate). We’ll get there with 1) LMS + 2) other learning apps or student support apps + 3) human-centered pedagogical changes and enhanced support structures. AND I think this is more likely with an LMS “that does fewer things but does them very well”.
To extend this thought and address your last point, I would suggest that we’ll have much greater ability to “scale efficacy” with combination of items 1) through 3) above.
So overall, I think you make some great points, and I agree with you.
John Fritz says
Hey Phil, great stuff. Thanks. So, just riffing off of your reply, okay, but help me with the following:
“I am a firm believer . . . that most real improvements come not from ed tech itself [but] from the proper application of ed tech;”
Couldn’t agree more. But then there’s this:
“I have seen no credible evidence in aggregate that the LMS has “moved the needle” on effectiveness and educational outcomes.”
First, don’t you mean “the proper application of the LMS hasn’t moved the needle”? It’s just a tool, right? I’m not saying design is neutral when it comes to what faculty think any ed-tech might afford, but isn’t this secondary to the analysis of how actual faculty actually use an actual tool to try improving student learning?
Second, at the risk of quibbling over semantics, expecting a tool to “move the needle” sounds (to me) a bit like the “pure ed-tech (not combined with pedagogy, support)” that you rightly state should “be called out.” Sorry if I’m misreading you, but it feels like you’re anthropomorphizing, or at least endowing, the LMS with responsibility (culpability?) for effectiveness and educational outcomes that differs from your “proper application of ed-tech” comment above.
Digression: I would say some faculty (not just vendors) do this, too, when they ask “how do I get more technology into my teaching?” I usually reply “to do what?” which can then lead to a more productive conversation about pedagogy, goals, assessment and outcomes – and only then how technology may or may not help. The key is they have to come with a goal, or we have to discover one. Isn’t that sorta the point you and Michael have made about next-gen RFPs being bogged down with a status quo punch list of needs? I think we’d agree that ed-tech doesn’t come with direction or purpose, it only aids in supporting one that is articulated.
Finally, have you seen any ed-tech that “moves the needle” in a way you’d like the LMS to replicate? If so, please share, so we can all learn from the assessment approach that identifies effective practice and could (perhaps) be applied to other ed-tech. One area that feels promising to me is one you folks have covered well: adaptive learning. But even if it’s effective, that darn “how does it scale?” question crops up.
I would agree we mostly agree, but you seem to be focused on the tool (maybe even the tool maker) whereas I’m focused more on the tool user. Perhaps this is just our different perspectives in the ed-tech ecosystem that is on display. Regardless, I very much do appreciate the service you and Michael provide. You do it well, and I look forward to your post coming out early next week.
Phil Hill says
I should have been more clear in the post, but that question was asked at the IMS panel, hence the wording that doesn’t really match how I typically describe ed tech usage and the apparent focus on tools.
However sloppy the semantics, I have not seen any large-scale improvements (hence, moving the needle) in terms of academic outcomes from “application of LMS” as a primary cause. This is in contrast to tools such as adaptive software for remedial math, where I have seen real improvements. See the Essex County episodes of e-Literate TV that include use of ALEKS:
http://e-literate.tv/e3-s20/ and http://e-literate.tv/e3-s21/
or Cerritos College and use of MyLabs:
Phil Hill says
I should point out that the adaptive SW for remedial math example is not one (yet?) of large-scale effects, as the applications have mostly been in pilot programs to date. But they show more promise for improving academic outcomes (when properly applied) than the traditional LMS.
Luke Fernandez says
Perhaps there’s something to be learned from recalling what Vint Cerf once said about the internet:
“The internet is a reflection of our society and that mirror is going to be reflecting what we see. If we do not like what we see in that mirror the problem is not to fix the mirror, we have to fix society.”
The relative newness of the LMS (compared to the institution of higher education) and its discrete and tangible character make it an attractive object through which to channel our hopes and anxieties (or, in Fritz’s and your parlance, do some ‘hand-wringing’) about higher education. To some extent we’re using “the LMS” as a synecdoche for the college or the university. That works to some degree, but at some point the trope/technique loses its heuristic value. Sooner or later we have to dispense with the heuristic and talk concretely and directly about what we like or don’t like about higher education. If we don’t, we risk confounding rather than clarifying our understanding of the challenges we are presently facing.
Phil Hill says
Wow, bringing some deep thoughts and philosophy into the conversation.
Luke Fernandez says
Vint Cerf might be philosophical, but it’s an idea that’s easily accessible to all of us in ed tech. Witness danah boyd, who makes essentially the same point at the end of her book __It’s Complicated__:
It is easy to make technology the target of our hopes and anxieties. Newness makes it the perfect punching bag. But one of the hardest things we as a society must think about in the face of technological change is what has really changed, and what has not. As computer scientist Vint Cerf has said, “The internet is a reflection of our society and that mirror is going to be reflecting what we see. If we do not like what we see in that mirror the problem is not to fix the mirror, we have to fix society.” It is much harder to examine broad systemic changes with a critical lens and to place them in historical context than to focus on what is new and disruptive.
http://www.danah.org/books/ItsComplicated.pdf (pp. 211-212)
Following Cerf and boyd, if we want to determine what is really wrong with the LMS, we may first have to clearly articulate what is wrong with higher education.
John Fritz says
“What We Are Learning About Online Learning…Online,” eh Phil? 😉
Great reminder, Luke, about the big picture and what’s at stake and maybe even possible. Somewhat related (I hope) is this concluding thought from the lit review of my dissertation, that I’m literally just taking a break from now, to add this comment:
“Finally, for all the potential transformation instructional technology is supposed to bring to education, does instructional technology actually make a difference? If so, how? Increasingly, there is a growing impatience and skepticism that leads to a “so what?” question asked by Williams (2004) and other historians of technology. One reason for this is that the rapid pace of technological change breeds a predisposition toward viewing that which is new as innovation, which Castells says is “at the root of economic productivity, cultural creativity, and political power-making” (Castells, 2004, p. 11). When change is so rapid, how does one keep up? Clearly, we do so by innovating or changing, but at what cost to the time, effort and judgment that mastery requires? In addition, the rapid change of IT culture, with very few incremental steps and bridges, is vastly different from the deliberate and considered pace of academic culture. Indeed, Williams points to the dangers of “presentism,” in which “the past is simply ignored: the present is taken as it appears, as a self-contained reality, as if there were no past realities that have shaped it” (p. 435).”
Castells (2004): http://books.google.com/books?id=bHsdJWuM3X0C
Williams (2004): same ref above, pp. 432-438
If we’re willing to accept that LMS minivans “reflect what we actually are and what we do” as @PhilOnEdTech tweeted about this thread, then I think they could become even more useful than we’ve imagined, and not just by adding features, functions and tools. If LMS use were considered a proxy of course design (for faculty) and engagement (for students), then even new technologies that follow might benefit from an approach that cures or at least addresses the loss of reflection and evaluation in discussions about technology’s impact generally.
However, I’ve run into lots of resistance to this “proxy” notion (or synecdoche that Luke has nicely provided). Funny, but if I were to say “students who show up to a face-to-face class are more likely to learn, earn higher grades, graduate and succeed than students who don’t,” nobody would bat an eye. After all, “eighty percent of success is just showing up,” as Woody Allen reminds us. But heaven help us if we try to extend and study this same line of thinking online let alone to the LMS as reflective mirror, even though “attend” is the root of both attendance and attention. To do so leads to bean-counting or “clickometry” (yeah, I’ve gotten that one) that undermines the sacred confines of life inside the classroom. Even though a freshman sitting in the back row of a 300-seat lecture is probably also experiencing a kind of “distance learning.”
I’ll stop now before launching us into another sort of hand-wringing about the quality of online vs. F2F learning, if we’re willing to admit to Luke’s and Phil’s points that the LMS can be a mirror of what probably occurs and exists already. If so, then studying the data trail that faculty and students leave behind in the LMS could be a way to identify and perhaps reverse-engineer effective practice that helps persuade faculty colleagues to consider changing their pedagogy. Now, if ANY technology could get THAT kind of reflective practice rolling among faculty, then I think we’d really see a “scaled efficacy” that is transformative, indeed, most likely in widespread course redesign complemented by rigorous assessment of student learning outcomes as well as student success metrics (e.g., persistence, retention, attainment, graduation, etc.).
After all, at their best, I find most faculty are genuinely curious, inquisitive and innovative communicators who want to connect with their students and help them succeed. Imagine if we could use the minivan they drive everyday to not only shine light on the knowledge, skills and abilities of their students, but also their own pedagogical practice and that of their peers.
“You say you want a revolution . . . .” 😉
Phil Hill says
Dammit, you never disclosed that this topic fits in your dissertation. Always dangerous situation :}
“But heaven help us if we try to extend and study this same line of thinking online let alone to the LMS as reflective mirror, even though ‘attend’ is the root of both attendance and attention.”
John Whitmer (formerly of CalState, currently at Bb) was part of a study led by San Diego State U that focuses on just this concept. At IMS John presented data showing correlation between LMS engagement and grades. Curious to get his input here (and I’m tweeting / emailing him now).
John Fritz says
Hmm, check my first reply for the dissertation “disclosure,” but yes, I know John and cite his dissertation in my own.
Other good folks to hear from would be Leah Macfadyen (Univ. of British Columbia), Shane Dawson & Tim Rogers (Univ. of South Australia), Steve Lonn, Tim McKay and Stephanie Teasley (Univ. of Michigan), Mike Sharkey (Blue Canary), Kim Arnold (Univ. of Wisconsin), John Campbell (Univ. of W.Va), Matt Pistilli (IUPUI), John Rome (Univ. of Arizona), Tristan Denley (Austin Peay), and Deb Everhart (Blackboard).
We may all be going about it in slightly different ways, but collectively, I think there’s widespread interest (and merit) in looking at the LMS as a mirror based on the relationship between student activity, grades and (I think) course design. The biggest challenge I see with the current state of analytics is a tendency to focus on perfecting predictions, perhaps at the expense of trying a good intervention or two, and then sharing results so we can all iterate. Given what’s at stake, I understand the impulse to “get it right.” Nobody wants to see students give up because their login patterns for the first 10 days of class are below those of their peers. On the other hand, as John Campbell has rightly asked, “what is our obligation of knowing” something about our students that perhaps they do not? Bottom line: even if we know we can predict something, how do we share it with students in a way that will raise their awareness enough to seek or accept help the institution is probably all too willing to provide?
I do agree with you that adaptive or personalized learning is really intriguing, but I read so many different accounts of approaches (and costs) that it still feels early to see how that will shake out. For me, it’s hard to beat Khan Academy, which I love. Shameless plug: I think we’ve done some interesting things with Bb’s own “adaptive release” function, which is a fairly quick & easy way to inject adaptive learning elements into a course (see http://doit.umbc.edu/itnm/practices/adaptive). Alas, most faculty don’t know it’s there, and even Bb doesn’t put it on its list of product features, which I’ve bugged them about. But we’ve had faculty from a variety of disciplines use it effectively, which sorta tells me it has legs.
This has been a fun and interesting discussion, Phil. I feel badly for ballooning your original post. However, my focus has simply been to suggest we can learn something from studying the minivan (who drives it, with whom, where, how fast, how well, etc.) and not just focusing on what it lacks, especially before we move on to the next shiny object, whatever that may be.
Whoa — you know you’re an impactful blogger when you can bring John Fritz back into a discussion from his dissertation hibernation from public life!
A few thoughts:
First, since most of the folks we keep company with online and at conferences are innovators, it’s easy to lose sight of the significance of LMS adoption rates — with the LMS we have our first enterprise-class academic technology application, which brings the possibility of systematic changes in learning and teaching. As John noted above, with widespread use, we can then assess learning practices (and effective designs) in a coordinated, scaled way. Could we maybe even hope this will lead to the allocation of resources to support their effective use?
In the San Diego State study referenced above, we’ve been able to predict 50-60% of the variation in student grades with relatively simple weekly queries that identify varying levels of participation in learning activities and resources — use of the LMS, publisher resources, clickers, and graded items — all things that are available for many courses at other institutions. More frequent data streams can capture 80-90% of the variation in student grades. A poster from the Learning and Knowledge Analytics conference about that study is here (http://bit.ly/1dXIyaI). Compare this to the 5-6% possible with student demographic / SIS data, and the potential power is clear.
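For readers who want to see the shape of this kind of analysis, here is a minimal sketch (with synthetic data, NOT the San Diego State data) of regressing final grades on weekly LMS activity counts and reporting the share of grade variation explained (R²):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical weekly activity counts for 200 students:
# logins, content views, graded-item submissions.
X = rng.poisson(lam=[12, 40, 5], size=(200, 3)).astype(float)

# Synthetic final grades loosely driven by activity plus noise
# (illustrative weights -- not estimated from any real study).
true_w = np.array([0.8, 0.3, 2.0])
y = X @ true_w + rng.normal(0, 8, size=200)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

# R^2: share of grade variation explained by the activity measures
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")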
A good part of the reason I joined Blackboard was believing in the potential power of this data analysis to make an impact on student learning. One of the most interesting things to me has been thinking about what this data actually reflects — as I believe it to be less a transparent mirror on “learning”, than a secondary indicator of deeper educational constructs like “effort”, or “diligence”, or …. Given the detailed behaviors that are tracked, it’s possible to create analysis that will reveal deeper (in that they’re more meaningful) constructs like understanding, engagement, etc — as we gain more experience interpreting and analyzing this data.
Also, I’ve found it interesting to note that no one in ed tech companies talks about the LMS as a closed all-inclusive destination app anymore; the evolution and progress in this core enterprise technology is on the way.
Luke Fernandez says
There’s been an interesting metamorphosis in our use of the mirror metaphor in this comment thread. Cerf uses the metaphor of the mirror to call into question our focus on technology; at some point we need to stop using technology as a proxy because that proxy obfuscates the real underlying problems in education. If we keep using the proxy instead of locating the problems in some aspect of the larger system of education, we displace our prejudices about that system and start locating them in the technology itself. Morozov in __To Save Everything, Click Here__ makes a similar complaint when he accuses internet critics of “internet-centrism,” or of attributing some essence to the internet when it really doesn’t have one:
“….where is the missing manual to “the Internet” — the one that explains how this giant series of tubes actually works — that the geeks claim to know by heart? Why are they so reluctant to acknowledge that perhaps there’s nothing inevitable about how various parts of this giant “Internet” work and fit together?…..[Tech criticism] won’t get any better until we stop thinking that there is a “Net” out there. How can we account for the diversity of logics and practices promoted by digital tools without having to resort to explanations that revolve around terms like “the Net”? “The Net” is a term that should appear on the last — not first! — page of our books about digital technologies; it cannot explain itself.” [18-20]
There probably is more of an essence to the LMS than to “the Net,” but substitute “the LMS” for “the Net” in the passage above and we begin to grasp why our discussions about the LMS might at times be deteriorating into the equivalent of “Internet-centrism.”
So there are perils in using technology as a mirror. But as you all note, sometimes it helps, especially when it serves [as John says] to “shine light on the knowledge, skills and abilities of their students, but also their own pedagogical practice and that of their peers.” We could use the metaphor of the mirror to describe that process. But given Cerf’s more pejorative description and the possibility that the mirror simply reflects our own prejudices, I wonder whether a better way to describe what we’re trying to leverage in the LMS is via the term “informate,” which Shoshana Zuboff coined in __In the Age of the Smart Machine__:
“What is it, then, that distinguishes information technology from earlier generations of machine technology? As information technology is used to reproduce, extend, and improve upon the process of substituting machines for human agency, it simultaneously accomplishes something quite different. The devices that automate by translating information into action also register data about those automated activities, thus generating new streams of information. For example, computer-based, numerically controlled machine tools or microprocessor-based sensing devices not only apply programmed instructions to equipment but also convert the current state of equipment, product, or process into data. Scanner devices in supermarkets automate the checkout process and simultaneously generate data that can be used for inventory control, warehousing, scheduling of deliveries, and market analysis. The same systems that make it possible to automate office transactions also create a vast overview of an organization’s operations, with many levels of data coordinated and accessible for a variety of analytical efforts.” (Zuboff, 1988; p. 9)
Following Zuboff, the LMS ‘informates,’ and via the process of ‘informating’ we learn more about what works and doesn’t work in the physical classroom.