A few weeks back, I had the pleasure of attending the IMS Learning Impact Leadership Institute (LILI). For those of you who aren’t familiar with it, IMS is the major learning application technical interoperability organization for higher education and K12 (and is making some forays into the corporate training and development world as well). They’re behind specifications like LIS, which lets your registrar software automagically populate your LMS course shell with students, and LTI, which lets you plug in many different learning applications. (I’ll have a lot more to say about LTI later in this post.)
While you may not pay much attention to them if you aren’t a technical person, they have been and will continue to be vital to creating the kind of infrastructure necessary to support more and better teaching and learning affordances in our educational technology. As I’ll describe in this post, I think the nature of that role is likely to shift as the interoperability needs of the sector begin to change.
The IMS is very healthy
I’m happy to report that the IMS appears to be thriving by any obvious measure. The conference was well attended. It attracted a remarkably diverse group of people for an event hosted by an organization that could easily be perceived as techie-only. Furthermore, the attendees seemed very engaged and the discussions were lively.
On more objective measures, the organization’s annual report bears out this impression of strong engagement. They have strong international representation across a range of organization types.
Whether your measure is membership, product certifications, or financial health, the IMS is setting records.
This state of affairs is even more remarkable given that, 13 years ago, there was some question as to whether the IMS was financially sustainable.
If you look carefully at this graph, you’ll see three distinct periods of improvement: 2005-2008, 2009-2013, and 2013-2018. Based on what I know about the state of the organization at the time, the first period can most plausibly be attributed to immediate changes implemented by Rob Abel, who took over the reins of the organization in February of 2006 and likely saved it from extinction. Likewise, the magnitude of growth in the second period is consistent with that of a healthy membership organization that has been put back on track.
But that third period is different. That’s not normal growth. That’s hockey stick growth.
I am not a San Franciscan. By and large, I do not believe in heroic entrepreneur geniuses who change the world through sheer force of will. Whenever I see that kind of an upward trend, I look for a systemic change that enabled a leader or organization—through insight, luck, or both—to catch an updraft.
There is no doubt in my mind that the IMS has capitalized on some major updrafts over the last decade. That is an observation, not a criticism. That said, the winds are changing, in part because the IMS has helped move the sector through an important period of evolution and is now helping to usher in the next one. That will raise some new challenges that the IMS is certainly healthy enough to take on but will likely require them to develop a few new tricks.
The world of 2005
In the first year of the chart above, when the IMS was in danger of dying, there was very little in the way of ed tech to interoperate. There were LMSs and registrar systems (a.k.a. SISs). Those were the two main systems that had to talk to each other. And they did, after a fashion. There was an IMS standard at the time, but it wasn’t a very good one. The result was that, even with the standard, there was a person in each college or university IT department whose job it was to manage the integration process, keep it running, fix it when it broke, and so on. This was not an occasional tweak, but a continual effort that ran from the first day of class registration through the last day of add/drop. If you picture an old-timey railroad engineer shoveling coal into the engine to keep it running and checking the pressure gauge every ten minutes to make sure it didn’t blow up, you wouldn’t be too far off. As for reporting final grades from the LMS’s electronic grade book automatically to the SIS’s electronic final grade record, well, forget it.
If you ignore some of the older content-oriented specifications, like QTI for test questions and Common Cartridge for importing static course content, then that was pretty much it in terms of application-to-application interoperability. Once you were inside the LMS, it was basically a bare-bones box with not much you could add. Today, the IMS lists 276 officially certified products that one can plug into any LMS (or other LTI-compliant consumer), from Academic ASAP to Xinics Commons. I am certain that is a substantial undercount of the number of LTI-compatible applications, since not all compatible product makers get officially certified. In 2005, there were zero, because LTI didn’t exist. There were LMS-specific extensions. Blackboard, for example, had Building Blocks. But with a few exceptions, most weren’t very elaborate or interesting.
My personal experience at the time was working at SUNY System Administration and running a search committee for an LMS that could be centrally hosted—preferably on a single instance—and potentially support all 64 campuses. For those who aren’t familiar with it, SUNY is a highly diverse system, with everything from rural (and urban) community colleges to R1s and everything in between, with some specialty schools thrown into the mix like the Fashion Institute of Technology, a medical school or two, an optometry school, and so on. Both the pedagogical needs and the on-campus support capabilities across the system were (and presumably still are) incredibly diverse. There simply was not any existing LMS at the time, with or without proprietary extensions, that could meet such a diverse set of needs across the system. We saw no signs that this state of affairs was changing at a pace that was visible to the naked eye, and relatively few signs that it was even widely recognized as a problem.
To be honest, I came to the realization of the need fairly slowly myself, one conversation at a time. A couple of art history professors dragged me excitedly to Columbia University to see an open source image annotation tool, only to be disappointed when they discovered that the tool was developed to teach clinical histology, which uses image annotation to teach in an entirely different way than is typically employed in art history classes. An astronomy professor at a community college on the far tip of Long Island, where there was relatively little light pollution, wanted to give every astronomy student in SUNY remote access to his telescope if only we could figure out how to get it to talk to the LMS. Anyone who has either taught or been an instructional designer for a few wildly different subjects has a leg up on this insight (and I had done both), but even so, there are levels of understanding. The art history/histology thing definitely took me by surprise.
A colleague and I, in an effort to raise awareness about the problem, wrote an article about the need for “tinkerable” learning environments in eLearn Magazine. But there were very few models at the time, even in the consumer world. The first iPhone wasn’t released until 2007. The first practically usable iPhone wasn’t released until 2008. (And we now know that even Steve Jobs was secretly skeptical that apps on a phone were a good idea.) It is a sign of just how impoverished our world of examples was in January of 2006 that the best we could think of to show what a world of learning apps could be like was Google Maps:
There are several different ways that software can be designed for extensibility. One of the most common is for developers to provide a set of application programming interfaces, or APIs, which other developers can use to hook into their own software. For example, Blackboard provides a set of APIs for building extensions that they call “Building Blocks.” The company lists about 70 such blocks that have been developed for Blackboard 6 over the several years that the product version has been in existence. That sounds like a lot, doesn’t it? On the other hand, in the first five months after Google made the APIs available for Google Maps, at least ten times that many extensions were created for the new tool. Google doesn’t formally track the number of extensions that people create using their APIs, but Mike Pegg, author of the Google Maps Mania weblog, estimates that 800-900 English-language extensions, or “mash-ups,” with a “usable, polished Google Maps implementation” have been developed during that time—with a growth rate continuing at about 1,000 new applications being developed every six months. According to Pegg, “There are about five sites out there that facilitate users to create a map by taking out an account. These sites include wayfaring.com, communitywalk.com, mapbuilder.net—each of these sites probably has hundreds of maps for which just one key has been registered at Google.” (Google requires people who are extending their application to register for free software “keys.”) Perhaps for this reason, Chris DiBona, Google’s own Open Source Program Manager, has heard estimates that are much higher. “I’ve seen speculation that there are hundreds or thousands,” says DiBona, noting that estimates can vary widely depending on how you count.
Nevertheless, even the most conservative estimate of Google Maps mash-ups is higher than the total number of extensions that exist for any mainstream LMS by an order of magnitude.
There seemed little hope for this kind of growth any time in the foreseeable future. By early 2007, having failed to convince SUNY to use its institutional weight to push interoperability forward, I had a new job working at Oracle and was representing them on a specification development committee at the IMS. It was hard, which I didn’t mind, but it was also depressing. There was little incentive for the small number of LMS and SIS vendors who dominated specification development at that time to do anything ambitious. To the contrary, the market was so anemic that the dominant vendors had every reason to maintain their dominance by resisting interoperability. Every step forward represented an internal battle within those companies between the obvious benefit of a competitive moat and the less obvious enlightened self-interest of doing something good for customers. This is simply not the kind of environment in which interoperability standards grow and thrive.
And yet, despite the fact that it certainly didn’t feel like it, change was in the air.
Glaciers are slow, but they reshape the planet
For starters, there was the LMS, which was both a change agent in and of itself and an indicator of deeper changes in the institutions that were adopting it. EDUCAUSE data shows that the US LMS market became saturated some time roughly around 2003. At that time, Blackboard and WebCT had the major leads as #1 and #2, respectively. The dynamic for the next 10 years was a seesaw, with new competitors rising and Blackboard buying and killing them off as fast as it could. Take a look at the period between 2003 and 2013 in Phil’s squid graph:
It was absolutely vicious.
None of this would materially affect the standards making process inside the IMS until, first, Blackboard’s practice of continually buying up market share eventually failed (thus allowing an actual market with actual market pressures to form) and, second, until the management team that came up with this decidedly anti-competitive strategy…er…chose to spend more time with their respective families. (I’ll have more to say about Heckle and Jeckle and their lasting impact on market perceptions in a future post.)
But the important dynamic during this period is that customers kept trying to leave Blackboard (even if they found themselves being reacquired shortly thereafter) and other companies kept trying to provide better alternatives. So even though we didn’t have a functioning, competitive market that could incentivize interoperability, and even though it certainly didn’t feel like we had one, some of the preconditions for one were being established.
Meanwhile, online education growth was being driven by no fewer than three different vectors. First, for-profit providers were hitting their stride. By 2005, the University of Phoenix alone was at over 400,000 enrollments. Second, public access-oriented institutions, many of which had been seeded a decade earlier with grants from the Sloan Foundation, were starting to show impressive growth as well. A couple were getting particular attention. UMUC, for example, may not have had over 400,000 online enrollments in 2005, but they had well over 40,000, which is enough to get the attention of anyone in charge of an access-oriented public university’s budget. More quietly, many smaller schools were having online success that was proportional to their sizes and missions. For example, when I arrived at SUNY in 2005, they had a handful of community colleges that had self-sustaining online degree programs that supported both the missions and the budgets of the campuses. Many more were offering individual courses and partial degrees in order to increase access for students. (Most of New York is rural, after all.)
The third driver of online education, which is more tightly intertwined with the first two than most people realize, is that Online Program Management companies (OPMs) were taking off. The early pioneers, like Deltak (now Wiley Education Services), Embanet, Compass Education (now both subsumed into Pearson), and Orbis (recently acquired by Grand Canyon University) had proved out the model. The second wave was coming. Academic Partnerships and 2Tor (now 2U) were both founded in 2008. Altius Education came in 2009. In 2010, Learning House (now also owned by Wiley) was founded.
Counting online enrollments is a notoriously slippery business, but this chart from the Babson survey is highly suggestive and accurate enough for our purpose:
If you’re a campus leader and thirty percent of your students are taking at least one online class, that becomes hard for you to ignore. Uptime becomes far more important. Quality of user experience becomes far more important. Educational affordances become far more important. Obviously, thirty percent is an average, and one that is highly unevenly distributed across segments. But it’s significant enough to be market-changing.
And the market did change, in a number of ways. The biggest was that it became an actual, functioning market (or at least as close to one as we’ve gotten in this space).
When glaciers recede
Let’s revisit that second growth period in the IMS graph—2009 to 2013—and talk about what was happening in the world during that period. For starters, online continued its rocket ride. The for-profits peaked in 2010 at roughly 2 million enrollments (before beginning their spectacular downward spiral shortly thereafter). Not-for-profits (and odd, mostly-not-for-profit hybrids) ramped up the competition. ASU launched its first online 4-year degree in 2006. SNHU started a new online unit in 2009. WGU expanded into Indiana in 2010, which was the same year that Embanet merged with Compass Knowledge and was promptly bought by Pearson. (Wiley acquired Deltak two years later.)
Once again, the more online students you have, the less you are able to tolerate downtime, a poor user interface that drives down productivity, or generic course shells that make it hard to teach students what they need to learn in the ways in which they need to learn. Instructure was founded in 2008. They emphasized a few distinctions from their competitors out of the gate. The first was their native multitenant cloud architecture. Reduced downtime? Check. The second was a strong emphasis on usability. The big feature they touted, and their early runaway hit, was SpeedGrader. Increased productivity? Check.
Instructure had found their updraft to give them their hockey stick growth.
But they also emphasized that they were going to be a learning platform. They weren’t going to build out every tool imaginable. Instead, they were going to build a platform and encourage others to build the specialized tools that teachers and students need. And they would aggressively encourage the development and usage of standards to do so. On the one hand, this fit from a cultural perspective. Instructure was more like a Silicon Valley company than its competitors, and platforms were hot in the Valley. On the other hand, it was still a little weird for the education space. There still weren’t good interoperability standards for what they wanted to do. There still hadn’t been an explosion of good learning tools. This is one of those situations where it’s hard to tell how much of their success was prescience and how much of it was luck that higher ed caught up with their cultural inclination at that exact moment.
Co-evolution
The very same year that Brian Whitmer and Devlin Daley founded Instructure, Chuck Severance and Marc Alier were mentoring Jordi Piguillem on a Google Summer of Code project that would become the initial implementation of LTI. In 2010, the same year that Instructure scored its first major win with the Utah Education Network, IMS Global released the final specification for LTI v1.0. All this time that the market had felt like it had been standing still, it had actually been iterating. We just hadn’t been experiencing the benefits of it. Chuck, who had been thinking about interoperability in part through his work on Sakai, had been tinkering. Students like Brian and Devlin, who had been frustrated with their LMS, had been tinkering. The IMS, which actually had a precursor specification before LTI, had been tinkering. While nothing had changed visibly on the surface of the glacier, way down, a mile below, the topography of the land was changing.
Meanwhile in Arizona, in 2009, the very first ASU+GSV summit was held. I admit that I have had writer’s block regarding this particular conference the last few years. It has gotten so big that it’s hard to know how to think about it, much less how to sum it up. In 2009, it was an idea. What if a university and a company that facilitates start-ups (in multiple ways) got together to encourage ed tech companies to work more effectively with universities? That’s my retrospective interpretation of the original vision. I wasn’t at many of those early conferences and I certainly wasn’t an insider. It was hard for me, with my particular background, to know what to make of it then and even harder now.
But something clicked for me this year when it turned out that IMS LILI was held at the same hotel that the ASU+GSV summit had been at a couple of months earlier. How does the IMS get to 523 product certifications and $8 million in the bank? A lot of things have to go right for that to happen, but for starters, there have to be 523 products to certify and lots of companies that can afford to pay certification fees. That economy simply did not exist in 2008. Without it, there would be no updraft to ride and consequently no hockey stick growth. ASU+GSV’s phenomenal growth, and the ecosystem that it enabled, was another major factor that influenced what I saw at IMS LILI this month.
There is a lot of chicken-and-egg here. LTI made a lot of this possible, and the success LTI (and IMS Global) have experienced would not have been possible without a lot of this. The harder you stare at the picture, the more complicated it looks. This is what “systems thinking” is all about. There isn’t a linear cause-and-effect story. There are multiple interacting feedback loops. It’s a complex adaptive system, which means that it doesn’t respond in linear or predictable ways.
Update: I got a note from Rob Abel noting that a lot of the growth in the last leg came from an explosion of participation in the K12 space. That’s good color and consistent with what I’ve seen in my last couple of LILI conference visits. It’s also consistent with the rest of this analysis. K12 benefitted from all of the dynamics above—the maturation of the LMS market, the dynamics in higher education online that pushed toward SaaS and usability, the massive influx of venture funding, and so on. All of those developments, plus the work inside IMS, made the K12 growth possible, while the dynamics inside K12 added another feedback loop to this complex adaptive system.
But respond it finally did. We have some semblance of a functioning market, and with its rise, the blockers preventing the formation of a vibrant interoperability standards ecosystem of the type we have today have largely fallen. Now we have to address the blockers of the formation of the vibrant interoperability ecosystem that we will need tomorrow. Because it will be qualitatively different. Tomorrow’s blockers are not market formation problems but rather collaboration methodology problems. They are about creating meaningful learning analytics, which will require solving some wicked problems that can only be tackled through close and well-structured interdisciplinary work. That most definitely includes the standards design process itself.
After the glacier comes the flood
What I saw at the IMS LILI this year was, I think, a milestone. The end of an era. Market pressures now favor interoperability. The same companies that were the most resistant to developing and implementing useful interoperability standards in 2007 are among the most aggressive champions of interoperability today. This is not to say that foundational interoperability work is “over.” Far from it. Rather, the conditions finally exist where it can move forward as it should, still hard but relatively unimpeded by the distortions of a dysfunctional market.
That said, the nature and challenges of interoperability our sector will be facing in the next decade are fundamentally different from the ones that we faced in the last one. Up until now, we have primarily been concerned with synchronizing administration-related bits across applications. Which people are in this class? Are they students or instructors? What grades did they get on which assignments? And how much does each assignment count toward the final course grade? These challenges are hard in all the ways that are familiar to anyone who works on any sort of generic data interoperability questions.
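To make this concrete, here is a minimal sketch in Python of the kind of administrative record these standards shuttle between systems. The field names and values are hypothetical, chosen purely for illustration; they are not the actual LIS or LTI wire formats.

```python
# Illustrative only: hypothetical field names, not the actual LIS/LTI wire format.
enrollment = {
    "course": "ARTH-101",
    "person": "jdoe",
    "role": "student",  # or "instructor": who is in this class, and as what?
}

# Grades on assignments, plus how much each assignment counts toward the final grade.
grades = {"essay-1": 88, "midterm": 92}
weights = {"essay-1": 0.4, "midterm": 0.6}

# The kind of question these integrations answer: what is the final course grade?
final_grade = sum(grades[a] * weights[a] for a in grades)
print(final_grade)  # 90.4
```

Keeping even these few fields in sync across an SIS, an LMS, and a gradebook is the unglamorous work that the administrative interoperability standards have been chipping away at.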
But the next decade is going to be about data interoperability as it pertains to insight. Data scientists think this is still familiar territory and are excited because it keeps them at the frontier of their own profession. But this will not be generic data science, for several reasons. (I will tell you right now that some of them disagree with me on this. Vehemently.) First, even the most richly instrumented, fully online environments that we have today are highly data-impoverished relative to what we need to make good inferences about teaching and learning. For heaven’s sake, Amazon still recommends things that I have already bought. If I just bought a toaster oven last month, then how likely is it that I want to buy another one now? And I buy everything on Amazon. If they don’t know enough to make good buying recommendations on consumer products, then there’s no way that our learning environments are going to have enough data to make judgments that are orders of magnitude more sophisticated.
Well then, some answer, we’ll just collect more data! More more more! We’ll collect everything! If we collect every bit of data, then we can answer any question. (That is a pretty close paraphrase of what one of the IMS presenters said in one of the handful of learning analytics talks I went to.)
No. You won’t collect “everything”—even if we ignore the obvious, glaring ethical questions—because you don’t know what “everything” is. Computer folks, having finally freed themselves from the shackles of SQL queries and data marts, are understandably excited to apply that newfound freedom to the important problem space of learning. But it is not a good fit, because we don’t have a good understanding of the basic cognitive processes involved in learning. As I wrote about (at length) in a previous post, we have to employ multiple cutting-edge machine learning techniques just to get glimpses of learning processes even when we are directly monitoring students’ brain activity because these are extraordinarily complex processes with multiple hidden variables. Trying to tease out learning processes inside a student’s head based on learning analytics from running machine learning algorithms on LMS data is a little like trying to monitor the digestive processes of a flatworm on the bottom of the Marianas Trench based on studying the wave patterns on the surface of the ocean. There are too many invisible mediating layers to just run a random forest algorithm on your data lake—it all sounds very organic, doesn’t it?—and pop out new insights about how students learn.
That doesn’t mean we should just throw up our hands, by any means. To the contrary, IMS Global has some extraordinarily good tools close at hand for tackling this problem. But it does mean that they are going to have to take some of the stakeholder engagement strategies they’ve been working at diligently to the next level, to the point where the standards-making process itself may evolve over time.
Theory-driven interoperability
There is an excellent data and processing resource that the learning analytics folks have yet to think deeply about how to leverage, as far as I can tell from the conference. The computational power is impressive (and impressively parallel). It is the collective intelligence of educators and learning scientists. Because there are too many confounds to making useful direct inferences from the data, educational inferencing needs to be theory-driven. You need to start with at least some idea of what might be going on inside the learner’s head. One that can be either supported or disproven based on evidence. And you need to know what that evidence might look like. If you can spell all that out, then you can start doing interesting things with learning analytics, including machine learning. There is room for learning science, data science, and on-the-ground teaching expertise at the table. In fact, you need all those kinds of expertise. But the folks with those respective kinds of know-how need to be able to talk to each other and work together in the right ways, which is really hard.
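To sketch what theory-driven inferencing looks like in practice, here is a toy example in Python. The theory, the variable names, and the data are all invented for illustration; the point is only the shape of the workflow: state a falsifiable idea first, specify what evidence would bear on it, and only then run the numbers.

```python
# Toy example of theory-driven inferencing. All names and data are invented.
# Theory: students who self-test more often retain more (the "testing effect").
# Falsifiable prediction: practice-quiz attempts correlate with exam scores.
practice_attempts = [1, 3, 0, 5, 2, 4, 0, 6]
exam_scores = [62, 74, 55, 88, 70, 80, 58, 91]

def pearson_r(xs, ys):
    """Plain Pearson correlation; no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson_r(practice_attempts, exam_scores):.2f}")
# A strong positive r supports the prediction; a near-zero r undercuts it.
# Either way, the theory told us which evidence to look for in the first place.
```

The statistics here are trivial on purpose. The hard part is everything around the code: the learning scientist supplies the theory, the educator supplies the context, and the data scientist supplies the machinery.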
The IMS has an outstanding foundation for this sort of work, because their Caliper specification turns out to provide the basis for a perfectly lovely lingua franca. To begin with, its fundamental structure is triples, which is the same basic idea as the original concept behind the semantic web. If you’re not a computer person and this is starting to make your eyes glaze over, don’t worry, because this is plain English. Three-word sentences, in fact. Noun, verb, direct object. Student takes test. Question assesses learning objective. Student highlights sentence. Sentence discusses Impressionism.
IMS Caliper expresses learning analytics in statements that can easily be translated into three-word plain-English sentences. These sentences can be strung together into coherent paragraphs. Notice, for example, how the last two example sentences are related. Three-word sentences in this format can be chained together to form longer thoughts. New thoughts. With this one, very simple grammatical structure, we have a language that is generative in the linguistic sense. As long as you have words to put into these grammatical placeholders, you can string thoughts together. Or “chain inferences,” to sling the lingo. And it turns out, unsurprisingly, that Caliper has a mechanism for defining these words in ways that both humans and machines can understand them.
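As a sketch of what that generativity buys you, consider how even a naive program can chain these three-word sentences together. This is an illustration of the triple idea only, using a made-up vocabulary; it is not actual Caliper event syntax.

```python
# Illustrative only: triples with a made-up vocabulary, not actual Caliper syntax.
triples = [
    ("student-42", "highlights", "sentence-7"),
    ("sentence-7", "discusses", "Impressionism"),
]

def chain(subject, triples):
    """Follow noun-verb-object links outward from a subject to form longer thoughts."""
    for s, verb, obj in triples:
        if s == subject:
            yield (s, verb, obj)
            # The object of one sentence becomes the subject of the next.
            yield from chain(obj, triples)  # (a sketch: no cycle handling)

# Prints: "student-42 highlights sentence-7" then "sentence-7 discusses Impressionism"
for sentence in chain("student-42", triples):
    print(" ".join(sentence))
```

A machine can follow those links literally; a human can read each line aloud as a plain-English sentence.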
That has to be the bridge. Humans have to understand the utterances well enough to be able to express their theories on the front end and understand whatever the machine is telling them it may have learned on the back end. Machines have to understand them specifically enough to be able to parse the sentences in their own, literal, machine-y way. Theoretically, Caliper could be an ideal language to enable educators and computer scientists to discuss theories about how to better support students as well as how to test those theories together.
The challenge is that the IMS community, at least based on what I saw in the sessions I attended, is not using the specification as an interdisciplinary communication tool in this way yet. What I saw happening instead was a lot of very earnest data scientists pumping as much Caliper data as they can into their data lakes. They come to the conference, give a talk and, to their credit, shrug their shoulders and admit that they really don’t know what to do with those data yet. But then they go home and build bigger pipes, because that’s their job. That’s what they do.
It’s not their fault. I’ve been friends with some of these folks for a very long time indeed. There are good people here. But if you work in the IT department, and you’re not a learning scientist or a classroom educator, and the faculty are somewhere between dismissive and disdainful of the idea of talking to you about working together to improve teaching and learning, then what can you do? You do what you know how to do and hope that things will change for the better over time.
It’s not the IMS’s fault either. The conference I attended was called the IMS Learning Impact Leadership Institute. That’s not a new name. Caliper has a board that helps guide its direction. That board includes educators who are the kind of advocates that I would like to see on such a body. They are productive irritants in the best possible way. But that’s not enough anymore. This is just a really hard problem. It’s the challenge of the next decade. To meet it, we need to do more than just make sure the right people are in the room together. We need to develop new ways of working together. New roles, methodologies, ways of talking with each other, and ways of seeing the world.
I’m going to preview a bit of a post that I have in my queue for…I’m not sure when, but some time soon…by mentioning “learning engineering.” This term has gotten a lot of buzz lately, along with some criticism. I’ll be writing up my own take on it, but for now I’ll say that one reason I think the term is gaining some currency is that it represents a set of skills for being a mediator in the kind of collaboration that I’m describing here.
As it turns out, it was coined by Nobel Prize-winning polymath and Carnegie Mellon luminary Herb Simon, after whom Carnegie Mellon University’s Simon Initiative was named. And, as it also turns out, the Simon Initiative hosted this year’s EEP summit and made some news in the process by contributing $100 million worth of open source software that they use in their research and practice of…wait for it…learning engineering.
Here’s a slide that they used in their talk explaining what the heck learning engineering is and what they are doing when they are doing it:
(By the way, the videos of all talks from the summit will be posted online, as promised. Please be patient a little longer.)
This post has already run long, so rather than unpacking the slide, I’ll leave you with a question or two. Think about this graphic as representing a data-informed continuous improvement methodology involving multiple people with multiple types of expertise. What would that methodology need to look like? Who would have to be at the table, what kinds of conversations would they have to have, and how would they have to work together?
I’m not suggesting that “learning engineering” is a magical conjuring phrase. But I am suggesting that we need new approaches, new competencies, and likely a new role or two if we are going to get to the next updraft.
Lynn Zingraf says
Michael…you talk about Canvas and their embrace of open standards, but have you forgotten ANGEL Learning’s impact in this area? ANGEL was years ahead of Canvas in the adoption of IMS standards. In fact, if you recall, the company’s tag line was “Simple. Powerful. Open.” I worked there with Ray Henderson, who was a strong proponent of LMS openness and the implementation of IMS standards within the LMS. He then brought that philosophy to Blackboard after they acquired ANGEL Learning, changing that company’s perspective to embrace IMS standards as well. In fact, Phill Miller (also from ANGEL), Blackboard’s Chief Learning & Innovation Officer, now sits on the IMS board, as does Ray, who has returned to a board seat as well. IMHO, this is some IMS history that deserves some recognition in your coverage.
Oliver Heyer says
Michael, thanks for this thoughtful piece. A good retrospective on IMS and analysis of its future challenges. I wanted to offer a different take from what may be my narrow R1 perspective on the motivations of the data lake engineers, who in the face of an immature and uncertain “learning sciences” domain just keep fillin’ ’er up until the ground shifts a bit more. The learning analytics panel you refer to might have been better framed another way. Learning or learner data are at this stage more of an impetus for re-thinking campus data strategies than they are the raw materials of analysis for better instructional design. Right now, the most important people I’m talking to are not faculty or even our own IDs or Center for Teaching & Learning — instead, it’s our EDW folks. The current promise lies in easily pooling and making actionable a wider range of campus data, allowing us to think about how to operate and innovate more as a single business (the API model for sharing enterprise data only makes the silos more efficient). The concomitant rise of the computing power AWS and GCP provide, together with the notion of learning analytics — however elusive and difficult it may be to show value today — puts ed tech in a position to drive the conversation. This seems appropriate when your business is education.
Michael Feldstein says
Lynn, you make a fair point about ANGEL’s leadership; they do deserve a place in this story. My reason for putting Instructure in was less about them being a cause than an effect in this case. They took off like a rocket because they came in having seen and taken advantage of all the groundwork that had already been laid. And you’re right that some of that work was done by Ray, Phill, Dave Mills, and other ANGEListas.
Oliver, I think you also make a fair point that building the municipal plumbing capacity is important if you eventually want productive uses like sinks and toilets. They tried to do the reverse in England, and the result was cholera. But it’s not either/or, and my concern is that, as I sampled the panels across the conference, the prevailing approach seems to be to go ahead and do the one while waiting for the other to somehow…happen. At some point, you’re going to run out of lift-and-shift work.
Rishi Raj Singh Gera says
Thanks for the great insights, Michael!