While at EDUCAUSE last week, we heard from several sources that the IMS is walking away from LTI 2.0. I’m not sure what term the organization is using to characterize what they’re doing; there’s nothing on the public web site that I can find at the moment. But the reality is that they are no longer encouraging the adoption of LTI 2.0 and are actively investing their energy in developing the 1.x branch, starting with something that they’re calling “LTI Advantage.” (More on that in a bit.)
To be clear, I believe this is a good decision, and I also believe the fact that they felt the need to do so is not particularly scandalous. Everybody makes mistakes. The important thing is to learn from them. So, with that in mind, and with the IMS quarterly meeting coming up this week, I thought it might be useful to write a post-mortem that could potentially be of use going forward.
Interoperability Standards as Multilateral Trade Deals
One of the biggest problems the IMS faces is that a lot of the participants on the specification committees don’t fully understand the big picture of how standards adoption works in the real world. They are mostly technologists with a sprinkling of product managers. They usually have the best of intentions. But they do not have the right roles to see the business ramifications of spec development clearly, and they very often do not get adequate guidance from the stakeholders in their respective organizations who do have the right roles.
First and foremost, interoperability specifications are a lot like multilateral trade agreements. They are agreements that everybody will operate according to certain rules under certain circumstances. The fact that these agreements are written in engineering language and implemented in software code doesn’t change the fact that they are multi-party treaties. One major implication follows from this: organizations will not adopt the standard—or “sign the treaty”—unless they believe the benefits will outweigh the costs. If a specification is going to be broadly adopted, then it needs to be designed so that all the adopting parties will see direct benefit. Remember, every minute of developer time spent implementing a standard could have been spent developing a feature or fixing a bug instead. If their customers don’t care about the benefits of the standard and developers don’t see internal benefits (like reducing the time they have to spend on one-off integrations), then the rational decision for product or project owners is not to implement the standard.
Such a decision does not mean that all progress grinds to a halt. In the trade world, there are many more bilateral trade agreements than there are multilateral trade agreements. In the software world, the equivalent to a bilateral agreement would be a proprietary integration. Many companies today that want to provide richer integration with another platform—especially with an LMS—will use the multilateral agreement that is LTI 1.x as a baseline and extend it with the bilateral agreements of proprietary integrations. This is as it should be. Not every integration needs to be standards-based. In fact, a lot of innovation happens beyond the edge of the standard as software developers come up with new ideas about how their products can work together. They hammer out a bilateral trade agreement to get that new idea implemented and to gain a competitive edge for a while. Eventually, enough products may choose to implement the same pattern that it makes sense for everybody to adopt the same way of integrating so that developers don’t have to build slightly different versions of the same integration over and over.
Interoperability standards, being multilateral trade agreements, are rarely innovative or sexy if they are done right. Once in a while, a working group may come up with a new spin that is generative. But this is always a risky proposition. The technologists and product designers on these working groups, being creative people, want to come up with cool stuff. But “cool” isn’t what drives their employers to adopt. There always needs to be a cost/benefit analysis for each potential adopter. If that analysis doesn’t look good, then no amount of coolness will matter.
As far as I can tell from the outside, this is precisely what went wrong with LTI 2.0.
Please Pass (on) the SOAP
Back in the early days of service-oriented architecture (SOA) in software engineering, everybody was using a complex XML-based protocol called SOAP. The idea was that everybody had data to pass around, and if there were discrete services that could be called for that data, then we could have a lot more interoperability. SOAP was considered secure (or securable, anyway), so it could be used for sensitive data. SOA advocates envisioned a world in which all software provided lots of services for other software to use, creating rich opportunities for interoperability. In fact, they thought there would be so many services that it would be impractical to connect them all up by hand. So they invented something called a service bus that helped each piece of software automagically discover the services available to it from other software in the system and also to offer up its own services to any other software that needed them.
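To make the ceremony concrete, here is a minimal sketch of what a single SOAP call looked like in that era, written in Python. The endpoint and the getRoster operation are hypothetical, invented purely for illustration; the point is how much envelope there is around even a trivial request.

```python
import requests

# Hypothetical endpoint and operation, for illustration only.
ENDPOINT = "https://example.edu/services/RosterService"

# Every request is wrapped in an XML envelope, even a simple lookup.
soap_request = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header/>
  <soapenv:Body>
    <getRoster>
      <courseId>HIST-101</courseId>
    </getRoster>
  </soapenv:Body>
</soapenv:Envelope>"""

response = requests.post(
    ENDPOINT,
    data=soap_request,
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "getRoster",  # required by SOAP 1.1
    },
)
print(response.text)  # the reply is another XML envelope to parse
```

And that is the easy part. In practice there was also a WSDL contract to generate client stubs from, plus WS-Security headers for the “securable” claim, which is where much of the complexity and consulting expense lived.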
As you might imagine, this all turned out to be horribly complex and expensive. I happened to work at Oracle at the time that all this peaked, so I had a ringside seat. The company was selling hugely expensive service management software that was often implemented by hugely expensive consultants. Some of that stuff took off and is in use in some places—usually big IT shops like money-center banks. But it never got the ubiquitous adoption that the enthusiasts expected because it was too much work for not enough payoff. There just weren’t that many services to justify the management costs.
That was roughly ten years ago. Fast forward seven years from then. What was LTI 2.0, in essence? An attempt to implement the same idea using the more modern REST style of web programming. It was going to have a strong security model and service discovery. Predictably, it was really hard to implement. A senior development manager from a major LMS vendor—somebody who has been in the industry and involved with the standards process for a very long time—said that LTI 2.0 was by far the most complex standard he has ever had to implement. If the major LMS providers struggled to implement it, then what are the chances that lots of small LTI-compatible tool makers will implement it? To answer that question, start from zero and subtract a significant number.
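To give a flavor of where that complexity came from, here is a heavily simplified sketch of the LTI 2.0 registration handshake from the tool’s side. It elides the OAuth signing and nearly all of the Tool Proxy schema, so treat it as an illustration of the flow rather than a faithful implementation.

```python
import requests

def register_tool(tc_profile_url: str) -> str:
    """Sketch of the LTI 2.0 Tool Proxy registration flow (simplified)."""
    # Step 1: fetch the Tool Consumer Profile, a JSON document describing
    # every service and capability the LMS offers.
    profile = requests.get(tc_profile_url).json()

    # Step 2: find the Tool Proxy registration service in the profile.
    proxy_service = next(
        s for s in profile["service_offered"]
        if "application/vnd.ims.lti.v2.toolproxy+json" in s.get("format", [])
    )

    # Step 3: build a Tool Proxy, a JSON contract declaring which of the
    # LMS's capabilities this tool wants to use. A real one runs to
    # hundreds of lines; this is a bare stub.
    tool_proxy = {
        "@context": "http://purl.imsglobal.org/ctx/lti/v2/ToolProxy",
        "@type": "ToolProxy",
        "tool_consumer_profile": tc_profile_url,
        "tool_profile": {"...": "..."},       # the tool's own capabilities
        "security_contract": {"...": "..."},  # keys, services, scopes
    }

    # Step 4: POST the Tool Proxy back to the LMS. In real life this
    # request must be OAuth-signed with the registration credentials.
    resp = requests.post(proxy_service["endpoint"], json=tool_proxy)
    return resp.json()["tool_proxy_guid"]  # the LMS-issued identity
```

Every tool maker had to get this dance right—plus the signing, plus the full schema—before a single student ever clicked a link.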
But I don’t think LTI 2.0 would have had a high chance of success even had it been less complicated, because it wasn’t designed to be a multilateral trade agreement. What problem would a secure service bus with discoverability solve for each of the treaty signatories? The LMS companies do get a clear benefit. Many of their customers have LMS system administrators who spend their days hooking up tools for individual instructors. The LMS gets most of the blame for this situation, not because it’s primarily the vendor’s fault but because customers perceive it as a weakness of the product they are using. Automagic integrations would make some of their key stakeholders very happy.
On the other hand, if you’re a tool vendor, you usually only get blamed for the challenges of one integration: your tool into the customer’s LMS. And even then, the LMS vendor gets most of the heat. At the same time, hardly anybody is passing data to you, and it can be a struggle to even get the LMS vendors to accept the data that you want to give them. So LTI 2.0 offered most tool makers a lot of pain for no obvious benefit.
The people at the negotiating table for LTI 2.0 were apparently not the right people to negotiate a trade agreement. They were the right people to think of something really cool. For a standard to work, you need the former and want the latter, but only if the people thinking of cool stuff are in close communication with their colleagues who are making the cost/benefit decisions.
Now, I believe there is an opportunity to get to the cool world that the LTI 2.0 working group envisioned. But to get there, you need to flip the script.
Services First
Let’s try another analogy for a moment. In the early days of the telephone, human operators connected every call manually. There was interoperability in the sense that all telephones operated the same way over the network of wires and switchboards, but each individual connection between callers had to be hand-configured. In the early days, that was fine. It made sense. There were few enough callers that it was worth the cost of the operators. But eventually, the number of phone connections grew to the point where hiring humans to manually wire connections was no longer viable for anybody. Call connection times grew, as did service provider costs. At that point it made economic sense to create an automated switchboard.
LTI 2.0 was an attempt to invent a very fancy automated switchboard at a time when there are still hardly any callers. Once we reach the point where many tool providers are offering multiple data services and, in some cases, also receiving them, then they will see direct benefits from a secure and automated service discovery mechanism. At that point, the negotiations can begin regarding feature richness versus cost of implementation.
To get there, the IMS needs to flip the script and start by creating a world that is rich enough in services that people need to care more about managing them all. The IMS has made a good (re)start with LTI Advantage, which just adds roster provisioning, grade return, and deep linking to LTI. These are basic hygiene needs. Roster provisioning means any tool can find out from the LMS who is in the class and what their roles are, which a lot of tools need and which is a step up from the way LTI passes user information along now. Deep linking enables the tool provider to give the LMS links that support single sign-on to specific places within the tool, and grade return enables tools to return multiple grades to the LMS. Lots of apps would find these capabilities handy.
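To give a flavor of what roster provisioning looks like from the tool’s side, here is a rough sketch assuming the draft membership service. The URL and token below are placeholders, and since the LTI Advantage services are still being finalized as of this writing, the media type and field names follow the draft materials and may change.

```python
import requests

# Placeholder values: in practice the LMS advertises the memberships URL
# during the LTI launch, and the token comes from an OAuth2
# client-credentials grant scoped to the roster service.
MEMBERSHIPS_URL = "https://lms.example.edu/api/lti/courses/101/memberships"
ACCESS_TOKEN = "<token from an OAuth2 client-credentials grant>"

resp = requests.get(
    MEMBERSHIPS_URL,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        # Draft media type for the membership container; check the
        # current spec before relying on it.
        "Accept": "application/vnd.ims.lti-nrps.v2.membershipcontainer+json",
    },
)

# Each member record carries an opaque user ID plus LTI roles, which is
# what most tools need to set up their own class lists.
for member in resp.json()["members"]:
    print(member["user_id"], member["roles"])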
I’ve heard through the grapevine that the next challenge is improving security. Obviously, securing student data is critical. There are a lot of data sharing services that shouldn’t be offered until that security can be guaranteed. (I won’t comment on the security of rostering and grade information in LTI Advantage because I don’t know anything about it, but obviously that is one very sensitive area where schools should be asking questions about security.)
Once security is in place, the next step should be to focus on generating enough call connections to keep the switchboard operator busy. I have been arguing for some time that Caliper should be used as a data exchange standard between apps, operating through the LTI window. This could work for bilateral agreements. For example, let’s say that you want to integrate a blog with an LMS so that posts with a certain tag could be automagically submitted upon publication for a particular assignment. Let’s further suppose that the IMS has not yet developed a standard, a multilateral trade agreement, for that kind of integration. If LTI and Caliper provide tools that make developing that integration easy enough, then the two integrating parties could do the work in the style of an official integration, saving time by drawing on the LTI and Caliper infrastructure that’s already in place. If enough other tool makers become interested in the integration, then it could be submitted for ratification as an official extension of the standard. This only works if designing and implementing a new Caliper profile is easy. But if it’s done right, then we should see the number of services explode.
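To make the blog example concrete, here is a rough sketch of what the blog tool might emit, assuming a Caliper-style JSON envelope modeled on the public Caliper samples. The event store URL, the sensor ID, and the vocabulary for a blog-submission profile are all hypothetical; defining that profile is exactly the work the two integrating parties would be doing.

```python
import uuid
from datetime import datetime, timezone

import requests

# Hypothetical event store endpoint; the LMS (or a listener it designates)
# would expose this and issue the blog tool a sensor API key.
EVENT_STORE = "https://lms.example.edu/caliper/events"

now = datetime.now(timezone.utc).isoformat()

# One event: a student's tagged blog post, submitted for an assignment.
# The profile defining this event is exactly what would need to be
# hammered out, first bilaterally and later as a standard extension.
event = {
    "@context": "http://purl.imsglobal.org/ctx/caliper/v1p1",
    "id": "urn:uuid:" + str(uuid.uuid4()),
    "type": "Event",
    "actor": {"id": "https://blog.example.com/users/42", "type": "Person"},
    "action": "Submitted",
    "object": {
        "id": "https://blog.example.com/posts/essay-one",
        "type": "Document",
        "name": "Essay One",
    },
    "eventTime": now,
}

# Caliper wraps events in an envelope identifying the emitting sensor.
envelope = {
    "sensor": "https://blog.example.com/sensors/1",
    "sendTime": now,
    "dataVersion": "http://purl.imsglobal.org/ctx/caliper/v1p1",
    "data": [event],
}

requests.post(
    EVENT_STORE,
    json=envelope,
    headers={"Authorization": "Bearer <sensor-api-key>"},  # placeholder
)
```

If emitting something like this were a day of work rather than a month, the bilateral integrations would pile up quickly.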
At some point, it will become obvious to all parties that they need autodiscovery to manage all these integrations. Then and only then will it make sense to come up with a hopefully simpler implementation of service discovery for the next run at LTI 2.x.
Open It Up
To reiterate, the path to the next era of interoperability is to make it easy for many developers to build bilateral integrations that draw on the building blocks of the standards and are easy to incorporate as extensions to those standards. Some of this involves prioritizing the work on LTI and Caliper to make such integrations easy. But it will also likely require the IMS to consider changing its policies in ways that will make some stakeholders uncomfortable.
A little history is in order here. A bit more than a decade ago, the IMS nearly died. It wasn’t generating enough revenue to sustain the work. While IMS Global is a non-profit, it still has salaries to pay, rent to cover, and so on. Operating it is not free. One of the steps the organization took to save itself from potential insolvency was to put a lot of the work behind a paywall. I don’t like it, but I get it. And it worked. The IMS appears to be much healthier now and has produced some of its best work in a very long time. Life is about trade-offs.
It may be time to consider a new trade-off. Just as getting the telephone to take off required a network effect—the value of the network increases exponentially with the number of people on it—the value of a world of standards-based services increases exponentially with the number of services in it. To get there, it’s time to at least consider lowering the paywall. The IMS needs to create an environment with very low barriers to creating standards-compatible integrations. A membership fee, however reasonable it may seem to the membership organization, is a barrier. The IMS leadership should think creatively about how to lower this barrier while maintaining the financial stability of the larger organization.
The Nub of It
To sum up, here’s what I think the IMS should do to recover from the LTI 2.0 misstep and foster a step-function change in interoperability:
- Promote LTI Advantage and build from there.
- Focus next on creating a simple security model that is trustworthy for common use cases.
- Focus Caliper efforts on turning it into a quick, easy-to-use method for creating data interoperability extensions that take advantage of the existing LTI infrastructure.
- Make a giant push to get developers to build their bilateral integrations using the LTI and Caliper infrastructure, whether or not they submit those integrations as extensions to the standard. This should explicitly include re-examining policies about giving non-members early access to the work being done inside the IMS in cases where such access will increase the velocity of new Caliper-compatible service generation.
- Wait for demand to build on both the provider and consumer sides before attempting another run at discoverability.