The IMS has announced the initial public release of something they call Caliper, which they characterize as a learning analytics interoperability framework. But it’s actually much, much more than that. In fact, it represents the functional core of something that my SUNY colleagues and I used to refer to as a Learning Management Operating System (LMOS), and is something that I have been hoping to see for eight years, because it promises to resolve the tension between the flexibility of lots of separately developed, specialized learning tools and the value and convenience of an integrated system.
Let’s take a peek at the framework to see why I’m so hopeful about it. But before we do that, you should fasten your seat belts and strap on your aviator goggles. It’s going to get geeky in here.
The LMOS
Back in 2005, when I worked at the SUNY Learning Network, some colleagues and I were asked to evaluate the options for the next SUNY-wide LMS. It is important to understand just how diverse SUNY is. There are 64 campuses in the system, ranging from tiny rural Adirondack Community College to giant urban Suffolk County Community College to R1 universities like SUNY Stony Brook to specialty colleges like the Fashion Institute of Technology and the SUNY College of Optometry. These schools have radically different teaching needs from each other. We concluded that no LMS that existed at the time could serve all the needs of this diverse group of institutions equally well.
Now, 2005 was the peak of the Web 2.0 hype cycle, which meant that it was also the beginning of the “LMS is dead” meme. Creative, motivated teachers were starting to do really good online education outside of the LMS using tools like blogs and wikis. But our job at the SUNY Learning Network was to help campuses grow their online education programs at scale, and it was clear to us that a majority of faculty simply did not have the skills (or time, or passion) to cobble together decentralized tools and incur the extra management required to run a class that way. Furthermore, with learning analytics in its infancy, our newborn hope of actually being able to gather enough data on student behavior to learn from it and help students achieve their goals would never come to fruition in a radically decentralized environment. There would be no way to get all the data into one place to analyze it.
To solve this dilemma, we proposed a concept that we called the Learning Management Operating System. Like an operating system on a desktop computer, it would offer low-level services upon which many specialized applications written by many different developers could run and operate. Patrick Masson and I articulated the educational imperative for such a system in an article for eLearn Magazine called “Unbolting the Chairs,” which started with the following argument:
In the physical world, it goes without saying that not all classrooms look the same. A room that is appropriate for teaching physics is in no way set up for teaching art history. A large lecture hall with stadium seating is not well-suited to a small graduate seminar. And even within a particular class space, most rooms are substantially configurable. You can move the chairs into rows, small groups, or one big circle. You can choose to have a projection screen or a whiteboard at the front of the room. You can bring equipment in and out. Most of the time, we take these affordances for granted; yet they are critical factors for teaching and learning. When faculty members don’t have what they need in their rooms, they tend to complain loudly.
The situation is starkly different in most virtual classrooms. In the typical Learning Management System (LMS), the virtual rooms are fairly generic. Almost all have discussion forums, calendars, test engines, group work spaces, and gradebooks. (The Edutools Web site lists 26 LMSs that have all of these features.) Many have chat capabilities and some ability to move the chairs around the room using instructional templates. (Edutools lists 12 products with these additional capabilities.) Beyond these common features, LMSs tend to differentiate themselves with fine-grained features. Does the chat feature have a searchable archive? Can I download the discussion posts for offline reading? These features may be very useful but they are also fairly generic in the sense that they are merely enhancements of general-purpose accoutrements that already exist. Our virtual classrooms may be getting smarter, but they are still pretty much one-size-fits-all. They aren’t especially tailored to teach particular subjects to particular students in a particular way.
This is not as it should be. Virtual classrooms should be more flexible than their physical counterparts rather than less so. Do you teach art history? Then you need an image annotation tool. But probably a different one than the image annotation tool needed to teach histology. Foreign language teachers may want voice discussion boards to check student accents. Writing teachers should have peer editing tools. History teachers should have interactive maps. And so on.
Granted, some of these applications exist today and can be included in an LMS. But there are not nearly as many of them as there can and should be. We contend that the current technical design philosophy of today’s Learning Management Systems is substantially retarding progress toward the kind of flexible virtual classrooms that teachers need to provide quality education. In order to have substantial development of specialized teaching tools at an acceptable rate, LMSs need to be designed from the ground up to make development and integration of new tools as easy as possible.
We also recommended to SUNY that the system should build an LMOS, and we set about trying to define what that would mean. One central architectural concept that we worked with was something that we called a “service broker.” The basic idea is that tools would plug into it and share information with other tools. (One commenter helpfully pointed out that a more appropriate term for this idea was actually a service bus.) I wrote a series of blog posts trying to unpack the idea, including one that described integrating an external blog tool into an LMS environment. The gist of the scenario I described was as follows:
- The service broker would take the RSS feed as input.
- There would be some sort of single sign-on mechanism to verify that the author of the blog is the same student that the LM(O)S knows about.
- The LMS would be able to publish class and assignment information which the blog would be able to read as post categories.
- When a student published a blog post with the appropriate class and assignment categories, the broker would pick it up and make it available to other applications.
- The activity tracker would note that the student had submitted a blog post blah on date blah for assignment blah in class blah.
- The course grade book would add a line item for the student’s submission for the assignment and display the text of the post.
- An aggregator in the course space would display the blog posts from various students for the assignment.
- Later, an ePortfolio app could ask the grade book for the student’s blog post along with the instructor’s grade and comment.
The idea was that an LMOS service broker would have different adapters to accept data from different kinds of learning applications and pass that data to whatever other apps needed it. These adapters would ideally be standards-based so that it would be easy to plug in new applications from different sources.
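If it helps to see the shape of the thing in code, here’s a minimal sketch of that publish-and-subscribe plumbing. The class names, event names, and payload fields are all made up for illustration; no real LMOS ever exposed this API.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Event:
    kind: str       # e.g. "blog_post_submitted"
    payload: dict   # e.g. {"student": "Ann", "assignment": "...", "date": "..."}


class ServiceBus:
    """Routes events from source adapters to any tool that subscribes."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[Event], None]]] = defaultdict(list)

    def subscribe(self, kind: str, handler: Callable[[Event], None]) -> None:
        self._subscribers[kind].append(handler)

    def publish(self, event: Event) -> None:
        for handler in self._subscribers[event.kind]:
            handler(event)


bus = ServiceBus()

# The gradebook and activity tracker subscribe without knowing anything about blogs.
bus.subscribe("blog_post_submitted",
              lambda e: print("Gradebook: add a line item for", e.payload["student"]))
bus.subscribe("blog_post_submitted",
              lambda e: print("Tracker: log a submission dated", e.payload["date"]))

# The blog adapter reads the student's RSS feed and publishes an event onto the bus.
bus.publish(Event("blog_post_submitted",
                  {"student": "Ann",
                   "assignment": "Whorf hypothesis assignment",
                   "date": "2013-10-15"}))
```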
In the end, SUNY decided that it did not have the risk tolerance to build a new platform. And to be honest, it would have been challenging to pull off with the technology of the time. But eight years later, I still think the vision was a good one. And I now think that Caliper has a chance of fulfilling it.
Have you strapped on those aviator goggles yet? OK. Here we go.
Triple Your Pleasure, Triple Your Fun
Of course, we weren’t the first people to think about creating a web of data that could link disparate applications. Big shots like Tim Berners-Lee had been talking about a “semantic web,” where sites could talk to each other and automagically interoperate, since the late 1990s. One of the foundational technologies in the semantic web effort was something called the Resource Description Framework, or RDF. And a core idea in RDF was something called a triple. A triple can really be boiled down to a plain English sentence structure: a subject, a phrase that characterizes a relationship (the predicate), and an object. Here are a few examples:
- Joe | is the author of | http://www.themusicalfruit.com/
- http://www.themusicalfruit.com/ | is about | legumes
Note that there is a kind of transitive property possible here. If Joe is the author of TheMusicalFruit.com and TheMusicalFruit.com is a website about legumes, then we can infer that Joe is the author of a website about legumes. Theoretically, you can create long chains and complex clusters of these inferences in something that a mathematician might call a graph.
Let’s look at some triples that are relevant to the student blog example above:
- Ann | is a student in | Intro to Linguistics
- “Whorf hypothesis assignment” | is an assignment in | Intro to Linguistics
- “Beam Me Up, Whorf” | is a blog post by | Ann
- “Beam Me Up, Whorf” | is a homework submission for | “Whorf hypothesis assignment”
Using triples like these, you could accomplish a lot of what I described in that use case in 2005. You could see, for example, that “Beam Me Up, Whorf” is Ann’s submission for the Intro to Linguistics assignment called “Whorf hypothesis assignment.”
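For the code-minded, here’s a toy sketch of those same triples as plain (subject, predicate, object) tuples, with a two-hop query that chains the relationships together. Everything in it comes straight from the made-up example above.

```python
# Triples as plain (subject, predicate, object) tuples -- nothing fancier than that.
triples = {
    ("Ann", "is a student in", "Intro to Linguistics"),
    ("Whorf hypothesis assignment", "is an assignment in", "Intro to Linguistics"),
    ("Beam Me Up, Whorf", "is a blog post by", "Ann"),
    ("Beam Me Up, Whorf", "is a homework submission for", "Whorf hypothesis assignment"),
}


def objects_of(subject, predicate):
    """One hop through the graph: everything related to `subject` by `predicate`."""
    return {o for (s, p, o) in triples if s == subject and p == predicate}


# Chain two hops: find Ann's posts, then find which assignments they were submitted for.
ann_posts = {s for (s, p, o) in triples if p == "is a blog post by" and o == "Ann"}
for post in ann_posts:
    for assignment in objects_of(post, "is a homework submission for"):
        print(f"{post!r} is Ann's submission for {assignment!r}")
```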
Ultimately, RDF never took off, for a variety of reasons. But triples are extremely useful and have been employed in a variety of other technologies, including both IMS’s Caliper and ADL’s Tin Can API (xAPI). They provide a grammar for the semantic web.
Grammar Aren’t Everything
The great thing about triples is that they can express just about any relationship. The bad thing about triples is that they can express just about any relationship. Consider the following triple:
- Fribble | is a frogo of | Framizan.
This is a grammatically valid triple, but it tells us nothing, because we don’t know what the words mean. OK, it’s true, I cheated by using made up words. Let’s see if we can make the situation clearer by adding some English:
- Fribble | is a parent of | Framizan.
Huh. Not much better. Are Fribble and Framizan people? Are they subfolders in a file directory? It turns out that human languages aren’t terribly precise. And if you want disparate computer programs to be able to understand each other without constant human intervention, then you need to be very precise. In addition to a grammar, you need a lexicon. Or, in computer terms, you need an entity model. You need to tell the computer things like this:
- There is an entity type that we call a “person.”
- A person has a first name and a last name.
- A person (in your world) has a unique ID.
- A person (in your world) will always have an email address.
- A person (in your world) might have a phone number.
- Etc.
With this, we can have the computer say something like:
- The person entity with ID “Ann” | has relationship “is a student in” | to the class entity with ID “Intro to Linguistics”
That may sound clunky to you and me, but it’s poetry to a machine.
This is essentially what Caliper adds to a triple structure. It adds a collection of “entities,” or things, that all interoperating computers agree have certain properties. And that, my friends, is what makes time travel work. With both a grammar and a lexicon, learning applications can start talking to each other.
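Here’s a rough sketch of the idea, using hypothetical entity types and property names rather than the actual Caliper schema (Ann’s last name, email, and IDs are invented, too):

```python
# Hypothetical entity types -- NOT the real Caliper schema, just the general idea
# that both ends of a typed triple are entities with agreed-upon properties.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Person:
    id: str                      # in this made-up model, always required and unique
    first_name: str
    last_name: str
    email: str                   # always present in this model
    phone: Optional[str] = None  # might be present


@dataclass
class CourseSection:
    id: str
    title: str


@dataclass
class Relation:
    subject: object   # an entity, not a bare string
    predicate: str    # e.g. "is a student in"
    obj: object


# "The person entity with ID 'ann-001' is a student in the class entity 'ling-101'."
ann = Person(id="ann-001", first_name="Ann", last_name="Lee", email="ann@example.edu")
intro = CourseSection(id="ling-101", title="Intro to Linguistics")
enrollment = Relation(subject=ann, predicate="is a student in", obj=intro)
print(enrollment)
```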
And it turns out that the IMS had a bunch of entity definitions lying around already from their previous standards work. For example, the LIS standard (which is designed to integrate LMSs with SISs) has definitions for a person, a course section, and an outcome. What will be interesting to see is the development of new entities for learning activity types (beyond those that are already specified in QTI). For example, what would we want to know about a reading? A video? A simulation? A note-taking app? We could generate a list of such things pretty easily, and for each thing that we want to know about, we could generate a short list of what we want to know about it. That short list would be the core of the entity model for the thing, and it would be the information that developers would have to expose in their apps in order to be able to plug into Caliper. The downside of adding an entity model is that it’s more work for developers to implement, increasing the chances that any particular developer won’t do it. The upside is more assurance of interoperability and a richer information flow.
So again, to plug into the Caliper LMOS, an app would have to be able to read and/or write some subset of these entities and understand the triple relationships. It would also have to establish a communication channel. Luckily, LTI essentially already does that. The LTI standard was always intended as a kind of wrapper. It provides single sign-on between learning apps and then enables the two apps to talk to each other. Right now, the standards-based communication over LTI is pretty limited. But Caliper would open up a whole new world of possible communications.
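Just to illustrate the general shape, here’s a hypothetical sketch of a tool emitting a learning event as JSON over HTTP after an LTI launch. To be clear, this is not the actual Caliper Sensor API; the endpoint, field names, and token handling are all invented for illustration.

```python
# Purely illustrative: the general shape of a tool emitting a learning event as
# JSON over HTTP after an LTI launch. This is NOT the actual Caliper Sensor API;
# the endpoint URL, field names, and token handling are all invented.
import requests

event = {
    "actor": {"type": "Person", "id": "ann-001"},
    "action": "Submitted",
    "object": {"type": "BlogPost",
               "id": "beam-me-up-whorf",
               "assignment": "Whorf hypothesis assignment"},
    "eventTime": "2013-10-15T14:32:00Z",
}

response = requests.post(
    "https://analytics.example.edu/events",                       # hypothetical collector
    json=event,
    headers={"Authorization": "Bearer <token-from-lti-launch>"},  # placeholder credential
    timeout=10,
)
response.raise_for_status()
```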
Analytics
It’s pretty easy to see why the IMS has latched onto Caliper as an analytics interoperability standard. The graph lets you crawl relationships among pieces of data to get at the connections you want. Assuming that entities have reasonable metadata (creation dates, for example), we can ask a bunch of questions in the blogging example above (a sketch of one such query follows the list):
- Has Ann submitted all her blog post homework assignments for Intro to Linguistics?
- How close to an assignment deadline does Ann typically complete her blog post assignments?
- How well, on average, does Ann do on her blog post assignments?
- Does Ann have a pattern of completion or performance in her blog posts across all of her classes?
- What is the class average on blog post homework assignments?
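As promised, here’s a toy query that answers the first question by walking the triples from the earlier sketch. The second assignment is invented so there’s something to find; all of the IDs and predicates are still made up.

```python
# A toy answer to "has Ann submitted all her blog post homework?" using the
# same triple representation as before.
triples = {
    ("Ann", "is a student in", "Intro to Linguistics"),
    ("Whorf hypothesis assignment", "is an assignment in", "Intro to Linguistics"),
    ("Field notes assignment", "is an assignment in", "Intro to Linguistics"),
    ("Beam Me Up, Whorf", "is a blog post by", "Ann"),
    ("Beam Me Up, Whorf", "is a homework submission for", "Whorf hypothesis assignment"),
}

assignments = {s for (s, p, o) in triples
               if p == "is an assignment in" and o == "Intro to Linguistics"}
ann_posts = {s for (s, p, o) in triples
             if p == "is a blog post by" and o == "Ann"}
submitted = {o for (s, p, o) in triples
             if p == "is a homework submission for" and s in ann_posts}

missing = assignments - submitted
print("Ann still owes:", missing or "nothing -- all submitted")
```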
None of this sounds particularly earth-shattering until you remember that all of this data is being gathered from the students’ own weblogs. This is not software provided by the LMS vendor. It may not even be software hosted or contracted by the university. This could be from the students’ own blogs, with a plugin installed (which in WordPress, at least, is super simple to do).
Let’s see what happens if we extend the learning graph one step further. Suppose we have a relationship in the triple that we call “is a response to.” If you write a post and I write a comment on your post, then my comment “is a response to” your post. If you write a post and I write a post on my own blog referring to yours, my blog post also “is a response to” your blog post. (We might also create an entity called “comment,” so that we can distinguish between a response that is a blog post on another site and a response that is a comment on the same site: “My comment | is a response to | your blog post.”) Interestingly, most blogs have a feature called “pingbacks,” which detects when another blog has pointed to your blog post by embedding a URL. Suppose that our Caliper WordPress plugin translates that pingback into a triple that can be read by any Caliper-compliant system (there’s a sketch of that translation after the list below). Now we can start asking questions like the following:
- How many responses did Ann’s blog posts for Intro to Linguistics generate?
- Which students in the class write the posts that generate the most responses?
- What percentage of responses occurred as comments on the same page as the blog posts, and what percentage were blog posts on commenters’ own sites?
- Which classes have the most activity of students responding to other students?
- What is the correlation between levels of student responses to each other and outcomes?
- What is the moment in the course when students started responding more to each other and less to the teacher?
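Here’s a conceptual sketch of that pingback-to-triple translation. A real implementation would live inside a WordPress plugin (in PHP); I’m sticking with the same toy Python as before, and the field names on the pingback and comment records are invented for illustration.

```python
# Conceptual sketch only: turn a pingback or a comment into an
# "is a response to" triple that any graph-aware tool could consume.

def pingback_to_triple(pingback):
    """A pingback says: the post at `source_url` linked to the post at `target_url`."""
    return (pingback["source_url"], "is a response to", pingback["target_url"])


def comment_to_triple(comment):
    """A comment on the same site is a response to the post it was left on."""
    return (comment["comment_url"], "is a response to", comment["post_url"])


triples = set()
triples.add(pingback_to_triple({
    "source_url": "https://janes-blog.example.org/re-beam-me-up-whorf",
    "target_url": "https://anns-blog.example.org/beam-me-up-whorf",
}))
triples.add(comment_to_triple({
    "comment_url": "https://anns-blog.example.org/beam-me-up-whorf#comment-3",
    "post_url": "https://anns-blog.example.org/beam-me-up-whorf",
}))

# How many responses has Ann's post generated so far?
responses = [s for (s, p, o) in triples
             if p == "is a response to"
             and o == "https://anns-blog.example.org/beam-me-up-whorf"]
print(len(responses), "response(s) so far")
```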
As the size of the graph grows, the number of questions you can answer grows exponentially.
From Learning Management Operating System to Learning Cloud
But let’s suppose that you want to do more than gather data on students’ use of blogs in a class that is otherwise managed in a central LMS. Let’s suppose that you want the students’ blogs to be the LMS (for the most part). Suppose you want to build a course like ds106, where all student work happens out on their own blogs, and the hub of the course is really just an aggregation point. Right now, ds106 accomplishes this goal through a Frankenstein’s monster of WordPress plugins and custom hacks. I don’t mean to denigrate the technical work that they’ve done. To the contrary, I’m astonished by what they’ve been able to accomplish with chewing gum and duct tape. Caliper could potentially provide them with better tools for richer integration in a more elegant way. It could, for example, create a visualization of the conversation across the various blogs, and make that visualization clickable so that students could see the thread and then jump directly to the posts involved. In a real way, those disparate blogs would function as one distributed piece of software. Each piece would be independent. Students could use the blogging platform they want on the host that they want. But the data would flow freely, easily aggregatable, sortable, visualizable, and analyzable. Forget about the Learning Management Operating System. The future is the Learning Cloud.
In our eLearn article, Patrick and I referred to a Flickr social markup of the Mérode Altarpiece created by an art history class taught by our friend and colleague Beth Harris. We wrote,
What is the learning object here? Or, to put it another way, what is the locus of educational value? Is it the picture itself? Is it the picture plus the comments of the students? Or is it both of these plus the action potential for students to continue to exchange ideas through the commenting system? A learning object-centric view of the world would place the emphasis on the content, ignoring the value of the ongoing educational dialog as something extraneous. But that view clearly doesn’t allow us to encapsulate the locus of educational value in this case. Sometimes people will try to fudge the difference by tacking the word “interactive” in front of “learning object.” This obscures the problem rather than solving it. “Object” is just a longer word for “thing.” It inherently focuses on artifacts rather than activities. It emphasizes content to be learned rather than the actions on the part of students that lead to learning.
To take a more familiar example, consider the spreadsheet. What is it that you share when you email a spreadsheet to a colleague? Is it the content, the interaction potential, or both? Are you simply sharing a “tabular data object,” or is the potential for the recipient to plug in new data and get new results an inextricable part of the thing we’re calling a “spreadsheet?” There is no one right answer to this question; it is entirely context-dependent. Sometimes what we mean by “spreadsheet” is a set of completed calculations and how they were derived. In this case, content is king. However, at other times a “spreadsheet” means a tool for plugging in new numbers to make calculations and run “what-if” scenarios. Sometimes its locus of value is as a “tabular data object,” sometimes it is as a “tabular data-processing application,” and sometimes it is as an inextricable fusion of the two.
So what is the distinction between a learning object and a learning application? What is the difference between the domain of content (and therefore content experts) and the domain of functionality (and therefore programming experts)? We contend that there is no clean separation of concerns. The world does not divide neatly between functionality packages that can be integrated as Blackboard Building Blocks or WebCT Powerlinks on the one hand, and self-contained content packages that can be tied up in a bow and listed in MERLOT on the other hand. The division between learning objects and learning environments is a false dichotomy. Students need both the functionality and the content—the verbs and the nouns—in order to have a coherent learning experience. They learn when they do things with information. They discuss paintings. They correlate news with its location in the world. They run financial scenarios in a business case study. Consequently, managing the learning content or managing the learning environment in isolation doesn’t get the job done. We need to manage learning affordances. We need to focus on providing faculty and students with a rich array of content-focused learning activities that they can organize to maximum benefit for each student’s learning needs.
That article was published in January 2006. One year later, on January 9th, 2007, Apple unveiled the first iPhone. We live in an appy world now. The LMS is not going away, but neither is it going to be the whole of the online learning experience anymore. It is one learning space among many now. What we need is a way to tie those spaces together into a coherent learning experience. Just because you have your Tuesday class session in the lecture hall and your Friday class session in the lab doesn’t mean that what happens in one is disconnected from what happens in the other. However diverse our learning spaces may be, we need a more unified learning experience. Caliper has the potential to provide that.
Rob Abel says
Whoa, Michael! Way to embrace your inner geek! Some other related links:
- Caliper whitepaper: http://www.imsglobal.org/IMSLearningAnalyticsWP.pdf
- My blog entry about Caliper (not as geeky, I’m ashamed to say): http://www.imsglobal.org/blog/?p=288
- Big analytics summit on Nov 7, open to the public, at Oracle HQ in Redwood Shores, CA, USA: http://www.imsglobal.org/nov2013Oracle.html
Kalpesh Parmar says
Thanks for giving the present LMS a new lifeline by reforming it. I am always looking forward to your amazing posts. Why don’t you write on topics such as analytics dashboards, or the types of student data that need to be tracked in learning analytics tools? Please reply. Thanking you in advance.
peter ming says
I have to agree with Bruce D’Arcus’s sentiment above. This does seem like IMS trying to steal some of the Tin Can traction. Caliper seems to be little more than Tin Can and the Tin Can registry. Could your eLearn article not just have been applied to Tin Can?
Michael Feldstein says
In a way, it could have just as easily applied to RDF, Peter. Triples are powerful, and they were invented by neither IMS nor SCORM.
I know less about Tin Can than I do about Caliper, for no other reason than that I know the folks involved with Caliper and the IMS organization better than I know Rustici and SCORM. It’s not a judgment. The white paper Rob links to above talks about mapping Tin Can to Caliper. Standards interoperability is good. From what I can tell, the main piece that Caliper adds is that entity model. But I’d be interested to hear from both the Caliper and Tin Can folks about whether their models are the same and where they differ.
Peter Ming says
Thanks Michael. I guess what I’m trying to figure out, before we decide to push forward with our own Tin Can & LRS implementation, is whether IMS is telling us that Tin Can’s lack of an entity model means it’s not appropriate for learning analytics, or whether this is indeed just an attempt, as Megan Bowe (now of Knewton) put it, to have #oneringtorulethemall.
Michael Feldstein says
It is fair to worry about spec overlap and inter-organizational politics. And it is also worth noting that the early drafts of both Caliper and TinCan were initially authored by private companies that want to sell implementations of the standards. That said, the two co-authors of Caliper are people that I know and respect a great deal. I can tell you that I do not believe that they have any ill intent. I don’t know the Rustici folks personally, but they have a good reputation backed by a proven track record. I think it best to assume that everybody is acting out of good will until there is evidence to the contrary.
Rob Abel says
Just a brief clarification. Intellify provided the write-up on Caliper and is doing some of the implementation work, but the IMS analytics work came out of an IMS work activity featuring the parties named in the press release, who were working on this before Intellify got involved. Intellify is doing great work on this, but they are not at the “center” of the work (IMS is at the center) in the same way that Rustici is at the center of Tin Can. The Caliper framework and Sensor API are the work of the collective IMS membership (http://www.imsglobal.org/membersandaffiliates.html), and, per some of the comments above, we expect that adoption will be nearly 100% in all the LTI-certified products/platforms (http://developers.imsglobal.org/catalog.html) and beyond as we “undock” these services from LTI (as indicated in the whitepaper). IMS doesn’t wish to compete with Tin Can – as Michael says, it is referenced in the whitepaper. The technical merits are beyond my skill set, but I know the workgroup discussed Tin Can and concluded it was not all that was needed to solve the entire scope of what IMS members need to solve. Another issue is that IMS members obviously want an education-community-managed standard from a viable education standards body like IMS.
Michael Feldstein says
As I said, while I don’t know the Rustici guys, I have nothing but good things to say about them and about the SCORM community. Rob, rather than commenting on other people in other organizations, I think it would be more helpful if you could provide factual detail around the Caliper standards process.
Rob Abel says
Hi Michael – I don’t know the technical details of why the IMS group feels that Tin Can was potentially compatible but not sufficient. I thought the whitepaper was pretty self-explanatory in that regard – as per your comment about IMS members feeling they can indeed define some of the schemas for various categories of tools. The other obvious difference is what I already mentioned regarding LTI and how Caliper is going to build off of that success.
As to Peter Ming’s question I’m happy to put him in contact with some IMS folks who might be able to answer if he contacts me.
I think TinCan and Caliper are probably going after different communities that may intersect at some point. We should all keep an open mind as to how that might occur. At the end of the day IMS is looking to solve education community challenges and we are happy to make use of good work from elsewhere when we can find it and it is permissible.
My comment about Rustici (the company – not the person)/Tin Can was simply to point out a difference in role versus Intellify/Caliper (and not to make a judgement as to right or wrong in terms of approach – good work is good work regardless of where it comes from). Intellify is a great contributor to Caliper but is not at all at the center of its dispersion into the marketplace (as your comments seemed to imply) and should not be portrayed that way – IMS / the IMS members are at that center.
IMS has a lot of opportunities for engagement from all comers. We get lots of questions and inputs on our open forums every day:
http://www.imsglobal.org/community/forum/latesttopics.cfm?forumid=11
and all our meetings are published well in advance on the open web – including the topics and agendas – see reference above to Nov meetings.
Thanks for the interesting post on Caliper!
– Rob
Peter Ming says
Hi Rob – many thanks for your response. Absolutely I’d be keen to have a catch up with one of the IMS folks to get a better understanding.
We are really coming at this from a pretty open angle, trying to understand which is best for us and the colleges we work with. I don’t want to turn Michael’s post into an xAPI vs. Sensor API discussion, but I know a lot of colleagues are following this blog and are in the early stages of xAPI & LRS investigation/implementation. It would be great to get a little more specific detail where you said it’s “not all that was needed to solve the entire scope of what IMS members need to solve.” Is there a specific use case you had in mind where Tin Can just simply wouldn’t work? We may choose to use Tin Can anyway, but it would be good to be aware of the pitfalls we’ll face, or even perhaps more detail on where its lack of an entity model will cause problems for us.
Is one of your main concerns that it is not an education-community-managed standard? I have to say we used ADL’s SCORM for some time and were pleased with the direction they took moving towards RDF.
Many thanks
Rob Abel says
Hi Peter-
I’m just the mindless token manager of IMS 😉 The tech questions you’re asking are good ones – but in IMS it’s the workgroup/members that do the work, not moi. I’m simply repeating what I was told. The way to get answers/make progress is to connect you directly to the technical folks who are doing the work and have some discussions. If you send an email to me at rabel at our domain name, I will connect you directly.
I don’t want to get into all the history of IMS vs SCORM, as I really think all of that is best left in the past; clearly we are entering a new era. I’ll just offer that IMS would have no problem whatsoever backing Tin Can/xAPI if that’s what the IMS members want to do. I think their decision on whether or not to back it would have a lot to do with how it is maintained and evolved, and whether there are any IP issues (ownership) associated with it, as well as the technical merits. It’s a long story, but standards consortia like IMS or W3C (or lots of others) evolve processes for adoption that fit what the members think will work to move the market. Ideally, the “owners” of Tin Can (I don’t know if there are any owners – but whoever has authority over the work) might consider what it takes for the IMS members to want to sustain it. IMS has grown from 50 members to 220 and is very financially secure for the long term.
Anyway, probably a lot more than you bargained for there... but please email me, as we will value your insights.
-Rob