The IMS has announced the initial public release of something they call Caliper, which they characterize as a learning analytics interoperability framework. But it’s actually much, much more than that. In fact, it represents the functional core of something that my SUNY colleagues and I used to refer to as a Learning Management Operating System (LMOS), and is something that I have been hoping to see for eight years, because it promises to resolve the tension between the flexibility of lots of separately developed, specialized learning tools and the value and convenience of an integrated system.
Let’s take a peek at the framework to see why I’m so hopeful about it. But before we do that, you should fasten your seat belt and strap on your aviator goggles. It’s going to get geeky in here.
Back in 2005, when I worked at the SUNY Learning Network, some colleagues and I were asked to evaluate the options for the next SUNY-wide LMS. It is important to understand just how diverse SUNY is. There are 64 campuses in the system, ranging from tiny rural Adirondack Community College to giant urban Suffolk County Community College to R1 universities like SUNY Stony Brook to specialty colleges like the Fashion Institute of Technology and the SUNY College of Optometry. These schools have radically different teaching needs from each other. We concluded that no LMS that existed at the time could serve all the needs of this diverse group of institutions equally well.
Now, 2005 was the peak of the Web 2.0 hype cycle, which meant that it was also the beginning of the “LMS is dead” meme. Creative, motivated teachers were starting to do really good online education outside of the LMS using tools like blogs and wikis. But our job at the SUNY Learning Network was to help campuses grow their online education programs at scale, and it was clear to us that a majority of faculty simply did not have the skills (or time, or passion) to cobble together decentralized tools and incur the extra management required to run a class that way. Furthermore, with learning analytics in their infancy, our newborn hope of actually being able to gather enough data on student behavior to learn from them and help them achieve their goals would never come to fruition in a radically decentralized environment. There would be no way to get all the data into one place to analyze it.
To solve this dilemma, we proposed a concept that we called the Learning Management Operating System. Like an operating system on a desktop computer, it would offer low-level services upon which many specialized applications written by many different developers could run and operate. Patrick Masson and I articulated the educational imperative for such a system in an article for eLearn Magazine called “Unbolting the Chairs,” which started with the following argument:
In the physical world, it goes without saying that not all classrooms look the same. A room that is appropriate for teaching physics is in no way set up for teaching art history. A large lecture hall with stadium seating is not well-suited to a small graduate seminar. And even within a particular class space, most rooms are substantially configurable. You can move the chairs into rows, small groups, or one big circle. You can choose to have a projection screen or a whiteboard at the front of the room. You can bring equipment in and out. Most of the time, we take these affordances for granted; yet they are critical factors for teaching and learning. When faculty members don’t have what they need in their rooms, they tend to complain loudly.
The situation is starkly different in most virtual classrooms. In the typical Learning Management System (LMS), the virtual rooms are fairly generic. Almost all have discussion forums, calendars, test engines, group work spaces, and gradebooks. (The Edutools Web site lists 26 LMSs that have all of these features.) Many have chat capabilities and some ability to move the chairs around the room using instructional templates. (Edutools lists 12 products with these additional capabilities.) Beyond these common features, LMSs tend to differentiate themselves with fine-grained features. Does the chat feature have a searchable archive? Can I download the discussion posts for offline reading? These features may be very useful but they are also fairly generic in the sense that they are merely enhancements of general-purpose accoutrements that already exist. Our virtual classrooms may be getting smarter, but they are still pretty much one-size-fits-all. They aren’t especially tailored to teach particular subjects to particular students in a particular way.
This is not as it should be. Virtual classrooms should be more flexible than their physical counterparts rather than less so. Do you teach art history? Then you need an image annotation tool. But probably a different one than the image annotation tool needed to teach histology. Foreign language teachers may want voice discussion boards to check student accents. Writing teachers should have peer editing tools. History teachers should have interactive maps. And so on.
Granted, some of these applications exist today and can be included in an LMS. But there are not nearly as many of them as there can and should be. We contend that the current technical design philosophy of today’s Learning Management Systems is substantially retarding progress toward the kind of flexible virtual classrooms that teachers need to provide quality education. In order to have substantial development of specialized teaching tools at an acceptable rate, LMSs need to be designed from the ground up to make development and integration of new tools as easy as possible.
We also recommended to SUNY that the system should build an LMOS, and we set about trying to define what that would mean. One central architectural concept that we worked with was something that we called a “service broker.” The basic idea is that tools would plug into it and share information with other tools. (One commenter helpfully pointed out that a more appropriate term for this idea was actually a service bus.) I wrote a series of blog posts trying to unpack the idea, including one that described integrating an external blog tool into an LMS environment. The gist of the scenario I described was as follows:
- The service broker would take the RSS feed as input.
- There would be some sort of single sign-on mechanism to verify that the author of the blog is the same student that the LM(O)S knows about.
- The LMS would be able to publish class and assignment information which the blog would be able to read as post categories.
- When a student published a blog post with the appropriate class and assignment categories, the broker would pick it up and make it available to other applications.
- The activity tracker would note that the student had submitted a blog post blah on date blah for assignment blah in class blah.
- The course grade book would add a line item for the student’s submission for the assignment and display the text of the post.
- An aggregator in the course space would display the blog posts from various students for the assignment.
- Later, an ePortfolio app could ask the grade book for the student’s blog post along with the instructor’s grade and comment.
The idea was that an LMOS service broker would have different adapters to accept data from different kinds of learning applications and pass that data to whatever other apps needed it. These adapters would ideally be standards-based so that it would be easy to plug in new applications from different sources.
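The adapter-and-broker pattern described above can be sketched as a tiny publish/subscribe bus. Everything here is hypothetical (the LMOS was never built), but it shows how a single event from a blog adapter could fan out to a gradebook and any other subscribed tool:

```python
# A minimal sketch of the "service broker" (service bus) idea: adapters
# translate tool-specific data (e.g., an RSS item) into shared events
# that other tools can consume. All names are invented for illustration.

class ServiceBus:
    def __init__(self):
        self._subscribers = {}  # event type -> list of handler callables

    def subscribe(self, event_type, handler):
        self._subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        # Fan the event out to every tool that cares about it.
        for handler in self._subscribers.get(event_type, []):
            handler(payload)

bus = ServiceBus()
gradebook = []

# The gradebook subscribes to blog submissions and adds a line item.
bus.subscribe("blog.post.submitted",
              lambda e: gradebook.append((e["student"], e["assignment"])))

# A blog adapter reads an RSS item and, seeing the right categories,
# publishes it as a submission event on the bus.
rss_item = {"author": "Ann",
            "categories": ["Intro to Linguistics", "Whorf hypothesis argument"]}
bus.publish("blog.post.submitted",
            {"student": rss_item["author"],
             "assignment": rss_item["categories"][1]})

print(gradebook)  # [('Ann', 'Whorf hypothesis argument')]
```

An activity tracker or aggregator would simply be another subscriber to the same event; that is what makes the broker a bus rather than a point-to-point integration.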
In the end SUNY decided that it did not have the risk tolerance to build a new platform. And to be honest, it would have been challenging to pull off with the technology of the time. But eight years later, I still think the vision was a good one. And now I think that Caliper has a chance of fulfilling it.
Have you strapped on those aviator goggles yet? OK. Here we go.
Triple Your Pleasure, Triple Your Fun
Of course, we weren’t the first people to think about creating a web of data that could link disparate applications. Big shots like Tim Berners-Lee had been talking about a “semantic web,” where sites could talk to each other and automagically interoperate, since the late 1990s. One of the foundational technologies in the semantic web effort was something called the Resource Description Framework, or RDF. And a core idea in RDF was something called a triple. A triple can really be boiled down to a plain English sentence structure: subject, phrase that characterizes a relationship, object. Here are a few examples:
- Joe | is the author of | http://www.themusicalfruit.com/
- http://www.themusicalfruit.com/ | is about | legumes
Note that there is a kind of transitive property possible here. If Joe is the author of TheMusicalFruit.com and TheMusicalFruit.com is a website about legumes, then we can infer that Joe is the author of a website about legumes. Theoretically, you can create long chains and complex clusters of these inferences in something that a mathematician might call a graph.
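Under the hood, triples are just three-part records, and the chaining described above is a walk over them. Here is a minimal Python sketch; the predicates are plain strings rather than a real RDF vocabulary:

```python
# Triples as plain Python tuples, with a tiny two-step inference:
# what did Joe author, and what is that thing about?
triples = [
    ("Joe", "is the author of", "http://www.themusicalfruit.com/"),
    ("http://www.themusicalfruit.com/", "is about", "legumes"),
]

# Step 1: everything Joe is the author of.
authored = [o for s, p, o in triples
            if s == "Joe" and p == "is the author of"]

# Step 2: what those things are about.
topics = [o for s, p, o in triples
          if s in authored and p == "is about"]

print(topics)  # ['legumes'] -> Joe is the author of a website about legumes
```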
Let’s look at some triples that are relevant to the student blog example above:
- Ann | is a student in | Intro to Linguistics
- “Whorf hypothesis argument” | is an assignment in | Intro to Linguistics
- “Beam Me Up, Whorf” | is a blog post by | Ann
- “Beam Me Up, Whorf” | is a homework submission for | “Whorf hypothesis argument”

Using triples like these, you could accomplish a lot of what I described in that use case in 2005. You could see, for example, that “Beam Me Up, Whorf” is Ann’s submission for the Intro to Linguistics assignment called “Whorf hypothesis argument.”
Ultimately, RDF never took off, for a variety of reasons. But triples are extremely useful and have been employed in a variety of other technologies, including both IMS’s Caliper and ADL’s Tin Can API (the successor to SCORM). They provide a grammar for the semantic web.
Grammar Aren’t Everything
The great thing about triples is that they can express just about any relationship. The bad thing about triples is that they can express just about any relationship. Consider the following triple:
- Fribble | is a frogo of | Framizan.
This is a grammatically valid triple, but it tells us nothing, because we don’t know what the words mean. OK, it’s true, I cheated by using made-up words. Let’s see if we can make the situation clearer by adding some English:
- Fribble | is a parent of | Framizan.
Huh. Not much better. Are Fribble and Framizan people? Are they subfolders in a file directory? It turns out that human languages aren’t terribly precise. And if you want disparate computer programs to be able to understand each other without constant human intervention, then you need to be very precise. In addition to a grammar, you need a lexicon. Or, in computer terms, you need an entity model. You need to tell the computer things like this:
- There is an entity type that we call a “person.”
- A person has a first name and a last name.
- A person (in your world) has a unique ID.
- A person (in your world) will always have an email address.
- A person (in your world) might have a phone number.
With this, we can have the computer say something like:
- The person entity with ID “Ann” | has relationship “is a student in” | to the class entity with ID “Intro to Linguistics”
That may sound clunky to you and me, but it’s poetry to a machine.
This is essentially what Caliper adds to a triple structure. It adds a collection of “entities,” or things, that all interoperating computers agree have certain properties. And that, my friends, is what makes time travel work. With both a grammar and a lexicon, learning applications can start talking to each other.
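To make the grammar-plus-lexicon idea concrete, here is a hedged sketch in Python: the “person” entity with the required and optional fields listed above, and a typed triple connecting two entities. The field names and classes are illustrative, not Caliper’s actual schema.

```python
# An entity model as Python dataclasses. Because both endpoints of the
# triple are known entity types, any consumer knows exactly what
# properties it can ask for. Names here are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Person:
    id: str                       # unique ID (required, per the rules above)
    first_name: str
    last_name: str
    email: str                    # will always be present
    phone: Optional[str] = None   # might be present

@dataclass
class Course:
    id: str
    title: str

ann = Person(id="Ann", first_name="Ann", last_name="Example",
             email="ann@example.edu")
ling = Course(id="intro-ling", title="Intro to Linguistics")

# A typed triple: entity, relationship, entity.
triple = (ann, "is a student in", ling)
print(triple[0].email, triple[1], triple[2].title)
```

The point is not the syntax but the agreement: every interoperating application knows that a person has an ID and an email, so it never has to guess what “Ann” means.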
And it turns out that the IMS had a bunch of entity definitions lying around already from their previous standards work. For example, the LIS standard (which is designed to integrate LMSs with SISs) has definitions for a person, a course section, and an outcome. What will be interesting to see is the development of new entities for learning activity types (beyond those that are already specified in QTI). For example, what would we want to know about a reading? A video? A simulation? A note-taking app? We could generate a list of such things pretty easily, and for each thing that we want to know about, we could generate a short list of what we want to know about it. That short list would be the core of the entity model for the thing, and it would be the information that developers would have to expose in their apps in order to be able to plug into Caliper. The downside of adding an entity model is that it’s more work for developers to implement, increasing the chances that any particular developer won’t do it. The upside is more assurance of interoperability and a richer information flow.
So again, to plug into the Caliper LMOS, an app would have to be able to read and/or write some subset of these entities and understand the triple relationships. It would also have to establish a communication channel. Luckily, LTI essentially already does that. The LTI standard was always intended as a kind of wrapper. It provides single sign-on between learning apps and then enables the two apps to talk to each other. Right now, the standards-based communication over LTI is pretty limited. But Caliper would open up a whole new world of possible communications.
It’s pretty easy to see why the IMS has latched onto Caliper as an analytics interoperability standard. The graph lets you crawl relationships among pieces of data to get to the relationships that you want. Assuming that entities have reasonable metadata (like creation dates, for example), we can ask a bunch of questions in the blogging example above:
- Has Ann submitted all her blog post homework assignments for Intro to Linguistics?
- How close to an assignment deadline does Ann typically complete her blog post assignments?
- How well, on average, does Ann do on her blog post assignments?
- Does Ann have a pattern of completion or performance in her blog posts across all of her classes?
- What is the class average on blog post homework assignments?
None of this sounds particularly earth-shattering until you remember that all of this data is being gathered from the students’ own blogs. This is not software provided by the LMS vendor. It may not even be software hosted or contracted by the university. The students could be running their own blogs, with just a plugin installed (which in WordPress, at least, is super simple to do).
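Each of those questions reduces to a walk over the graph. Here is a toy version of the first one (has Ann submitted all her blog post assignments?), with entity IDs and predicates made up for illustration:

```python
# A toy learning graph: two assignments in the class, one submission by Ann.
triples = [
    ("Assignment 1", "is an assignment in", "Intro to Linguistics"),
    ("Assignment 2", "is an assignment in", "Intro to Linguistics"),
    ("Post A", "is a blog post by", "Ann"),
    ("Post A", "is a homework submission for", "Assignment 1"),
]

# All assignments for the class.
assignments = {s for s, p, o in triples
               if p == "is an assignment in" and o == "Intro to Linguistics"}

# Ann's posts, and the assignments they were submitted for.
ann_posts = {s for s, p, o in triples
             if p == "is a blog post by" and o == "Ann"}
submitted = {o for s, p, o in triples
             if p == "is a homework submission for" and s in ann_posts}

missing = assignments - submitted
print(sorted(missing))  # ['Assignment 2'] -> Ann still owes one assignment
```

The other questions (timeliness, averages, cross-class patterns) are the same kind of traversal, just following different predicates and reading metadata like dates and grades along the way.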
Let’s see what happens if we extend the learning graph one step further. Suppose we have a relationship in the triple that we call “is a response to.” If you write a post and I write a comment on your post, then my comment “is a response to” your post. If you write a post and I write a post on my own blog referring to yours, my blog post also “is a response to” your blog post. (We might also create an entity called “comment,” so that we can distinguish between a response that is a blog post on another site and a response that is a comment on the same site: “My comment | is a response to | your blog post.”) Interestingly, most blogs have a feature called “pingbacks,” which detects when another blog has linked to your blog post. Suppose that our Caliper WordPress plugin translates that pingback into a triple that can be read by any Caliper-compliant system. Now we can start asking questions like the following:
- How many responses did Ann’s blog posts for Intro to Linguistics generate?
- Which students in the class write the posts that generate the most responses?
- What percentage of responses occurred as comments on the same page as the blog posts, and what percentage were blog posts on commenters’ own sites?
- Which classes have the most activity of students responding to other students?
- What is the correlation between levels of student responses to each other and outcomes?
- At what moment in the course did students start responding more to each other and less to the teacher?
As the size of the graph grows, the number of questions you can answer grows exponentially.
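As a sketch of that translation step, here is how a hypothetical plugin’s pingback records could become “is a response to” triples and then be tallied to answer the first question above. The event shape is invented for illustration:

```python
# Pingback records as a hypothetical plugin might capture them.
pingbacks = [
    {"source": "Bob's post",     "target": "Ann's post"},
    {"source": "Carol's comment", "target": "Ann's post"},
    {"source": "Ann's post 2",   "target": "Bob's post"},
]

# Translate each pingback into a triple.
triples = [(p["source"], "is a response to", p["target"]) for p in pingbacks]

# Tally responses received per target post.
from collections import Counter
responses_received = Counter(target for _, _, target in triples)

print(responses_received.most_common(1))  # [("Ann's post", 2)]
```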
From Learning Management Operating System to Learning Cloud
But let’s suppose that you want to do more than gather data on students’ use of blogs in a class that is otherwise managed in a central LMS. Let’s suppose that you want the students’ blogs to be the LMS (for the most part). Suppose you want to build a course like ds106, where all student work happens out on their own blogs, and the hub of the course is really just an aggregation point. Right now, ds106 accomplishes this goal through a Frankenstein’s monster of WordPress plugins and custom hacks. I don’t mean to denigrate the technical work that they’ve done. To the contrary, I’m astonished by what they’ve been able to accomplish with chewing gum and duct tape. Caliper could potentially provide them with better tools for richer integration in a more elegant way. It could, for example, create a visualization of the conversation across the various blogs, and make that visualization clickable so that students could see the thread and then jump directly to the posts involved. In a real way, those disparate blogs would function as one distributed piece of software. Each piece would be independent. Students could use the blogging platform they want on the host that they want. But the data would flow freely and be easily aggregated, sorted, visualized, and analyzed. Forget about the Learning Management Operating System. The future is the Learning Cloud.
In our eLearn article, Patrick and I referred to a Flickr social markup of the Merode Altarpiece created by an art history class taught by our friend and colleague Beth Harris. We wrote,
What is the learning object here? Or, to put it another way, what is the locus of educational value? Is it the picture itself? Is it the picture plus the comments of the students? Or is it both of these plus the action potential for students to continue to exchange ideas through the commenting system? A learning object-centric view of the world would place the emphasis on the content, ignoring the value of the ongoing educational dialog as something extraneous. But that view clearly doesn’t allow us to encapsulate the locus of educational value in this case. Sometimes people will try to fudge the difference by tacking the word “interactive” in front of “learning object.” This obscures the problem rather than solving it. “Object” is just a longer word for “thing.” It inherently focuses on artifacts rather than activities. It emphasizes content to be learned rather than the actions on the part of students that lead to learning.
To take a more familiar example, consider the spreadsheet. What is it that you share when you email a spreadsheet to a colleague? Is it the content, the interaction potential, or both? Are you simply sharing a “tabular data object,” or is the potential for the recipient to plug in new data and get new results an inextricable part of the thing we’re calling a “spreadsheet?” There is no one right answer to this question; it is entirely context-dependent. Sometimes what we mean by “spreadsheet” is a set of completed calculations and how they were derived. In this case, content is king. However, at other times a “spreadsheet” means a tool for plugging in new numbers to make calculations and run “what-if” scenarios. Sometimes its locus of value is as a “tabular data object,” sometimes it is as a “tabular data-processing application,” and sometimes it is as an inextricable fusion of the two.
So what is the distinction between a learning object and a learning application? What is the difference between the domain of content (and therefore content experts) and the domain of functionality (and therefore programming experts)? We contend that there is no clean separation of concerns. The world does not divide neatly between functionality packages that can be integrated as Blackboard Building Blocks or WebCT Powerlinks on the one hand, and self-contained content packages that can be tied up in a bow and listed in MERLOT on the other hand. The division between learning objects and learning environments is a false dichotomy. Students need both the functionality and the content—the verbs and the nouns—in order to have a coherent learning experience. They learn when they do things with information. They discuss paintings. They correlate news with its location in the world. They run financial scenarios in a business case study. Consequently, managing the learning content or managing the learning environment in isolation doesn’t get the job done. We need to manage learning affordances. We need to focus on providing faculty and students with a rich array of content-focused learning activities that they can organize to maximum benefit for each student’s learning needs.
That article was published in January 2006. One year later, on January 9th, 2007, Apple unveiled the first iPhone. We live in an appy world now. The LMS is not going away, but neither is it going to be the whole of the online learning experience anymore. It is one learning space among many now. What we need is a way to tie those spaces together into a coherent learning experience. Just because you have your Tuesday class session in the lecture hall and your Friday class session in the lab doesn’t mean that what happens in one is disconnected from what happens in the other. However diverse our learning spaces may be, we need a more unified learning experience. Caliper has the potential to provide that.