I’m delighted to announce a project aimed at making it easier to share interactive digital content at the lesson level as well as to establish baseline educational analytics for digital curricular materials. I’m tempted to call this a “courseware” interoperability effort, but its potential application is broader than that term would imply to some folks. For example, the work could support well-structured content in LMSs.
This effort is consistent in philosophy with my recent “Content as Infrastructure” post series as well as with the posted version of my IMS talk on interoperability, learning analytics, and pedagogical intent. One of the main outputs of the project will be a white paper, released as an Empirical Educator Project (EEP) contribution, describing the standards proposal that is ultimately developed and its value to education. The project also dovetails very nicely with both previously announced and as-yet-unannounced EEP projects, and I’m very excited about the work.
The idea for the work both grew out of and is funded through a grant from the U.S. Department of Education (ED) to develop “active OER.” In the course of the grant planning process, ASU professor and grant PI Ariel Anbar came to the conclusion that the grant would have a much broader impact if the content being developed were interoperable. He consulted with ED, and they agreed that interoperability would potentially increase the impact of the resources related to the grant. So a small fragment of the grant budget was carved off to test the viability of building a coalition that can make progress on proposed standards that are both practically useful and likely to be adopted. At the moment, my work as a facilitator is the main budget expense for the project.
Business and mission goals
We had a kick-off meeting of a small group in late October. (More on who was in it, why they were chosen, and how we hope to expand later in this post.) Here are the notes I captured on the goals and ambitions for impact:
- Reduce platform lock-in for any interactive courseware content, particularly interactive OER content, which will support the following:
  - Increase the quality of existing OER content by enabling the preservation of learning design
  - Increase the supply of interactive OER content by creating a clear and achievable interoperability standard for content developers
  - Increase the availability and value of OER content for value-added platform and service providers by lowering the cost of goods involved in converting the currently available “flat” OER resources into interactive lessons with effective learning design
  - Enable educators to more easily mix and match interactive content at the lesson level
  - Enable the development of an ecosystem of non-OER content that could be licensed at the lesson level
- Enable lesson-level, cross-platform, cross-content learning analytics, which will support the following:
  - Data-based continuous improvement of learning content, regardless of its source
  - Baseline student learning analytics capabilities that will enable institutions to monitor student progress in an apples-to-apples way across lessons, products, and courses
  - Student- and instructor-facing analytics that will help them analyze how well their respective learning and teaching strategies impact outcomes
- A vision for implementation and ecosystem development that incentivizes participation for a wide range of commercial and non-commercial value-added participants in order to:
  - Lower the barrier to adoption for courseware platforms by assuring customers that any content they develop or use will not be locked into the licensed platform
  - Lower the cost of goods and increase the availability of high-quality pre-existing content for value-added OER curricular materials product and service providers
  - Enable micro-licensing models for commercial content vendors that develop high-value lesson-level content
  - Lower the barrier for non-profit organizations and consortia to create interactive content that is competitive in functionality and measurable quality with baseline student and teacher expectations for commercial courseware
I’ll provide more of my personal take on these goals in a subsequent post, but there is consensus in the group that we should be working toward a set of goals that are good for everyone—students, instructors, institutions, and value-added content and platform providers.
Functional and technical goals
Consistent with the posts I linked to at the top of this one, we’re going to start by identifying questions that educators and students would want to answer about student progress, effectiveness of content design, and effectiveness of learning interventions. Our default atomic unit for this work is the “lesson.” Our starting point for identifying this set of questions will be those that the participating implementers have identified as ones their users/adopters/customers want to answer, but I expect that we’ll expand from that base over the two-year life of the initial project.
Once we know what questions we want to answer, then we will identify the metadata for the content that is needed to answer those questions. For example, which learning objective(s) does this assessment question assess? Is the assessment formative or summative? That sort of thing. No firm decisions have been made about how this would work on the technical level, but the basic idea is that the pedagogical intent of the learning design would be captured in some machine-readable form.
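To make the idea concrete, here is a purely hypothetical sketch of what machine-readable pedagogical intent could look like. None of these field names come from the project, and no schema has been proposed; this is just an illustration of the kind of metadata the questions above would require:

```python
# Hypothetical lesson metadata capturing pedagogical intent.
# All field names are invented for illustration; no schema has been proposed.
lesson_metadata = {
    "lesson_id": "chem101-lesson-03",
    "learning_objectives": [
        {"id": "lo-1", "statement": "Balance simple chemical equations"},
    ],
    "activities": [
        {
            "id": "q-07",
            "type": "assessment",
            "purpose": "formative",           # as opposed to "summative"
            "assesses_objectives": ["lo-1"],  # ties the question to objective(s)
        },
    ],
}

def objectives_assessed(metadata):
    """Collect the objective ids that the lesson's assessments claim to measure."""
    return sorted(
        {lo
         for activity in metadata["activities"]
         if activity["type"] == "assessment"
         for lo in activity["assesses_objectives"]}
    )
```

With metadata like this, a platform (or an analytics layer sitting across platforms) could answer questions such as “which objectives does this lesson actually assess?” without any knowledge of how the content was authored.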
Technically, we’d like to build on as much existing standards infrastructure as possible and propose developing as little new work as possible. While the project, as a piece of a larger ED grant, does not have a formal affiliation with an interoperability standards body, I am pleased to say that IMS Global and its CEO, Rob Abel, have been highly encouraging and offered technical support to the group as we think through the effort. IMS has a lot of the infrastructure that would be needed for the effort already baked into its existing specifications. It makes all the sense in the world to try to re-use or extend standards that are already developed and adopted.
Ultimately, the group will produce a set of recommendations for interoperability standards along with a rationale for those recommendations. The hope and intention are that these recommendations will be taken up and carried forward by the appropriate bodies at the end of the project and that the participants will continue to work together on implementation.
More on process and risk management
Standards development is a tricky business. You want to get to an “everybody in the pool” moment, but at the same time, you can’t win everybody over by promising to boil the ocean. So we thought a lot about how to get this process rolling and balance different risks over time.
At my suggestion, we started by inviting in just a few of the many implementers who ultimately should be at the table. Two—Carnegie Mellon University’s Open Learning Initiative and Lumen Learning—are long-time and active participants in the OER world. While this effort will be helpful to more than just OER, the primary purpose of the grant is for the development of (inter)active OER, so we wanted representatives who could speak to the needs and nuances of the OER ecosystem. The two other implementers we invited—Smart Sparrow and CogBooks—are courseware platform implementers that both work extensively with ASU already. Smart Sparrow is also playing a major role in this OER grant, since Ariel has chosen its Inspark Education network to help manage the grant and the grant’s content is being built on the Smart Sparrow platform. Also, CMU, Lumen, and Smart Sparrow have all been participants in EEP. In addition to the implementers and ASU, we had representatives from Scottsdale Community College and ED at the kick-off meeting.
This is a small enough group with enough interconnections that we have a good chance of making progress on scoping goals without requiring excessive up-front diplomacy, but diverse enough that we would get different opinions and perspectives. It’s a good group for getting started and for testing the basic idea that what we want to accomplish is doable within a reasonable period of time. Ultimately, however, the project will need more and different folks to be involved if it is going to result in broadly implemented interoperability standards. The starting group of four implementer participants is going to work toward a letter they all can sign onto that says they are committing in principle to implement any standards that ultimately flow out of this effort. The value in their commitment at this early stage is to reduce the risk for other implementers who may want to join but are skeptical that the effort will produce results. In parallel, the project is seeking additional funding that would enable us to support the participation of more stakeholders—educators, platform implementers, content developers, and standards committees (and possibly students as well).
It is early days for this work. So far, the group has only met once. There is still a lot to do and a lot to be figured out. But I am hopeful that we can both develop useful recommendations for advancing interoperability standards and pioneer some new ways of working together on productive EdTech collaboration in the process.
Dale Johnson says
Great piece to get the ball rolling. Here are some of the questions we are trying to answer.
What percentage of the class did the assessment activity incorrectly (or correctly)?
Which students did the assessment activity incorrectly (or correctly)?
What instructional resources did they use (view) before they did the assessment activity?
How many times did they use each instructional resource?
Which instructional resources did the students use the most (least)?
How much time on average was each instructional resource used?
Is there a correlation between the time of use and the assessment result?
Is there a correlation between formative assessment results and summative assessment results?
Michael Feldstein says
Thanks, Dale. This list does give a good flavor for the range of questions and interests. You have some questions that are essentially Google Analytics-level questions, all the way up to your last two, which start to get to educational effectiveness of the content and, potentially, the classroom interventions.
Dale Johnson says
What do you think about setting up a Google doc to scholarsource as many questions as possible? That might help jumpstart your efforts to identify the questions the systems would have to answer.
Michael Feldstein says
As I said in the post, managing a standards development process is inherently tricky. On top of that, we have limited resources to manage broad input at this point. We will definitely want to open this up, but we’re not ready for crowdsourcing.
Mikhail Bukhtoyarov says
A very interesting and timely initiative. I have three questions:
– Will these standards be based on the existing competency standards/ontologies?
– How will these standards deal with Open Source and Creative Commons licensing?
– How can foreign universities/educational institutions contribute?
Michael Feldstein says
Mikhail, I’m sorry that I missed your comment until now (and I hope you have subscribed to comments so you see my reply). While it’s very early in our work, I will say our intention is to create as little as possible. So in answer to your first question, I can say that we do not aspire to create yet another standard for competencies.

Regarding licenses, this project is an offshoot of a grant for creation of OER. So the approach will be compatible with OER content and open source code, although we aspire to make it agnostic to license, including traditional copyright.

As for international participation, right now we have the funding to work with our existing cohort, so the issue isn’t international so much as support for seats at the table. We are seeking funding for a bigger table. Furthermore, since we eventually intend to submit our recommendations to the standards bodies that own the standards we will be building on—IMS for sure; we’ll see if it makes sense to include others—they have their own processes for inclusion that will follow ours. So the answer to your third question, for now, is “stay tuned.”