In Spring 2016, faculty, support staff, and administrators at Oregon State University met to candidly share their experiences with adaptive learning technology.1 I shared two different videos from the event in this EdSurge article.
At one point I asked what people saw as risks or barriers to further adoption of adaptive learning courseware, and two people gave very similar responses. In a nutshell, overactive marketing claims from many vendors could be the biggest barrier. This is somewhat counter-intuitive, as aggressive marketing is commonly blamed for getting technology adopted even when it should not be, not for holding adoption back. Listen to their responses (under 30 seconds).
Keep in mind that these are educators who chose to attend the adaptive learning workshop and have a general interest in the technology; they are not hard-core skeptics. It’s as if we’re hearing potential advocates say, Houston, we’ve had a problem.2
We have been critical here at e-Literate when we find ed tech vendors making spurious marketing claims, and Michael in particular has parlayed this into well-deserved NPR fame. But these answers from OSU go further, suggesting that marketing claims are harming the vendors themselves. Our primary concern is whether faculty and staff have accurate information to support their own decision-making, not the financial health of vendors, but this notion of self-limiting marketing is an interesting one to consider.
I talked to George Siemens today to get a broader perspective on vendors making research and efficacy claims that cannot be backed up. Siemens agreed that this is a broad problem for all of ed tech. In our discussion, he brought up two interesting points.
Even before the vendors make their marketing claims, educators in general and researchers in particular “don’t have access to data that would allow us to evaluate whether they’re over-promising.” For a program such as one-to-one laptops, the educational community has the data: which students have what, the pre- and post-conditions, and the general information needed to evaluate the efficacy of the program. For adaptive learning and other technologies delivered remotely through proprietary applications, “we don’t own the data” and we “don’t have the avenue to evaluate claims; we have to rely on vendors to tell us.”
This situation requires a high level of trust, and that trust is just not present in the current environment. Siemens thinks that personalized and adaptive learning are provocative and exciting ideas based on real educational needs. So what would reduce the current level of distrust and curb faulty claims? Siemens suggested that “We, the research community, should be able to see what’s happening under the hood.” Acknowledging that some vendors consider their algorithms and data proprietary, he did not suggest completely open data. The situation would “best be served by [academic researchers] signing contracts to get data out,” allowing independent research and analysis. Note that this suggestion goes beyond independent research on program outcomes; it means access to internal data and even to algorithms.
When I asked Siemens about the adoption question, he was not sure how over-promising affected adoption in general, but he noted the Teresa Sullivan effect (the University of Virginia president who was ousted and then reinstated, in part over criticism that she was not acting early enough on MOOCs). At least for pilots, the educational community is “buying the impression that you’re not falling behind” rather than buying real data and real results.
I believe these two views are consistent: vendor over-promising is a major barrier to further adoption of adaptive learning, and pilots don’t really rely on efficacy data. The adoption in question is when schools or departments seek to move beyond pilots and get multiple courses or even entire programs using similar technology. That is where adaptive learning is today: lots of pilots, lots of claims that cannot be backed up, and difficulty getting more faculty and programs to evaluate the technology given unknown results.
Should we see broader adoption of adaptive learning? Maybe, maybe not. And that is the problem: we don’t know under which conditions broader adoption (and adaptation beyond initial pilots) makes sense or doesn’t make sense. Vendors should be very cautious about promising results and should instead promise to work with schools to help them get results. One key step toward a healthier environment would be opening access to data and algorithms so that academic researchers can do independent analysis.