Richard Culatta from the US Department of Education (DOE, ED, never sure of proper acronym) wrote a Medium post today describing a new ED initiative to evaluate ed tech app effectiveness.
As increasingly more apps and digital tools for education become available, families and teachers are rightly asking how they can know if an app actually lives up to the claims made by its creators. The field of educational technology changes rapidly with apps launched daily; app creators often claim that their technologies are effective when there is no high-quality evidence to support these claims. Every app sounds world-changing in its app store description, but how do we know if an app really makes a difference for teaching and learning?
He then describes the traditional one-shot studies of the past (control group, control variables, year or so of studies, get results) and notes:
This traditional approach is appropriate in many circumstances, but just does not work well in the rapidly changing world of educational technology for a variety of reasons.
The reasons?
- Takes too long
- Costs too much and can’t keep up
- Not iterative
- Different purpose
This last one is worth calling out in detail, as it underlies the assumptions behind this initiative.
Traditional research approaches are useful in demonstrating causal connections. Rapid cycle tech evaluations have a different purpose. Most school leaders, for example, don’t require absolute certainty that an app is the key factor for improving student achievement. Instead, they want to know if an app is likely to work with their students and teachers. If a tool’s use is limited to an after-school program, for example, the evaluation could be adjusted to meet this more targeted need in these cases. The collection of some evidence is better than no evidence and definitely better than an over-reliance on the opinions of a small group of peers or well-designed marketing materials.
The ED plans are good in terms of improving the ability to evaluate effectiveness in a manner that accounts for rapid technology evolution. The general idea of ED investing in the ability to provide better decision-making information is a good one. It’s also very useful to see ED recognize the context of effectiveness claims.
The problem I see, and it could be a fatal one, is that ED is asking the wrong question for any technology or app related to teaching and learning. [emphasis added]
The important questions to be asked of an app or tool are: does it work? with whom? and in what circumstances? Some tools work better with different populations; educators want to know if a study included students and schools similar to their own to know if the tool will likely work in their situations.
Ed tech apps by themselves do not “work” in terms of improving academic performance1. What “works” are pedagogical innovations and/or student support structures that are often enabled by ed tech apps. Asking if apps work is looking at the question inside out. The real question should be “Do pedagogical innovations or student support structures work, under which conditions, and which technologies or apps support these innovations?”
Consider our e-Literate TV coverage of Middlebury College and one professor’s independent discovery of flipped classroom methods.
How do you get valuable information if you ask the question “Does YouTube work” to increase academic performance? You can’t. YouTube is a tool that the professor used. Now you could get valuable information if you ask the question “Does the flipped classroom work for science courses, and which tools work in this context?” You could even ask “For the tools that support this flipped classroom usage, does the choice of tool (YouTube, Vimeo, etc.) correlate with changes in student success in the course?”
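To make that inside-out framing concrete, here is a minimal sketch in Python (pandas and SciPy) using entirely fabricated data: first test whether the pedagogical innovation is associated with better outcomes, then ask whether tool choice matters within that pedagogy. The column names, numbers, and statistical tests are my own illustrative assumptions, not anything taken from the ED template or the Middlebury case.

```python
# Hypothetical sketch of the "inside out" evaluation question.
# One row per student: the pedagogy used, the video tool the instructor
# chose, and a course success score. All data here is fabricated for
# illustration only.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "pedagogy": ["flipped"] * 6 + ["lecture"] * 6,
    "tool":     ["YouTube", "Vimeo", "YouTube", "Vimeo", "YouTube", "Vimeo"] * 2,
    "success":  [0.82, 0.79, 0.85, 0.80, 0.84, 0.78,
                 0.70, 0.69, 0.72, 0.68, 0.71, 0.70],
})

# First question: does the pedagogical innovation work at all?
flipped = df.loc[df.pedagogy == "flipped", "success"]
lecture = df.loc[df.pedagogy == "lecture", "success"]
print("flipped vs. lecture:", stats.ttest_ind(flipped, lecture))

# Second question: within the flipped condition, does tool choice matter?
by_tool = [group["success"] for _, group in df[df.pedagogy == "flipped"].groupby("tool")]
print("tool choice within flipped:", stats.f_oneway(*by_tool))
```

The point of the sketch is the ordering of the questions, not the particular tests: the tool comparison only makes sense once it is nested inside a defined pedagogical condition.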
I could see that for certain studies, you could use the ED template and accomplish the same goal inside out (define the conditions as specific pedagogical usage or student support structures), thus giving valuable information. What I fear is that the pervasive assumption embedded in the program setup, asking over and over “does this app work,” will prove fatal. You cannot put technology at the center of understanding academic performance.
I’ll post this as a comment to Richard’s Medium post as well. With a small change in the framing of the problem, this could be a valuable initiative from ED.
Update: Changed DOE to ED for accuracy.
Update: This does not rise to the level of a full response, but Rolin Moe got Richard Culatta to respond to his tweet about this article.
Rolin Moe: Most important thing I have read all year – @philonedtech points out technocentric assumptions of US ED initiative

Richard Culatta: @RMoeJo it’s true. I believe research has to adapt to pace of tech or we will continue to make decisions about edu apps with no evidence

— Richard Culatta (@RCulatta) August 25, 2015
1. And yes, they throw in a line that it is not just about academic performance but also about administrative claims. But the whole setup is about teaching and learning usage, which is the primary focus of my comments.
Nate Angell says
I most definitely agree that the question is posed backwards: it’s as if technological determinism is assumed and our task is just to measure it.
I was aiming at this same idea in a much more obscure way in my recent post on @holden’s post on @mfeldstein’s post: http://xolotl.org/mindset-middleware/