The current pandemic is forcing us to rethink much that we used to take for granted. This is true in EdTech as much as anywhere else. For example, while privacy advocates have been shouting into the void about how we should all pay more attention to seemingly boring and definitely complicated legal and technical privacy protections in the EdTech products we use, the Zoom debacle has finally brought more attention to the issue. To reduce the chances of many equally bad privacy problems going unnoticed, we must improve both the tech privacy literacy of people purchasing or selecting the products and the craft of transparency on the part of the product providers. University folk need to become better at reading privacy assurances critically while the service providers need to become better at writing those assurances clearly.
The same goes for the teaching value and effectiveness of products. While this term has been a scramble just getting students online and doing the best that we can for them, next term there will be less excuse for slapdash use of random EdTech products. We must get better at evaluating the strength of vendor claims regarding the improvements in student impact that educators can achieve using their products and, likewise, the vendors must get better at communicating their evidence of impact. This has always been true, but now it is also more obviously urgent. We need to get better at talking about what works, under which circumstances, and how we know what we think we know.
EEP sponsor Macmillan Learning, my most recent guest on the e-Literate Standard of Proof webinar series, talked about how they both test what they know about the effectiveness of their products and communicate what they think they know to customers in the crucial early stages of product development, when there aren't enough data to make strong claims based on mountains of evidence. This work must be a collaboration with educators in at least two senses. To begin with, EdTech products rarely have a significant impact on their own. They are usually most effective when paired with appropriate and effective teaching practices. So early EdTech product development work must involve working with faculty to understand both what they do in the classroom and what they might be willing to try doing differently if they had a product with affordances that supported a teaching practice that is novel to them. Second, as product developers make increasingly confident claims about their product, those claims—and the process by which they were derived—must be reviewed by academic experts who can provide peer review of both the experimental design before it is used with students and the interpretation of results as the data come in.
As a (paid) member of Macmillan's Impact Research Advisory Council (IRAC), I have had the pleasure of both observing and participating in their process. Their model of early research and transparency in reporting is a contribution to the field.
Here's the interview: