Digital badges seem to be filling in the cracks for career readiness—and sometimes college readiness—that aren’t covered by formal degree and certificate programs.
"Outcomes" covers impact—including the measurement of it—for technology-enabled education initiatives.
Northern Arizona University appears to be getting good results with its math emporium model, based on its internal analysis. The study isn’t watertight, but it is fairly compelling.
In my last post, I talked about the need for educators in general, and faculty in particular, to develop literacy around data and analytics. But the need is really broader than that. Back when college was intended for a relatively small percentage of the population, the idea of “weeding out” students who couldn’t make it without help was not obviously out of alignment with its mission.
The Department of Education’s new tool for evaluating colleges is…uh…not so great.
More than anything else, SRI’s meta-analysis of adaptive learning studies shows that we won’t be able to prove what works until we start designing better and more consistent studies.
This is almost old news now, but we just haven’t been able to dig into it yet. As part of its Adaptive Learning Market Acceleration Program (ALMAP), the Gates Foundation funded SRI to do a study of the results of the grants after two years. I hope to finally clear some time to parse through […]
The Chronicle has an article out today, “Can the Student Course Evaluation Be Redeemed?”, that rightly points out how student course evaluations are often counterproductive to improving teaching and learning. The article refers to a Stanford professor’s call for an instructor-completed “inventory of the research-based teaching practices they use”, but most of the article centers […]