For those following my two recent posts about the National Bureau of Economic Research (NBER) working paper claiming to analyze “The Returns to Online Postsecondary Education”, there are two articles I recommend for broader coverage – both from Inside Higher Ed. The first article is “Online Ed’s Return on Investment”, where Carl Straumsheim summarized the report itself as well as early critiques from me, Russ Poulin from WCET, and Jeff Seaman from the Babson Survey Research Group.
“Even a quick check with one of the databases they did use … would show they are off on their counts and should have made them rethink their assumptions,” Poulin said in an email.
Jeff Seaman, co-director of the Babson Survey Research Group, called the methodology “seriously flawed.” The Babson Group previously produced annual reports on the size of the online education market but began to focus more on in-depth surveys after the federal government began collecting and reporting online enrollment data.
The second article, titled “Impressions of the Hoxby Study of Online Learning”, came out today; in it, Doug Lederman asked a dozen higher ed observers for their impressions. Most were critical of the data issues, as we have noted at e-Literate, but a few had positive reactions. I think that Deb Adair (from Quality Matters) has an excellent response, concluding with the following [emphasis added]:
In short, the study has little relevance for understanding the current and future economic value of online education as online learning has rapidly evolved from something provided primarily by for-profit institutions for primarily online students (who apparently, as implied in the article, need to matriculate at the least-rigorous institutions, perhaps for nefarious reasons) to a much more wide spread adoption and deployment. About a quarter of all post-secondary institutions participate with Quality Matters (including all types, levels and Carnegie classifications with only a small fraction representing for-profit institutions) and the diversity in online programs, delivery and students is tremendous. Online learning is not something being perpetuated in some dark little corner of the academic world with scheming profit-seeking institutions short-changing academic rigor to line their pockets in collusion with students of dubious integrity in ways that defraud the taxpayer. That’s not even an “alternative fact.” The truth is that the economic value of education is difficult to measure regardless of delivery modality and the different approaches to teaching with technology today is fast making the mode of delivery irrelevant if not impossible to differentiate.
There’s more great commentary in both articles, and kudos to Inside Higher Ed for their coverage.
To Be Clear, However . . .
Doug makes a point in the setup to the article that is worth mentioning.
But given the prominence of the study’s author, Caroline Hoxby, and her (accurate) assertion that the growing prominence of online education demands that it be subjected to serious scholarly examination, it is probably unwise to dismiss the paper outright.
I agree that the subject is important: open research on the outcomes of various education issues matters, and looking at the return on investment is a good idea from a public policy standpoint. And yes, the study’s author as well as the NBER imprimatur lends credence to the subject and will likely ensure that the paper continues to be discussed, criticisms notwithstanding. So I agree that we should not dismiss the paper outright.
Having said that, what we have is not a simple case of solid analysis on a subset of the claimed data, marred only by sloppy language implying a broader interpretation. Even if you reinterpreted the paper as solid research on for-profit institutions, or just on exclusively-online for-profit institutions, you would still have problems. The fatal flaw was conflating online students with online coursework with online programs with online institutions, all in an effort to take institution-based IPEDS data and make it possible to combine student-based longitudinal data with IRS records. In the process, all kinds of input data got mashed together or excluded in ways that leave the data feeding the economic analysis no longer making sense. By using “probabilities” in the definition of “substantially online”, for example, the report ends up counting students taking face-to-face courses as online learners, as the sketch below illustrates.
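To make that point concrete, here is a minimal sketch of the misclassification. The students, institution-level shares, and threshold are all hypothetical and for illustration only; this is not the paper’s actual method or data, just the general shape of the problem with institution-level classification.

```python
# Illustrative sketch only: hypothetical numbers, not the paper's actual
# method or data. Shows how labeling an *institution* "substantially online"
# via an aggregate probability sweeps face-to-face students into the
# "online learner" group.

students = [
    # (student_id, institution, fraction_of_courses_online)
    ("s1", "U1", 1.0),   # fully online student
    ("s2", "U1", 0.5),   # mixed online / face-to-face
    ("s3", "U1", 0.0),   # entirely face-to-face
    ("s4", "U2", 0.0),   # entirely face-to-face
]

# Hypothetical institution-level online shares (IPEDS-style aggregates)
share_online = {"U1": 0.6, "U2": 0.1}

THRESHOLD = 0.5  # hypothetical cutoff for "substantially online"

# Institution-based classification: every student at a "substantially
# online" institution is treated as an online learner...
online_by_institution = [
    s for s in students if share_online[s[1]] >= THRESHOLD
]

# ...which sweeps in s3, who took no online courses at all.
print([s[0] for s in online_by_institution])  # ['s1', 's2', 's3']
```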
Furthermore, I question the methods used to define institutions or students as “exclusively online” or “substantially online” prior to 2012. The author claims that IPEDS was her data source for studying all online learners from 2009 to 2014, yet IPEDS did not start collecting the distance education data referenced in the paper until Fall 2012. Maybe there is an explanation for this discrepancy, but it does not appear in the working paper.
As Deb Adair points out, there is no monolithic online learning approach that can be compared to a monolithic face-to-face learning approach. There is a broad range of usage and approaches, such that it is very difficult to produce useful research comparing online and face-to-face learning. This report does not solve that problem.
No amount of solid number crunching or economic modeling can overcome flawed input data and flawed assumptions. I truly hope that NBER and the author will either clarify the methods or pull the paper and revise it to be based on accurate input information. The subject is important and should not be ignored. But the problems with the data mean the analysis is not valid even for a subset of schools.