David Wiley has a great post up on efficacy and OER in response to my original post about Pearson’s efficacy plan. He opens the piece by writing about Benjamin Bloom’s famous “2 sigma” problem:
The problem isn’t that we don’t know how to drastically increase learning. The two-part problem is that we don’t know how to drastically increase learning while holding cost constant. Many people have sought to create and publish “grand challenges” in education, but to my mind none will ever be more elegant than Bloom’s from 30 years ago:
“If the research on the 2 sigma problem yields practical methods – which the average teacher or school faculty can learn in a brief period of time and use with little more cost or time than conventional instruction – it would be an educational contribution of the greatest magnitude.” (p. 6; emphasis in original)
So the conversation can’t focus on efficacy only – if there were no other constraints, we actually know how to do “effective.” But there are other constraints to consider, and to limit our discussions to efficacy is to remain in the ethereal imaginary realm where cost doesn’t matter. And cost matters greatly.
David then launches into a discussion of what he calls his “golden ratio,” or standard deviations per dollar. I have long been a fan of this formulation and quote it frequently. I’m not going to try to summarize his explication of it in his post; you really should go read it. But I would like to tease out a few implications here.
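For concreteness, the ratio can be written down as follows. This is my notation, not David’s; his post doesn’t commit to a particular effect-size measure, so read the numerator as any learning gain expressed in standard deviation units (e.g., Cohen’s d against conventional instruction):

$$ R = \frac{\text{learning gain (in standard deviations)}}{\text{cost (in dollars)}} $$

A larger R means more learning per dollar spent.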
Cost/Effectiveness Analysis
By expressing cost and educational impact in a ratio, David is engaging in something called cost/effectiveness analysis. You may be more familiar with the closely related term “cost/benefit analysis.” The main difference between the two is that in the latter the benefit is expressed in financial terms, while in the former it is expressed in non-financial terms (such as learning gains, in this case). This is a powerful tool that is unfortunately misapplied more often than not. When people invoke cost/benefit, what they often mean to invoke is cost, as in, “Do you really think this is worth it?” It is used to selectively question an expenditure that somebody doesn’t like. (Note that I am not accusing David of making this error; I’m just talking about common usage.) In Congress, a cost/benefit requirement is often tacked onto a bill by amendment to decrease the likelihood that the thing the amendment’s author doesn’t like will actually get funding. Likewise in education, cost/benefit or cost/effectiveness is loosely invoked for things that the invokers don’t think are worth the money up front, whether it’s textbooks, LMSs, or teacher salaries.
But the better way to apply the tool is comparatively, across the range of possible investment decisions: “Given X amount of money, do we get more standard deviations for our dollars by investing in A or B?” This moves us away from a focus on preventing spending on things we don’t like and toward a focus on maximizing utility, which is what David is after. And this is where it gets complicated. A good part of David’s post is about the complexities of measuring and impacting the numerator in standard deviations per dollar. Unfortunately, we have a lot of trouble tracking the denominator as well. Even the institutional costs can be complex, as Phil’s recent response to Chris Newfield regarding the true cost of the UF/Pearson deal illustrates. It gets more complicated still when we start asking, “Cost to whom?” The controversy over the UF deal centers on the cost to the institution and ultimately to the state. Textbooks, by contrast, are paid for by students. Mostly. Sort of. Except when students spend university scholarship money on them, or state or Federal financial aid. None of this argues against the framework David is presenting. It just makes its practical application more challenging.
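Here is a minimal sketch of that comparative framing in Python. The interventions, effect sizes, and per-student costs below are entirely hypothetical, invented purely for illustration:

```python
# Comparative cost/effectiveness: given two candidate investments,
# which one buys more standard deviations of learning gain per dollar?
# All names and numbers here are hypothetical.

interventions = {
    "A: adaptive courseware": {"effect_sd": 0.25, "cost_per_student": 80.0},
    "B: intensive tutoring":  {"effect_sd": 0.40, "cost_per_student": 200.0},
}

def sd_per_dollar(effect_sd: float, cost_per_student: float) -> float:
    """David's ratio: learning gain in standard deviations per dollar spent."""
    return effect_sd / cost_per_student

for name, d in interventions.items():
    ratio = sd_per_dollar(d["effect_sd"], d["cost_per_student"])
    print(f"{name}: {ratio:.5f} SD per dollar")
```

Note the inversion this surfaces: A wins on standard deviations per dollar (0.00313 vs. 0.00200) even though B produces the larger absolute learning gain. That is exactly the kind of trade-off a utility-maximizing comparison is designed to expose.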
But It’s Worse Than That
So far, we’ve been talking about the ratio as if “efficacy” is represented in the numerator. David reinforces this impression when he writes,
So the conversation can’t focus on efficacy only – if there were no other constraints, we actually know how to do “effective.” But there are other constraints to consider, and to limit our discussions to efficacy is to remain in the ethereal imaginary realm where cost doesn’t matter.
But that’s not really his argument. His argument is that cost impacts access, which in turn impacts efficacy. If students fail to use the prescribed product because they cannot afford to buy it, and they therefore do poorly in the class, then the cost of the product is inextricable from the measure of its efficacy. This is an excellent example of what Mike Caulfield meant when he referred to the “last mile” problem. An educational product, technique, or intervention can only be said to be “effective” when it has an effect. It can only have an effect if it is actually used, and often only if it is used in the way it was designed to be used. Of course, if students can’t afford to buy the product, then they won’t use it, and it is therefore not effective for them.
So maybe the entire ratio, including numerator and denominator, collectively expresses a measure of effectiveness, right? Not so fast. There are two colleges fairly close to where I live. One, Berkshire Community College, has a total non-residential cost of $5,850 per year for Massachusetts residents taking 15 credits per semester. The other, Simon’s Rock College, has a total residential cost of $60,000 per year. A cost of $100 for curricular materials could have a dramatic impact on access (and therefore efficacy) in the former environment but a negligible one in the latter. Standard deviations per dollar does not capture this difference. We could instead express the denominator as a percentage of total cost, which would help somewhat for this particular purpose. But what we really need is empirical data quantifying the impact of cost on student access under different conditions. That would enable us to separate the numerator and the denominator once again: if the impact of cost for a particular educational population is already factored into the numerator, then we can get back to a discussion of bang for the buck. We could also make more nuanced evaluations. It may be that, because of the access issue, a commercial product is more effective for Simon’s Rock students than it is for BCC students. Further, we could (theoretically) perform a calculation to determine its effectiveness for University of Massachusetts students, which would presumably differ from either of the other two.
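To make the access gap concrete with the numbers above (my arithmetic, not from either post):

$$ \frac{\$100}{\$5{,}850} \approx 1.7\% \text{ of total cost at BCC} \qquad \frac{\$100}{\$60{,}000} \approx 0.17\% \text{ at Simon’s Rock} $$

The same $100 is roughly ten times the relative burden at BCC, a difference that a raw standard-deviations-per-dollar figure cannot see.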
I guess what I’m trying to say is that efficacy is complicated. It’s a great goal, but teasing out what it means and how to measure it in authentic and useful ways is going to be very difficult.