We interrupt our regularly scheduled post series on Argos Education for this breaking news…
No, I’m not talking about the news that Blackboard was acquired by some ERPish rollup creation of a private equity company. I searched my soul to examine my deepest thoughts and feelings on that topic. Here’s what I came up with:
- I’m happy that Bill Ballhaus finally got paroled. While he and I clashed once or twice, he seems like basically a nice guy. He didn’t deserve to be stuck trying to unload that company for five years.
- I’m both happy and worried for the nice people who work at Blackboard. I hope this means that most of them will still have jobs.
- I have similar feelings about Blackboard customers. I hope this works out for them.
- I am deeply grateful that Phil Hill exists so that I don’t have to spend my time trying to make sense of this deal.
That’s it. That’s all I’ve got. My therapist tells me this is good progress in recovering from the formative trauma that turned me into the blogger that I am today.
No, I’m talking about Pearson suing Chegg for copyright infringement because Chegg has reconstructed Pearson’s back-of-chapter homework answers:
Specifically, the suit revolves around Chegg Study, a subscription service that among its offerings features answers to end-of-chapter homework questions from various texts for $14.95 a month, with the textbook questions often copied nearly verbatim or with just slight changes, the suit alleges. Pearson lawyers told the court that “the majority” of Chegg’s roughly $644 million in total revenue in 2020 came from its sales of answers through Chegg Study, which reportedly counted some 6.6 million subscribers as of 2020.
Pearson Education Sues Chegg, Alleging ‘Massive’ Copyright Infringement
You may recall my post responding to the Forbes article about them, which was entitled “This $12 Billion Company Is Getting Rich Off Students Cheating Their Way Through Covid.” The main piece of new information I added was that most of the large publishers themselves had licensed these answers to Chegg a while back. Those contracts have expired. And now we have at least a hint of how much Pearson thinks their contract with Chegg may have cost them. While the numbers here don’t translate directly into Pearson losses, they do give us an order-of-magnitude sense of the amount of business here.
More immediately interesting to me, though, is this paragraph from the complaint:
As part of Pearson’s focus on pedagogy, Pearson and its authors devote significant creative effort to develop effective, imaginative, and engaging questions to include in the textbooks it publishes. Pearson’s end-of-chapter questions are strategically designed and carefully calibrated to reinforce key concepts taught in the textbooks, test students’ comprehension of these issues, enhance students’ problem-solving skills, and, ultimately, improve students’ understanding of the subject matter. Pearson’s textbooks can contain hundreds or thousands of end-of-chapter questions. These end-of-chapter questions form core components of the teaching materials contained in Pearson textbooks and are frequently hallmarks of Pearson titles. As such, the availability, quality, and utility of these questions are often important considerations when educators select which textbooks to adopt for their courses.
Pearson v. Chegg complaint
While the lawsuit doesn’t interest me, the core issue about the value of assessment question construction does. At first blush, it’s tempting to think that the answer to “Solve for x: x² + 3x + 9 = 0” could not possibly be intellectual property. But the paragraph from Pearson’s complaint quoted above argues that there is more craft here than meets the eye.
One of the points of my original post was that companies like Chegg exist in part because instructor grading policies put students in a position between wanting to learn and needing to pass. While that tension is unavoidable in some cases, it is largely avoidable in many cases. We simply haven’t taught instructors about productive ways to grade formative assessments (or even what formative assessments are).
But the lawsuit brings up a separate and equally important angle. When we step back from the deforming effect of grading and simply look at the assessments themselves, we see teaching craft. Both the construction and the order of well-written questions are designed to probe the finer points of students’ understanding. And in some cases, that design teaches those finer points. When students are asked to solve a problem that has one new twist from the previous one, sometimes they learn just by figuring out that twist in the moment.
Pearson is arguing in their complaint that substantial teaching craft is built into their question construction. It has a financial value to the company because it’s part of the value proposition of their product. Setting aside the legal question of whether recreating homework answers constitutes copyright infringement under current US law, I agree with Pearson about the value. I also think that the publishers should not have a de facto monopoly on the craft that creates that value.
Sometimes the questions don’t work as intended. It could be a flaw in the question construction, like an obviously correct answer or, conversely, confusing wording that trips up some students who understand the concept, causing them to give an answer that is considered “incorrect.” Or the sequencing could be off. If the order of questions can be carefully crafted for pedagogical reasons, then anti-cheating functions like question randomization or algorithmically generated questions can take away that tool. And sometimes the question may be written in a way that requires cultural or other context that not all students have in order to understand it. (IQ and other standardized tests were particularly notorious for their cultural biases in their early decades.)
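To make the trade-off concrete, here is a minimal sketch of what “algorithmically generated questions” means in practice: a template with randomized coefficients. The function name and the template are my own hypothetical illustration, not how any real platform works.

```python
# A hypothetical, simplified question generator: a fixed template
# with randomized coefficients. Illustrative only.
import random

def make_quadratic_question(rng):
    """Generate 'Solve for x: x^2 + bx + c = 0' with random b and c.

    Each student sees different numbers, which deters answer-sharing,
    but it also means no two students work through the same carefully
    sequenced set of problems.
    """
    b = rng.randint(1, 9)
    c = rng.randint(1, 9)
    return f"Solve for x: x^2 + {b}x + {c} = 0"

rng = random.Random(42)  # seeded so the output is reproducible
questions = [make_quadratic_question(rng) for _ in range(3)]
```

The upside and the downside live in the same line of code: because `b` and `c` are drawn at random, the generator cannot honor a sequence where each problem adds exactly one new twist to the one before it.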
For a variety of reasons, good educators will tweak and invent assessment questions to fit the needs of the students in front of them. And yet, how many assessment EdTech tools can you think of that do a good job of helping instructors learn and share craft in this regard? It’s quite easy to break a good, psychometrically validated assessment design unintentionally by adding problems that are easier or harder than the instructor thinks they are, for example. On an even more basic level, nobody ever taught me how to write a good distractor or even what that term meant. I had to pick that craft up on my own, first by instinct and later by reading and by asking experts. (I am lucky enough to have had access to such experts.)
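To give a flavor of what “psychometrically validated” involves, here is a minimal sketch of classical item analysis: the difficulty index (proportion correct) and a point-biserial discrimination index (does the item separate stronger students from weaker ones?). The data and function names are hypothetical; real item analysis involves far more than this.

```python
# A minimal sketch of classical item analysis over a 0/1-scored
# response matrix (rows = students, columns = items). Illustrative
# only; names and data are hypothetical.
from statistics import mean, pstdev

def item_difficulty(responses):
    """Proportion of students answering each item correctly.

    A value near 1.0 means the item is easy; near 0.0 means
    almost nobody got it right.
    """
    n_students = len(responses)
    n_items = len(responses[0])
    return [
        sum(row[i] for row in responses) / n_students
        for i in range(n_items)
    ]

def item_discrimination(responses):
    """Point-biserial correlation between each item and the total score.

    A low or negative value flags a possibly broken item: strong
    students miss it while weak students get it right.
    """
    totals = [sum(row) for row in responses]
    out = []
    for i in range(len(responses[0])):
        item = [row[i] for row in responses]
        mi, mt = mean(item), mean(totals)
        cov = mean((x - mi) * (t - mt) for x, t in zip(item, totals))
        denom = pstdev(item) * pstdev(totals)
        out.append(cov / denom if denom else 0.0)
    return out

# Five students, three items. Item 3 is answered correctly only by
# the weakest student, which the discrimination index should flag.
scores = [
    [1, 1, 0],
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
]
difficulty = item_difficulty(scores)
discrimination = item_discrimination(scores)
```

The point of the sketch is the failure mode in the comment: an instructor who swaps in a new problem without checking numbers like these can easily add an item that behaves nothing like they expect, which is exactly how a validated assessment design gets broken unintentionally.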
There has to be a balance. On one hand, instructors should be taught how to recognize and analyze the embodied craft of the assessment questions that they use as part of a curricular product. On the other hand, we should not assume that those assessments are perfectly designed or even that there is any such thing as a single perfectly designed assessment for all teaching contexts in which a curricular product may be used.
Instructors will very often use a mix of pre-made assessment questions and their own. And if they don’t have one tool that lets them do both, then they will mix bits of assessments from different tools with very little way to integrate the information usefully. No product I know of is currently doing a good job of helping instructors think through how to mix and match well. Even dedicated publisher homework platforms like Pearson Mastering or MHE ALEKS, which are theoretically intended for supporting exactly this use case, struggle to strike a balance between the features where the platform is making design- and data-driven decisions and the features where the instructors can insert their own problems. Product designers tend to think of the former as their real mission and the latter as something they have to accommodate because some instructors demand a measure of control.
This is a market failure. Some of the consequences are as follows:
- Instructors don’t make full and appropriate use of advanced curricular products.
- Instructors often have no way, either functionally in the product or intellectually with the skills they have been taught, to appropriately modify a crafted assessment to make it better rather than worse.
- There are no tools that enable instructors to look across their entire collection of assessments and easily evaluate how the pieces are working together.
- There are no clear and easy means for instructors to collaborate with the designers of their curricular products, other adopters of the product who are master teachers, or even their campus faculty support staff, to help them upcycle the curricular product into a course design that is as good as or better than what they’re starting with.
One way to look at copyrighted material is as embodied craft. We have gotten hung up on the idea that the particular embodiment is the locus of educational value. Hence, the lawsuit. Instead, we should really be working on how we can enable and encourage career educators to become expert practitioners and empower them with tools that enable them to practice their craft with more skill and precision. While they shouldn’t have to craft everything from scratch, neither should they be forced to live in a world where there’s a hard and opaque wall between the course design components that they are upcycling and the ones that they are crafting themselves.