Scaling the Seminar
In my recent first look at Engageli, I wrote about the importance of scaling the humanities seminar. The short version is that budget pressures will force universities to make cuts to programs that are the most costly to run. Since STEM programs tend to generate more grant money and social sciences programs can often teach at least the lower-level courses in large numbers, the humanities are vulnerable. The pedagogy of the seminar constrains the size of even the 100-level composition classes that all students take (in contrast to the vast majority of other 100-level courses). We are not good at scaling seminar-style courses with quality yet. Consequently, programs that rely on seminar-style pedagogy are vulnerable during hard economic times. Faculty are more likely to be let go, and programs are more likely to be cut.
These cuts can create a vicious cycle because teaching these disciplines with quality currently depends on having sufficient faculty. The more faculty are cut, the weaker the program becomes. The weaker the program is, the fewer students it attracts. The fewer students there are in the program, the less financially healthy the program becomes. As a result, cutting into a program can easily lead to its eventual elimination. While specious rhetoric about humanities degrees not being career-relevant doesn’t help, the real pressure on humanities departments will come from the institution’s per-student cost.
These are good reasons to find ways to scale the seminar, but there are others. In fact, I’m going to argue that scaling the seminar is essential for equity if we want to avoid a two-track system of education—one at expensive universities and another for the rest of us. This is already a serious problem that the current situation is likely to make worse. I’m also going to argue that it is not an oxymoron to talk about large seminars. I think it’s possible to approximate—and even innovate on—the pedagogical affordances of a seminar with significantly more students in one class section.
Finally, I will argue that one major reason we haven’t arrived at this solution before now is that EdTech is stuck in something of a cul-de-sac. We’ve been pursuing the solutions that have helped improve student self-study by providing machine-generated feedback on formative assessments. While these solutions have proven beneficial in some disciplines and have been especially important in helping students through developmental math, they have hit a wall. Even in the subjects where they work well, these solutions usually have sharply limited value by themselves. They work best when the educators assigning these products use their feedback to reduce (and customize) their lectures while increasing class discussion and project work.
Despite the proliferation of these products, flipped classrooms and other active learning techniques are spreading slowly. And meanwhile, there is a whole swath of disciplines for which machine grading simply doesn’t work. This most broadly applies to any class in which the evaluation of what students say and write is a major part of the teaching process. EdTech has been trying for too long to slam its square peg into a round hole.
I am calling for a new research agenda. One in which we focus on scaling educational conversation and human-to-human engagement with as much energy as we have been investing in machine assessment. I am far from the first person to call for a focus on this problem. For a variety of structural and historical reasons, it hasn’t gained traction. But the situation has changed. Now is the time for such a research agenda. Now is the time for us to figure out how to augment and scale educator-facilitated pedagogy. We can’t just keep trying to automate assessment and hoping the rest will work out. Scaling the seminar will not be easy. But I will argue that the challenges are less daunting than those we face by continuing down the current path, and the payoff could be higher-quality, more equitable education.
This post is the first in a multi-part series.
The gold standard
If we’re being honest, the seminar is still considered the de facto gold standard for all disciplines. However much we may pretend that large lectures are acceptable in teaching some disciplines, the truth is that rich colleges and universities minimize them and market that lack of scale as one of the hallmarks of an elite institution.
On the undergraduate level, one way—perhaps the dominant way—in which top-tier colleges and universities measure and express the quality of the education they offer is through their student/faculty ratio. This is a proxy for the seminar/large-lecture ratio.
This pattern holds online as well. Take a look at the differences in the design of online graduate programs from top-branded universities versus those from older, more access-oriented programs. We can use 2U as a proxy for the former since they essentially built their company by persuading top-tier universities that they can build top-tier online graduate programs. 2U’s motto is “No back row,” and their course designs are heavily weighted toward synchronous classes of constrained size. 2U scales graduate seminars by offering many sections of the identically designed course with more junior (i.e., less expensive) instructors trained to teach the same course design. One way to express this strategy is that 2U is scaling the seminar “horizontally.” They are still offering normal-sized seminars but are pushing down the cost of incrementally adding more sections.
One of the reasons they have been able to sell this approach to faculty senates that would have rejected a larger and more asynchronous design is that 2U has preserved the seminar’s basic structure. Keep in mind, though, that this model tends to work best with one-year degree prices in the $40,000 or $50,000 range. It incrementally improves the scalability of seminars. That’s not a criticism of 2U. It’s simply an acknowledgment that this approach to scaling does not bend the cost curve radically enough to solve the problems that I am calling out.
The reality is that we have to substantially lower the cost of higher education in the United States while increasing the number of available seats in degree programs if we are going to meet equity goals. And we also have reasons to do so for institutional sustainability. If colleges can serve more students—with quality—at a lower cost per student, then we can achieve both equity and sustainability goals.
This shift cannot work if it begins and ends with a mantra to “do more with less.” Access-oriented institutions are already doing more than they can sustain with less than they need to be sustainable. Rather, we need to create tools that are “force multipliers.” We can think of that metaphor in the military sense, i.e., increasing the effectiveness of the human forces we have on the ground, or in the physics sense, i.e., increasing the force that one human can apply by using a pulley or an inclined plane or an engine. Technology externalizes techniques into tools that enable us to apply those techniques more effectively with less effort. EdTech should externalize pedagogical techniques in ways that enable us to apply them to help students learn more effectively with less effort per student on the part of the educator.
I believe in the transformative potential of EdTech. I also believe that we have been thinking too narrowly about the kinds of tools we can create and how we can apply them.
The cul-de-sac
EdTech has generally approached scaling by trying to replicate the growth of large lecture classes, i.e., by scaling the machine scoring of assessments. On one level, this makes sense. We know that students do better when they get frequent and timely feedback, and we also know that providing that feedback is incredibly time-consuming. The feedback loop is the fundamental unit of teaching and learning. Students try something, see what happens, and adjust accordingly. Educators give students something to try, watch what happens, and adjust accordingly. The more rapidly and frequently students can get feedback, the more likely students are to learn, and the more rapidly they tend to progress. So scaling feedback loops—both the number that can be provided to each student and the number that can be supported for an entire class at one time—is an important problem to solve for both quality and cost reasons. Automating assessment is a straightforward way to scale feedback loops without burdening the instructor.
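To make the pattern concrete, here is a minimal sketch of such a machine-scored feedback loop. The quiz item and feedback strings are my own invention rather than any real product’s; the point is simply that a scoring rule plus a canned hint can close the try-see-adjust loop instantly for every student at once.

```python
# A minimal sketch of machine-scored formative feedback. The item format
# and feedback strings are hypothetical, not any real product's API; the
# point is that a scoring rule plus a canned hint closes the student's
# try-see-adjust loop instantly, for any number of students at once.

ITEMS = [
    {
        "prompt": "What is the derivative of x**2?",
        "answer": "2*x",
        "hint": "Apply the power rule: d/dx of x**n is n*x**(n-1).",
    },
]

def score_response(item: dict, response: str) -> str:
    """Return immediate feedback for one formative item."""
    if response.strip().replace(" ", "") == item["answer"]:
        return "Correct."
    # The machine supplies the 'see what happens' step for the student,
    # but only by flattening feedback into exact-match plus a fixed hint.
    return "Not quite. " + item["hint"]

print(score_response(ITEMS[0], "2x"))   # rejected: exact matching is brittle
print(score_response(ITEMS[0], "2*x"))  # "Correct."
```

Note that even this trivial example hints at the flattening problem discussed below: “2x” and “2*x” mean the same thing to a human reader, but the matcher only knows one of them.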
Keep in mind that this trend started long before “EdTech” was a word. Universities figured out that TAs are less expensive to use as graders than professors and that Scantrons are less expensive than TAs. Universities were using Scantrons and similar machine grading technologies to scale assessment when I was a student back in the 1980s, at a time when personal email addresses were still relatively uncommon.
Unfortunately, scaling via machine assessment is problematic. Some types of student output—like writing or comments in a discussion or projects—are difficult for machines to evaluate. To do even a passably reliable job, machine assessment inevitably flattens the very notion of feedback. This approach can work OK for helping students master foundational knowledge and skills that are low on Bloom’s Taxonomy but falls apart quickly when trying to provide feedback on the kinds of complex analysis and problem-solving skills that we associate with a college education. There are underutilized methods for pushing at the boundaries of machine assessment somewhat, such as educational games and inquiry-based courseware designs, but these approaches have their limits too.
Think about the feedback that students get in a seminar. Classroom conversation is rich with feedback. In addition to direct feedback from both the instructor and fellow students, there’s indirect feedback that results from multiple people engaged in purposeful conversation and collaborative problem-solving. This high-bandwidth environment for learning feedback doesn’t easily lend itself to simulation or replacement by software-assisted self-study. Likewise, when evaluating student expressions of sophisticated ideas, even the most enthusiastic proponents of writing assessment software have to grant that today’s algorithms trail a mediocre human reviewer on the subtleties. And again, while writing and the humanities are the most obvious cases, I’ve had math and physics professors tell me that they can facilitate a deeper understanding of even foundational concepts in a small, discussion-based seminar than they can in any other way.
So while conventional machine assessment enables us to help more students achieve passable literacy levels in reasonably well-structured knowledge domains, it’s not consistently good at teaching critical thinking in these domains. And it’s virtually useless at teaching fluent expression or collaborative problem solving—two key skills for the modern workplace (not to mention the modern democracy, such as it is).
The fundamental limiter to scaling the class is the human instructor, who has only so many hours in the day. Augmenting instructor feedback with machines is the approach that EdTech has emphasized so far. But what if we could scale student peer-to-peer feedback with quality? While instructors do not scale as class sizes grow, peers, by definition, do. Instead of putting all our eggs in the one basket of trying to make computers provide feedback that is as good as humans, what if we focused more energy on getting students to provide feedback that is as good as the instructor’s?
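What might quality-controlled peer feedback look like mechanically? Techniques like calibrated peer review suggest one direction: have students first rate a few samples the instructor has already scored, then weight each student’s subsequent peer ratings by how closely their calibration ratings tracked the instructor’s. Here is a minimal sketch under those assumptions; the names, numbers, and scale are all hypothetical.

```python
# A toy sketch of reliability-weighted peer scoring. All names, numbers,
# and the 0-10 scale are hypothetical. The idea: weight each reviewer by
# how closely their ratings of instructor-scored calibration samples
# matched the instructor's, so well-calibrated peers count for more.

# Average gap between each reviewer's calibration ratings and the
# instructor's (0.0 would mean perfect agreement).
calibration_error = {"ana": 0.1, "ben": 0.4, "cai": 0.2}

# Each reviewer's raw score for one piece of student work.
peer_scores = {"ana": 7.0, "ben": 9.5, "cai": 7.5}

def weighted_peer_score(scores: dict, errors: dict) -> float:
    """Combine peer scores, trusting well-calibrated reviewers more."""
    weights = {r: 1.0 / (1.0 + errors[r]) for r in scores}
    total_weight = sum(weights.values())
    return sum(scores[r] * weights[r] for r in scores) / total_weight

# Ben's inflated 9.5 is discounted because his calibration was weakest.
print(round(weighted_peer_score(peer_scores, calibration_error), 2))  # ~7.9
```

The weighting scheme here is deliberately simple; the design point is that the scarce instructor hours go into scoring a handful of calibration samples rather than every submission, and the software does the rest.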
The new “blended” and the new “flipped”
COVID-19 is teaching us a lot in a hurry about the quality of that high-bandwidth educational feedback machine we call the classroom. And we are already seeing products like Engageli begin to respond to those lessons. We can begin to see a world in which rich synchronous educational conversations take place fully online or even in some unpredictable mix of some participants being online and others being in a physical room together. We can aspire to a new type of “blended” class that mixes synchronous and asynchronous experiences rather than online and face-to-face ones.
As I wrote in my Engageli first look post, that would be great but not enough. In an environment where software mediates the face-to-face as well as the online synchronous class experience, the software could scaffold pedagogy in ways that a physical classroom cannot—for both the educator and the students. At the tail end of my Engageli review, I started to explore ways in which software can help improve the quality of human-to-human feedback. I used the example of Riff Analytics, which helps students to recognize whether they are speaking out, taking turns, and affirming statements made by their peers in group conversations. In other words, Riff helps students to become more effective at collaborative, purposeful, and equitable peer conversations.
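Riff’s actual algorithms and data formats aren’t public, so the following is a toy sketch of the general idea rather than a description of the product: given a log of who spoke when, software can compute each participant’s share of talk time and number of turns, which is exactly the kind of signal a student needs in order to notice that they are dominating (or disappearing from) the conversation.

```python
# A toy illustration of conversation metrics in the spirit of what Riff
# surfaces. Riff's actual algorithms and data formats are not public;
# this just computes talk-time share and turn counts from a hypothetical
# log of (speaker, start_seconds, end_seconds) utterances.

utterances = [
    ("maya", 0.0, 42.0),
    ("jo", 42.0, 48.0),
    ("maya", 48.0, 95.0),
    ("sam", 95.0, 101.0),
]

def participation_report(log):
    """Summarize who held the floor and how often each person spoke."""
    talk_time = {}
    turns = {}
    for speaker, start, end in log:
        talk_time[speaker] = talk_time.get(speaker, 0.0) + (end - start)
        turns[speaker] = turns.get(speaker, 0) + 1
    total = sum(talk_time.values())
    for speaker, seconds in talk_time.items():
        print(f"{speaker}: {seconds / total:.0%} of talk time, {turns[speaker]} turn(s)")

participation_report(utterances)
# maya: 88% of talk time, 2 turn(s) -- the kind of imbalance a dashboard
# can surface so the group can rebalance its next conversation.
```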
We can take this idea much further, particularly if we blend synchronous and asynchronous tools. The ideal we should be trying to replicate and improve on is not just the blended classroom but the flipped classroom. One of the perennial challenges with flipping the classroom is that facilitating consistently productive group work among students is incredibly hard. It’s a lot of work that sometimes fails regardless of the instructor’s skill and best efforts. It’s hard even in a small class when conducted by an instructor who practices it regularly. It’s really hard in a large class when conducted by an instructor who is mostly trained and experienced in a conventional lecture model.
We shouldn’t be surprised at the stories of instructors who tried to flip their classes and had horrible experiences. We haven’t provided them with the tools that they need. With its competency-based analytics, courseware can help faculty prepare by giving them a sense of the foundational knowledge that students may be either mastering or struggling with. But by itself, that information is not sufficient for a successful flipped classroom. The hardest part about flipping a class is getting the groups to collaborate effectively and consistently. I can’t think of any widely adopted EdTech tools that are specifically designed to help with this part of the challenge.
I think it’s possible to scale the seminar and active learning across most disciplines. I think we should put more energy into improving the quality of peer feedback rather than single-mindedly focusing on improving machine feedback. There is research, and there are products that can help to achieve this goal. Both have been chronically undernourished because all the glory (and money) has gone to machine assessment. But this historic moment we are all living through is creating an opportunity to rebalance our efforts. COVID-19 has forced EdTech—and many educational institutions—to think about the richness of human-to-human conversational experiences with more clarity and specificity.
In my next post in this series, I’ll describe how we might tackle this challenge using the hardest course to scale that I can think of—English Composition—as my example.
Fred M Beshears says
You may want to consider Tutored Video Instruction. It’s a simple idea that goes back almost fifty years. Essentially, to replace a lecture course, you organize students into small study groups of around 5 to 7 students. You make one student the study group leader (aka the “tutor”), and you provide the group with a video of a lecture that’s been broken up into segments. You also provide them with study questions and problem sets. The group leaders start and stop the lecture video at scheduled intervals. In between lecture video segments, the group leader puts the study questions and problems for that segment to the group.
Back in the mid-1970s, the Dean of Stanford’s School of Engineering, James Gibbons, pioneered this approach to distance education. He still believes in it today.
Here’s something that John Seely Brown had to say about TVI in his classic book The Social Life of Information.
John Seely Brown on Tutored Video Instruction
https://memeinnovation.wordpress.com/2020/06/23/john-seely-brown-on-tutored-video-instruction/
Also, here’s a recent interview with Gibbons, who’s still active at Stanford.
Lessons in remote learning from the 1970s: A Q&A with James Gibbons
The former dean of Stanford Engineering looks to experiments he did more than 45 years ago to help answer the question that’s on everyone’s mind: How will online learning work out?
By Andrew Myers
August 14, 2020
https://engineering.stanford.edu/magazine/article/lessons-remote-learning-1970s-qa-james-gibbons
Luke Fernandez says
Love the woodcut. One thing that might bear more investigation is whether the seminar is really just something that elite schools can afford. I teach at a public university with tuition under $10k a year. And yet there are still lots of seminar-style classes being offered here with enrollments under 20 students. And AFAIK we’ve got a sustainable fiscal model. Maybe the challenge in sustaining the seminar has less to do with scaling it than with making other parts of the university more efficient.
Fred M Beshears says
Back in the early 1970s, Stanford came up with an idea that they used in their distance education program. It’s not a faculty-led small seminar, but it does involve organizing students into small, student-led study groups. The innovator here was Jim Gibbons, the dean of Stanford’s School of Engineering. He decided to combine a technology that was new at the time, the cheap VCR, with an old idea: organizing students into study groups.
He called it Tutored Video Instruction, which is a bit misleading because the tutor is actually the student who’s leading the study group. Nowadays this approach is called team-based learning. IMO, it deserves at least as much attention as some other ideas that are more popular today (e.g., the flipped classroom).
As John Seely Brown describes below, the student-led study group meets face-to-face to work its way through lecture videos that have been broken up into relatively small segments (e.g., 10 minutes). At the end of each segment, the team leader stops the video and the study group works collectively on problem sets and discussion questions. There is no paid teacher in the room, but today one could envision a slight alternative where the students meet virtually with a paid tutor via Zoom.
IMO, even though it’s sometimes thought of as “distance education,” the TVI approach would be an improvement on the large lecture classes we see today. Yes, the large lecture is technically face-to-face instruction. But the larger the class gets, the harder it is to see the distinction.
Back when I worked at UC Berkeley (1987-2007), we used to joke about our large lecture classrooms: “How many rows back do you have to sit before it becomes distance education?”
In any case, Michael, I think that TVI (aka team-based learning) deserves to be mentioned in your discussion of how we might scale up small seminars so they can be seen as alternatives to large lecture classes.
——————————————————
From The Social Life of Information (2000)
by John Seely Brown and Paul Duguid
(pp. 221–223)
PEER SUPPORT
One of the most intriguing social aspects of learning is that, despite the metaphor of apprenticeship, the relationships involved in enculturation are not simply ones of novice and expert. Putting learners in contact with “the best in the field” has definite value. Peers turn out to be, however, an equally important resource.
An early attempt at distance teaching by video revealed this quite unexpectedly. Jim Gibbons, former dean of engineering at Stanford, taught an engineering class to Stanford students and engineers from Hewlett-Packard. When it became impractical for the engineers to attend, Gibbons started recording the class and sending the video to the engineers. The engineers would watch these tapes as a group. At regular intervals they would stop the tape and discuss what Gibbons and the class were talking about, coming to some sort of collective understanding before going on. [24]
To Gibbons’s surprise, the engineers, though they had lower academic credentials coming into the course, consistently outperformed the classroom students when tested on course material. This finding has proved remarkably robust, and other courses using the “TVI” method have had similar comparative success.
Gibbons has been careful to note, however, that the success did not simply result from passing videos to learners. The name TVI stands for tutored video instruction, and the method requires viewers to work as a group and one person from that group to act as tutor, helping the group to help itself. This approach shows, then, that productive learning may indeed rely heavily on face-to-face learning, but the faces involved are not just those of master and apprentice. They include fellow apprentices.
The ability of a group to construct their education collectively like this recalls the way in which groups form and develop around documents, as we noted in chapter 7. Together, members construct and negotiate a shared meaning, bringing the group along collectively rather than individually. In the process, they become what the literary critic Stanley Fish calls a “community of interpretation” working toward a shared understanding of the matter under discussion. [25]
TVI is not an easy answer. As Gibbons and his colleagues argue in one discussion, “The logistics of creating videos, organizing training for small groups, finding and training tutors, etc. can be daunting.” [26] For many individual learners, of course, the logistics of finding a group – which in Gibbons’s approach precedes finding a tutor because the tutor comes from the group – can also be daunting. So colleges and universities play a critical role in providing this sort of access.
Gibbons’s results provide positive evidence for the importance of a cohort for learning. There is interesting negative evidence, too. Studies have shown that people doing course work in isolation, though they may do as well on the tests, find the credentials they receive are less valuable than those of their peers who worked in conventional classroom groups. Employers, the research of Stephen Cameron and James Heckman reveals, discriminate between the two. Those who possess all the information of their peers but lack the social experience of school are not valued as highly. This discrimination has led to what Cameron and Heckman call the “nonequivalence of equivalence diplomas.” [27] It will be important to see on which side of the equivalence divide the degrees of providers who allow students to take their degrees wholly on-line will fall. [28]
In making these distinctions, employers would seem implicitly to distinguish degrees according to the type of access they reflect, access not only to practices and practitioners, but also to peer communities. Stanley Fish once called an essay about communities of interpretation “Is There a Text in this Class?” With distance education, where texts are shipped to individuals, it will become increasingly important to ask, “Is there a class (or community) with this text?” [29]
There’s more at:
John Seely Brown on Tutored Video Instruction
https://memeinnovation.wordpress.com/2020/06/23/john-seely-brown-on-tutored-video-instruction/