For a couple of years now, we’ve been saying that higher education is at the beginning stages of a long transition from a philosophical commitment to student success toward an operational commitment to it. In other words, colleges and universities are beginning to grapple in earnest with how to rewire themselves so that their culture and processes are deliberately optimized and continuously tuned to support their students in getting the best education possible. This is a profound shift. It will require major changes to the ways in which academia works and the ways in which ed tech designs and markets its products. It will be very hard and take a long time. But the drivers of this change are in place.
Recently, I wrote about how our concept of Empirical Education has built into it a theory of change. The implication is that it has the backbone for a methodology of change. Our work as both analysts and consultants has shown us that the increasingly aligned strategic priorities throughout the sector, when combined with the knowledge that is scattered across it, can be distilled down into a powerful yet flexible methodology for system change in education analogous to Design Thinking or one of the Agile software development methodologies. It can be a set of processes, built on a fairly small set of fundamental principles but supported by a lot of detailed craft knowledge and a rich ecosystem of supporting tools. It can be owned by no one, although there would likely be some premier practitioners of it. Colleges and universities could use it to redesign themselves to be more student-centric and, in the process, also more educator-centric. Product and service companies could design their offerings around it and compete based on their ability to help their academic customers better implement it.
This sense of possibility has been the animating impulse behind the Empirical Educator Project (EEP). We started with only a hazy idea of what we were building. Over the last twelve months of working with academics and ed tech product people, some aspects have become clearer. I have grown more confident in the potential of the idea even as I have grown more overwhelmed with clearer understanding of the size of the undertaking.
I am going to articulate my latest thinking about it in this post.
The time is now
There is a saying among consultants that potential clients won’t hire a consultant until and unless they both realize that they have a serious problem and come to accept that it is not a problem they can solve on their own. That holds equally true for a wide range of difficult changes that require help or cooperation, from coping with an addiction to building a functioning government to changing an institution. Higher education has been an incredibly stable system. As in, remarkably consistent over a period of about a thousand years. Historians of education tend to write about changes that take place over decades or half-centuries. There has been a looming question of whether such a slow-changing institution can adapt to such fast-changing times. Unsurprisingly, this debate has been raging for a few decades now, with relatively little sector-wide change to show for it. Is the system terminally rigid, or is it in a state of punctuated equilibrium that will shift with appropriately dramatic amplitude once it reaches an inflection point?
I believe the latter is the case, and I believe that we are at that inflection point. Access-oriented institutions—particularly publicly funded ones—have already been under pressure for some time now to show better outcomes for students in terms of rough measures like graduation rates and time to graduation, as well as some more meaningful but difficult measures popping up on the margins such as employment and career success. On the other end of the spectrum, the elite institutions whose brands have popularly defined excellence in education for the last century or more are starting to realize that they need to adjust to changing student expectations if they are going to continue to be considered the gold standard for the next century or more. The MOOC craze was complex and problematic, but it woke the elites up to the potential for technology-enabled approaches to enhance their teaching practices rather than detract from them, even as their students show up on campus with increasingly high expectations for the kinds of access to knowledge, interactive experiences, and high-touch communication that technology can enable. And in the middle, private universities with decent regional reputations and tuitions that approach those of Ivy League schools are increasingly under pressure to justify their tuition with something of more permanent value to students than climbing walls and dining halls. More and more, the buzz is about innovative partnerships with employers, or about learning analytics, or about student success systems supporting better guidance counseling. In other words, we are seeing colleges and universities grope toward approaches that enable them to more reliably support student success. And they are looking for help to do it.
The focus of that previous paragraph is primarily on undergraduate education, but it also increasingly applies to graduate education. We sometimes see this problem manifest itself in financial terms, where it gets somewhat obscured by the current conversations around Online Program Management (OPM) companies. Universities often launch career-oriented graduate programs such as MBAs and MSWs because (a) they are looking for more revenue to make their institutions more sustainable, (b) online programs can scale without scaling costs like real estate and physical classrooms, and (c) they know there is a market of people who are inclined to sign up for online graduate programs that can fit with their work and family schedules while also giving them credentials that will help them advance in their career ambitions. But as the online MBA market gets saturated, universities increasingly have to find differentiators. And they can’t use climbing walls or dining halls. In the end, the only effective and durable differentiator for an online career-oriented graduate degree program is its effectiveness at helping the students achieve their goals. In this space, the immediate university driver is revenue and the immediate student goal is career advancement. So the sector tends to view this change narrowly. But if you zoom out a little, it becomes clear that the trend with graduate programs and OPMs is just one particularly clear example of where the academic institution’s financial sustainability issues are driving it toward a sharper operational focus on its mission.
Let’s turn now to the educational vendors, who are also at an inflection point across product categories. All of these companies—curricular materials providers, LMS vendors, SIS vendors, analytics vendors, and so on—are looking to move up the value chain and argue that their products can directly, meaningfully, and provably impact student outcomes.1 They have to, because most of the major ed tech product categories are either in danger of commodifying (in the case of established product categories) or in danger of failing to achieve meaningful market penetration (in the case of new ones).
The textbook companies hit the wall first. As students increasingly found ways to avoid buying new books (or any books), the textbook publishers raised their prices, which started a vicious cycle of reduced sell-through followed by price increases followed by further reduced sell-through followed by further price increases. This was ultimately unsustainable, particularly since the internet has made basic factual information and focused educational supplements—think YouTube—easy to obtain for free. Increasingly, publishers had to make the case that their content is somehow better than the commodity content. But better how? For a long time, the “better” publishers worked on was instructor convenience. But there’s only so far that slides, extra problem sets, and auto-graded homework can compensate for the vicious pricing cycle, particularly since the commodity materials get more organized and feature-rich over time. Eventually, the major publishers came to the conclusion that the only sustainable “better” they could shoot for is more educationally effective.
Pearson was the first out of the gate with a massive push for “efficacy.”2 They have bet and are still betting the company on that strategy. But as I have written about here before, the fundamental problem is that products can’t really be “efficacious” in and of themselves unless the educators in whose class the materials are being used (a) agree with the efficacy goals that have been defined by the product developers and (b) change their teaching to work with the educational strategies designed into the products. More fundamentally, the educators have to trust the research claims of the vendors in order to even think about the product-defined efficacy goals, much less adjust their teaching strategies. Pearson’s original articulation of efficacy failed to account for any of this. They have since adjusted their course, and other curricular materials developers—most notably McGraw-Hill Education and Macmillan among the larger players—have followed suit by also focusing more on encouraging faculty to buy into research-backed teaching practices and then, having obtained that buy-in, showing how their products support and implement those practices.3 But for all their good efforts—and they are generally good, honest efforts—these vendors are pushing string. Most academics will never take them seriously as a source of advice for considering deep and scary changes to their teaching practice.
Meanwhile, in the LMS space, the developed markets have saturated and are stabilizing. New adoptions appear to be down. There are many developing markets to plumb, but they are slow and expensive to develop. So LMS vendors too have been trying to move up the value chain by talking more and more about student success. D2L has focused for some time now on the course design process and has been adding tools to its portfolio like LeaP, which is a tool for recommending personalized supplemental curricular materials.4 Blackboard has gone so far as to promote themselves as “your partner in change,” to the point of deprecating their flagship LMS product as “not enough.”5
And yet, the LMS companies face the same uphill battle with credibility that the textbook publishers do. By and large, academics are not going to look to their LMS providers for guidance on how to change their teaching practices. The same goes for the upstart product categories like learning analytics. Vendors will struggle to convince academics to change their teaching practices, but their products will mostly fail to demonstrate meaningful learning impact until the academics adopt practices that take full advantage of the products. All these vendors need to climb a wall of credibility with academics, but they can’t do it unless somebody throws them a rope. (Companies with significant faculty-facing service components have the best chance of swimming upstream, but that’s another post for another time.)
All the institutions in the sector—all types of colleges and universities, all types of ed tech vendors—have realized that they have a problem and are starting to realize that they can’t solve it on their own. They recognize that the core problem is that colleges and universities need to get much better at supporting student success, however their particular students may define it. They all want to get there and are starting to look to each other for help. But they don’t know how, and most of them can’t do it alone.
There’s only one stakeholder group in this picture that has not yet gone through the process of seeing that they have a deep problem and accepting that they need help solving it. Have you spotted who they are?
The people who can actually solve the problem
While the shift in incentives has reached a tipping point for the institutions, the same cannot be said for the faculty. Their graduate training is largely unchanged. Their tenure and promotion criteria are largely unchanged. The rewards and accoutrements of professional accomplishment are largely unchanged. Faculty have been given no reason to change; therefore, they don’t. Everybody knows this is true.
Or not. There are several vital aspects of this story which everybody “knows” that are either misleading or flat out wrong.
First, faculty do change. Anybody who has significant experience with the development of online learning programs or other course redesign efforts has seen it happen. They have heard faculty say that their experience in the redesigned class has changed the way they teach in other classes. They have watched skeptical faculty turn into preachers of the gospel. There are converts. Despite a dearth of incentives and a plethora of disincentives, despite uneven support, despite the fact that most will earn no glory for it on the other side of the closed doors of their respective classrooms, faculty do embrace pedagogical change when they have the right sorts of experiences that enable them to see the benefits.
Where are these amazing faculty members? They are everywhere and nowhere. They tend to be invisible on their home campuses, although if you ask around in different departments, you might be lucky enough to catch sight of one or three. (Or a dozen.) They have often learned the hard way that there is little benefit and significant pain involved with preaching on their home campuses, so many of them keep quiet and quietly work their magic in their own classrooms. If you want to see them in numbers, you usually have to go to one of the conferences where they congregate. I am going to one this week. One of the main activities of the participants will be crying on each other’s shoulders about how under-appreciated and under-resourced their efforts are on their respective home campuses.
It is also untrue that incentives for faculty to excel in their teaching craft remain rare. It’s still early days, but there are green shoots everywhere. Most of the time, we only hear about a small number of schools that are doing remarkable things. Arizona State University, Southern New Hampshire University, and Western Governors University, over and over again. If you’re a little more knowledgeable, you might have heard about work at the University of Central Florida or Georgia State University. And if you’re paying attention to formal scholarship, you might know a little about work coming out of places like Carnegie Mellon University, Duke, and Stanford. We could look a little further down the publicity pyramid at places like the University of Maryland Baltimore County. You very likely haven’t heard about the amazing work happening at diverse schools ranging from James Madison University to Coppin State University. I wouldn’t have known anything about the accomplishments of either of these institutions if I hadn’t stumbled upon them through my various travels in this very odd job of mine.
And because the news tends to focus on a few exceptional institutions, it also focuses on three contributors to success that are among the hardest to change: leadership, governance, and money. It is simply not true that the only institutions making real change have once-in-a-generation presidents, an iron grip on the faculty, and/or tons of funding. We see innovation everywhere. And everywhere it happens, it happens because institutions are finding new ways to draw on their most precious yet plentiful resource: their faculty.
There is an old term of art that deserves reviving and refreshing: the scholarship of teaching and learning (SoTL). SoTL is often seen as a grassroots effort by faculty who care about teaching to wrap it in the cloak of academic validity. If the only way that excellence in teaching will be valued by the institution is to get it into peer-reviewed journals, then let’s find a way to get it into peer-reviewed journals. In the past, institutions generally didn’t take the bait. Many treated SoTL as a pat on the head to faculty who were slaving away carrying the heaviest teaching and advising loads. “Here, you care about this teaching stuff. Have a workshop. You can pretend what you’re doing is scholarship for a while. And we’ll give you a certificate!”
That is changing. More and more institutions are realizing that faculty aren’t the problem; they are the solution. But that grassroots energy that comes from SoTL and other faculty empowerment efforts must be aligned with institutional efforts through support, incentives, and research. More and more institutions are making that connection. For example, here’s a graphic illustration of the dynamic, taken directly from Georgetown University’s Designing Our Future(s) web site:
Here are some lessons learned from Georgetown’s white paper about the progress the initiative has made so far:
A few core rules for this innovation work have emerged. First, every project has to push against some structural constraint (the 15-week semester, the credit hour, the nine-month calendar, etc.) and test variations of it. Second, projects cannot be idiosyncratic or depend on the particular interests of one talented faculty member; they have to be pilots from which we can generalize and which we might apply to other scenarios or problems. Lastly, we only fund a Red House project for one year (or the equivalent); after that, if a project is to survive, it has to be absorbed into the curriculum and faculty workload.
Beyond these basic rules we have also learned some valuable lessons about the viability of experimental and creative curricular work in a culture designed for deliberative shared governance and slow change:
- We developed strong stakeholder involvement as part of our iterative design process—one that frequently included associate deans, the registrar, compliance officers, and financial aid representatives—early in each project’s development. Likewise, we communicated well and regularly with our board, alumni, and donors.
- We do not give ourselves as good a grade on continuous communications with faculty. Early on there were many open invitations and speaker events, and a drumbeat of updates. As the work became more intense and demanding, we focused inward, and neglected to continue to reach back out to this important community. We learned it is absolutely critical to spiral communications outward, and to be as inclusive and open as possible, especially as the work takes specific shape within a core group.
- Very early on we should have established a formal faculty review and approval process for Red House pilots. We assumed we would work within the Curriculum Committee approval structures, long established for important reasons, but which do not in the end benefit a research and development initiative. Last year, a Designing the Future(s) Advisory Committee was created, with the sole mission of approving and monitoring innovation projects. This system is now working very well; it might have accelerated progress if it had been instituted earlier.
These are very early lessons, and they are somewhat Georgetown-specific. But it’s easy to see some more general principles emerge that could be useful across a wide range of educational and cultural contexts. And some of the most fascinating and remarkable changes are happening at institutions that you never read about, including some that have traditional faculty governance, few financial resources, and leaders who are extraordinary in the “normal” sense that many committed, hard-working, people-oriented academic leaders are in colleges and universities of all shapes and sizes.
I am going to write about some specific examples of this sort of organizational alignment in upcoming posts. For now, I want to spend a little time on the characteristics of a good methodology.
Toward a methodology of Empirical Education
When I think about a general methodology that can be adopted and adapted across a wide range of contexts, the two models that come to mind immediately are Agile software development and Design Thinking. I’ll focus on Agile (and particularly Scrum) for the moment because I know it better, but as far as I can tell, the same basic principles apply to Design Thinking.
First, the methodology should be designed to unleash the creativity of the knowledge workers involved in the critical processes. All too often, we take really smart people and put them in a straitjacket of process. We tend to design our mission-critical processes to get us predictable results, often by controlling the human element through various management techniques. The problem arises when we ask for predictable results in an unpredictable environment, having handicapped the very smart people who are best able to minimize the problems that arise out of unforeseen circumstances while maximizing the benefits of unforeseen opportunities. There is no knowledge work I know of that has more frequent and dramatic unforeseeable challenges and opportunities than education. We wrap a lot of process around education, but it’s not the right kind of process to promote excellence by getting the most out of talented educators, just as using Gantt charts was not the right sort of process to promote excellence in software development by getting the most out of talented engineers.
At the same time, empowering knowledge workers is not the same thing as letting them do whatever they want. I have been in an Agile software development environment where the engineers interpreted Agile to mean that they decide everything. The results were not good. All Agile methods that I am familiar with have multiple roles, with each role having certain authority and responsibilities. These roles are designed to be mutually supportive, and the success or failure is very much a success or failure of the entire team and its teamwork. This is a big cultural change for many institutions, where “academic freedom” has come to be used reflexively as a shield from any demands, sometimes because some of those demands are unreasonable or unwise. There has to be a well-defined process by which student success is understood to be the collaborative responsibility of the academic team, working together as an ensemble.
These two basic principles—empowering individuals and working as teams—can generally be captured in a fairly small number of rules and roles, regardless of the flavor of Agile being practiced. And most Agile teams that get them right will function adequately while getting more satisfaction from their work—under relatively unchallenging circumstances. They may even feel that they are doing Agile well. But then there is a whole world of craft that is all about handling context-specific challenges. How do you balance functional versus non-functional requirements? How do you prioritize aging aspects of the software, a.k.a. “technical debt”? How do you manage large projects that require many Agile teams? How do you deal with extrinsic constraints on release timing (like the start of an academic term)? Agile practitioners can always improve their craft, both as individuals and as teams. Entire industries of tools and consulting have grown up around supporting excellence in that craft.
Which is utterly unlike the way in which the industries that surround education function (or fail to function) today. There is a reason for that. An industry designed to promote operational excellence of knowledge workers cannot succeed in absence of a shared understanding among the knowledge workers about what operational excellence looks like. Agile software development is a craft with a lot of consensus around the principles, a track record of results, and enough expert practitioners that knotty problems, along with their solutions, can be shared fairly efficiently across a very large and loosely organized profession. There is a lot of debate too, which is the sign of a healthy ecosystem of knowledge workers advancing the leading edge of their craft. But that debate occurs within the context of a common understanding that is woven into the culture. Practitioners in those debates are rewarded with recognition of their expertise and contribution to the field. And their employers love having these experts and reward them appropriately because their excellence at creatively applying and innovating with the methodology advances institutional goals.
With that cultural substrate in place, a tool or service vendor can come in and say, “We help you solve X sort of problem in your Empirical Education process,” and the prospective customers will understand what is being offered, be capable of evaluating its utility, and place (monetarily quantifiable) value on that utility. That’s what we need for learning analytics, adaptive learning, or just about any whizzy, trendy ed tech thingamabob you can think of or that has yet to be thought of.
Most or all of the elements for a methodology of operational excellence in education exist in the world today. They need to be gathered, distilled, and refined into a learnable, repeatable, and adaptable practice. That is the outcome we aspire to achieve in collaboration with the participants in the EEP, not to mention support from the collective wisdom and will of higher education writ large.
Moving forward
As I wrote earlier, I will be blogging about relevant examples we are seeing, on both the institutional side and the vendor side, in the coming days. And EEP will soon be announcing the first release of some tools that can help form a foundational layer of the institutional infrastructure for Empirical Education. In the meantime, if you are going to be at the Online Learning Consortium Accelerate conference, I will be moderating an EEP-relevant panel discussion of SoTL on Thursday at 11:15 AM in Oceanic 1. From there, some of us will head to the exhibition hall, where we will have an EEP meet-up at the Soomo booth (#226) at 12:15 PM. You don’t have to be a member of the current EEP cohort to join us; the EEP-curious are welcome.
1. I use phrases like “student outcomes” and “student success” interchangeably and broadly for the purposes of this post, even though I know that they can have different connotations.
2. Disclosure: Pearson is a sponsor of EEP.
3. Disclosure: McGraw-Hill Education is a sponsor of EEP and a subscriber to our Trusted Advisor market analysis service. Macmillan is a sponsor of EEP.
4. Disclosure: D2L is a sponsor of EEP and a subscriber to our LMS market analysis service.
5. Disclosure: Blackboard is a sponsor of EEP and a subscriber to our LMS market analysis service.
Bryan Alexander says
This feels like it shares currents with evidence-based medicine.
Which is a good thing.
Peter Hess says
Is the “not” in this sentence meant to be there?: “…but their products will not mostly fail to demonstrate meaningful learning impact until the academics adopt practices that take full advantage of the products…”
Michael Feldstein says
That extra “not” wasn’t not not supposed to be there. It’s been unknotted. Thanks for the catch, Peter.
Joseph Tryble says
Operational excellence in Agile mode. Great insight to bring the required change. Time is ‘critically’ now! Good thought for the vast number of higher education institutions in Asia as well.
edwardoneill says
Yes, we should improve teaching based on what works.
But I fear the dream of local teams of empowered knowledge workers is too small.
Even working in the most highly functional teams, such workers are struggling against an institutional context which blocks at every turn the kind of sharing of evidence that would allow the whole educational system to improve.
….
So evidence of learning might improve TEACHING locally, but what does it do for courses and programs? For higher education itself?
[Continued at: https://plus.google.com/+EdwardOneill/posts/h1eBio6aPNv ]
Michael Feldstein says
Edward, one of the challenges that is discussed regularly in Agile is the impedance mismatch between the way good engineering teams need to work and the way that good executives need to manage their business operations. Deadlines are a classic and easy-to-understand example. Agile methodologies emphasize that there can be a lot of unknowns under the covers when developing features that can impact timeline, so they avoid promising X feature set by Y date as much as possible. But delivering X feature set by Y date is exactly the kind of certainty that executives need (or think they need) in many cases. Negotiating that misalignment is part of the craft, and that’s the kind of craft that’s needed in higher ed as well.
edwardoneill says
Thinking about my (overlong) reply after writing it, I thought of it this way.
You are writing about dispersed groups working separately, I think, and sharing results: empirical knowledge coming from the ‘bottom up.’
I am pointing out that results are only sharable and applicable beyond their context within some common framework. I don’t believe the scientific method is enough of a common framework, and I am arguing that the obvious framework is: outcomes or competencies.
Teaching strategy S may be effective to degree D for batch of students B, but if we hold the outcomes constant, we could compare not just teaching strategies (which are really harder to replicate than we imagine) but also learning resources, assessments, and other things.
Put differently: we’re not just concerned with how these or those human beings teach, we’re concerned with the exchangeability of all the elements of the learning process.
We neither want to reinvent the wheel, nor to use a specific wheel that someone else found effective: we want to find the right type of wheel for our make and model.