Anyone who has been awake in higher education in the last couple of years knows that there is a lot of attention on outcomes and assessment lately (although with distinctly different emphases in the U.S. and the E.U.). A natural consequence of this attention is that the various LMS platform developers are adding capabilities focused in this area. Blackboard has probably created the biggest splash with its highly promoted Outcomes product, but all the major platforms are doing work in this area, to different degrees and employing different strategies. I’ve been curious for some time about how the different approaches to this thorny problem space will shape up, which is why I am grateful that Ken Chapman, Desire2Learn’s Lead Product Manager, was willing to sit down with me at EDUCAUSE and talk about what D2L is doing in this area.
Before we get into the details, though, we need to lay out the basics of the problem space. Fundamentally, outcomes assessment is about connecting a student’s class experience with some larger goal. For example, take the case of a student reading Chaucer in a literature class. Did she learn how to better analyze a poem? Did she learn how to read Middle English? Did she learn about Chaucer’s point of view and historical context? Did she learn skills and values that will make her more likely to pass other classes and graduate? Did she learn how to write a better essay?
Notice that these assessment points in my quick list are quite different from each other. This is the root of one of the most intractable problems in the outcomes debate: What should we be assessing? Which of the questions listed in the previous paragraph is the most important to answer? What is the most important possible outcome of an education? These are cultural, political, philosophical, practical, and ideological questions all tangled up into one big hairball. There isn’t one universally best answer. Some of where you come down depends on why you’re asking the question in the first place. Are you concerned with training the next generation of literary scholars? Are you looking to maximize students’ likely economic benefit from their education, regardless of career path? Are you trying to create better citizens? Or do you care most about helping the student cultivate a rich and fulfilling life of the mind? The answers to these questions have a strong impact on whether it makes more sense to look at test scores or portfolios, whether assessment instruments should be the same across courses or even across states, and lots of other critical implementation questions. Without widespread agreement on goals and priorities, there will be no widespread agreement about what to assess or how to assess it. And such widespread agreement is nearly impossible to get in many cases. Yet there is also a sense that if we give up on assessing outcomes altogether, we run the risk that the schools that students, parents, communities, and governments invest in will produce nothing of value for anyone.
This is the morass into which the LMS developers must journey. They can’t dodge the challenge of outcomes assessment, and they can’t afford to oversimplify it either. At a minimum, they have to provide tools that will support at least a significant subset of the kinds of goals I listed above. In an ideal world, the tools would help different stakeholder groups make thoughtful and effective decisions about their goals and priorities by supporting a life-cycle process for the development and continuous re-evaluation of outcomes definitions and assessments. In this first post of the series, I’m going to look at how D2L defines the outcomes structure itself. In the second, I will describe some of the capabilities they offer for tying assessments to those outcomes, and in the third post, I will talk about how all this can link to a learning object economy and offer some final thoughts.
Why would they want to do that? Suppose that the psychology department has decided that all psych majors should acquire a certain set of competencies before they graduate, and that understanding confidence intervals is one of those competencies. If other departments share the competency definition, then a student who learns about confidence intervals in his sociology class can track that he has made progress toward learning what he needs to know for his major. On the other hand, it may turn out that the various departments feel their students have different needs with respect to understanding and properly using confidence intervals, so maybe they don’t share competencies. Under D2L’s system, they don’t have to. Each department can define its competencies separately. The system supports cooperation but doesn’t mandate it.
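To make the shared-versus-separate distinction concrete, here is a minimal sketch of how such a competency structure might be modeled. This is my own illustration, not D2L’s actual data model or API; all class and field names here are hypothetical.

```python
# Hypothetical sketch of shared vs. department-specific competencies.
# Not D2L's actual data model; names are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class Competency:
    """A competency definition, owned by one or more departments."""
    name: str
    departments: set = field(default_factory=set)  # who shares this definition
    progress: dict = field(default_factory=dict)   # student -> fraction complete

    def record_progress(self, student: str, fraction: float) -> None:
        # Progress counts toward the competency regardless of which
        # sharing department's course the student earned it in.
        self.progress[student] = max(self.progress.get(student, 0.0), fraction)


# Shared definition: psychology and sociology agree on one competency,
# so work in a sociology class counts toward the psych major's requirement.
shared = Competency("Understand confidence intervals", {"psych", "soc"})
shared.record_progress("alice", 0.5)   # earned in a sociology class
shared.record_progress("alice", 0.75)  # reinforced later in a psych class

# Or each department defines its own version; the system mandates neither.
psych_only = Competency("Apply confidence intervals in research design", {"psych"})
```

The point of the sketch is simply that sharing is a property of the competency definition, not a system-wide setting, which is what lets cooperation be optional.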
So that’s the basics of D2L’s competencies structure. In my next post, I’ll look at the assessments and rubrics capabilities in more detail.