I’ve been meaning to write this up for a couple of weeks now. Ken Udas and I recently had a great conversation with folks from OpenBRR and Edutools/WCET about creating a community and framework to evaluate both Open Source and proprietary LMSs, drawing on the knowledge and resources of both OpenBRR and Edutools. In attendance were Tony Wasserman from Carnegie Mellon West, Murugan Pal of SpikeSource, Scott Leslie of BCcampus (and EdTechPost fame), Russell Poulin of WCET, and Bruce Landon of Douglas College (and LandOnline fame). What follows are some notes from the conversation.
Challenges to Community-based Evaluation of Technology
First we discussed the issues that make creating and sustaining a software evaluation community so difficult:
- Organizations may not want to share information related to core competencies. (This is less of an issue in the public sector in general and higher education in particular.)
- Some data has a relatively short shelf life. It is difficult for volunteer-based communities to maintain current data.
- Many organizations feel an obligation to do their own due diligence beyond any published reviews.
- Particularly for applications that interact heavily with human-to-human business processes (e.g., CMS, ERP, CRM and, arguably, LMS), standard evaluation matrices may be inadequate regardless of how sophisticated and nuanced they may be. These tools must be supplemented with evaluations based on organization-specific use cases, with data possibly in narrative form.
- Regardless of the outcome of evaluation, many public-sector organizations are obligated to follow particular RFP processes that don’t always accommodate differences in Open Source product evaluation and procurement.
Observations About the Loci of Value
Next, we explored the sweet spot where evaluation communities can deliver the most value with a practical amount of effort:
- Evaluation frameworks are often easier to share and re-use than the data itself.
- There are two models for sharing evaluations: centralized (e.g., Consumer Reports) and decentralized (e.g., Amazon.com readers). Value for either depends on reputation management and on transparency of the evaluation process.
- Evaluation can happen on three levels:
- General evaluation frameworks as well as less perishable evaluation data can be gathered on an industry-wide basis into a centralized repository.
- More fine-tuned frameworks, some use cases, and more perishable data can be shared among smaller communities of peer institutions.
- At some point, individual organizations will need to bridge the “last mile” between evaluations that work for their cohort of peer institutions and their own organization-specific needs. This can be done either internally or through the employment of an external consultant.
- A full-service support ecosystem for software selection might include the following elements:
- A peer-institution evaluation community “erector set”, including processes, training, and supporting community software;
- A library of use cases to support the supplementation of BRR-style evaluations;
- Tools for helping organizations reconcile evaluation processes with their institutional RFP and procurement processes;
- Industry-wide general BRR-style rubrics and relatively non-perishable data (a minimal sketch of such a rubric appears after this list).
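To make the "BRR-style rubric" idea a bit more concrete, here is a minimal sketch of the weighted-scoring arithmetic such a rubric typically involves: individual metrics are scored on a common scale, rolled up into category averages, and combined by category weights. The category names, weights, and scores below are purely illustrative assumptions, not OpenBRR's actual categories or weightings.

```python
# Minimal sketch of a BRR-style weighted rubric.
# Category names, weights, and scores are illustrative only;
# they are not OpenBRR's official model.

from dataclasses import dataclass


@dataclass
class Category:
    name: str
    weight: float        # relative importance; weights need not sum to 1.0
    scores: list[float]  # individual metric scores on a 1-5 scale

    def average(self) -> float:
        return sum(self.scores) / len(self.scores)


def overall_rating(categories: list[Category]) -> float:
    """Weighted average of per-category averages, on the same 1-5 scale."""
    total_weight = sum(c.weight for c in categories)
    return sum(c.weight * c.average() for c in categories) / total_weight


# Hypothetical evaluation of an LMS candidate.
rubric = [
    Category("Functionality", 0.30, [4, 3, 5]),
    Category("Community",     0.25, [3, 4]),
    Category("Documentation", 0.20, [2, 3, 3]),
    Category("Support",       0.25, [4, 4]),
]

print(f"Overall rating: {overall_rating(rubric):.2f} / 5")
```

In the model sketched above, a peer community might share the categories and default weights, while an individual institution adjusts the weights (or supplements them with use-case-based criteria) to bridge its own "last mile."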
From there, we talked a bit about next steps, which I won’t go into here. Suffice it to say, though, that I hope there are some. This is a great group of people and I’m learning a lot from them.
Derek says
This is indeed a tricky area, mostly because the shelf life of these evaluations can be very limited, with changes taking place so frequently.
The key issue here, as far as evaluation goes, is “what are you evaluating against – what would be the criteria?”
This is where things get tricky, because the development of thought that is occurring in this space (as you’ve acknowledged elsewhere in your blog) is heading away from the traditional LMS solution approach towards a more learner-centric and learner-owned model.
I’d favour the development of the decentralized approach you refer to – let’s learn from each other’s stories, taking their context into account, etc. – otherwise we’ll end up going round in circles trying to establish cumbersome evaluation frameworks that no one reads or follows, and which date so quickly.
PS – say Hi to Ken Udas – I knew him while he was here in New Zealand!