If you have never been involved with an open source project, one of the great sources of mystification (and anxiety) for you might be how a group of people spread out all over the world with a wide range of motivations can come together to work on a complex project and produce something coherent and useful. Of course, we know that such things do happen. (For a beautiful illustration of it happening under conditions of extreme lack of visible coordination, watch Jon Udell’s classic analysis of the evolution of a Wikipedia page, Heavy Metal Umlaut.) We know that it can work. But, from the outside, it’s hard to understand how.
The truth is that there is a wide range of governance structures that work for different open source projects, and none of them are particularly mysterious once you come to understand them better. Some projects, like Moodle and Linux, have structures that bear some superficial resemblance to normal management structures in that they have strong central managers who make a lot of final decisions. I say “superficial” resemblance because, in many cases, these managers are acting as part traffic cop and part adjudicator, rationalizing contributions and suggestions for direction from all corners of the community more than they are giving top-down commands.
Other open source projects, like Sakai, take a more distributed approach to management. When they can, they tend to modularize the development so that small groups can work largely independently of each other, reducing the required coordination to those areas in which their pieces have to work together, either from a technical perspective through integration or from a functional perspective through, for example, common user interface conventions. For those functions where cross-module decision-making has to happen, the community develops mechanisms that look a lot like a representative democracy. In some cases, community members will vote directly on an issue that needs to be resolved. In other cases, they will select representatives to work through the issues as a small group. Just how much (and what kind of) coordination is required depends on a number of factors. Software that has a significant user interface generally requires more coordination than software that does not. Programs whose modules share a lot of technical integration interfaces or services also tend to require more coordination than those that do not. And projects at the beginning of their life cycle tend to require more coordination than those that are mature.
Over the years, the Sakai community has tried a number of different coordination structures. One of the most recent innovations (and experiments, really) is the Product Council (PC), of which I am a member. What follows here is my own personal meditation on how the PC came about, what function it is attempting to serve and how successful we’ve been at it so far.
Historically, Sakai’s architecture has lent itself relatively well to the manage-by-modularization approach. Like most current-generation LMSs, it is very tool-centric. For the most part, the people who are working on the test engine don’t have a whole lot that they need to coordinate with the people who are working on the discussion board. The main areas of coordination tend to be in establishing cross-module consistency in user experience, documentation, and quality assurance. Often, these sorts of needs can be met very well through a combination of community discussion/debate, votes at the end of that debate, and documentation of the resulting standards and norms. Coordination in Sakai has operated roughly in this manner for a while now. And always, the final check is adoption. The Sakai software allows schools to choose whether they implement each tool (allowing them to swap in different discussion boards, for example). If a project doesn’t get widely adopted then we try to understand why. In some cases, a tool is developed to meet very local concerns and just isn’t appropriate for the wider community, which is perfectly fine. But in other cases, either the needs and use cases weren’t identified correctly or some quality norm in the community—either explicit or tacit—was not met. We look carefully at low-adoption tools to figure out what they can tell us about community needs, and then we try to make those needs more explicit.
Overall, it has worked pretty well. That said, the Sakai community decided last year that it would be worthwhile to experiment with other coordination mechanisms, for several reasons. First, while the process in place worked reasonably well, it could work even better. When we talked to tool/module/project owners, it became clear that they could use some help. One problem is that the norms and criteria are, to this day, not as clear and well documented as they ought to be in order to be maximally helpful to the project owners. The community has tried several times to create checklists and other aids, but we weren’t able to muster the sustained effort required to really get the job done. Some tribal knowledge (and, sometimes, guesswork) is required of project owners to figure out what needs to be done. Another problem is that project owners were not always getting the attention they needed from community members who weren’t as close to the project and could therefore look at it with a fresh eye. While it was always possible in theory for project owners to request that feedback from the community at large via the listservs, in practice it helps to have people who feel responsible for helping project owners round up the support they need to get their work peer reviewed.
The second major reason the community decided to try supplementing its coordination mechanisms is that Sakai’s modules are becoming progressively less independent of each other as we recognize the need for more services that cut across tools. The classic example is grading. We knew fairly early on that assignments and tests need to be graded, and therefore that the assignments and testing tools should integrate with the grading tool. But it has become progressively more obvious over time that almost any activity a student performs could merit evaluation and feedback from the teacher and therefore should be “gradable.” Thus we need a system-wide grading/evaluation service that will work across a wide variety of tools, which requires more coordination among the project leaders. This concern is real today in Sakai 2, but we anticipate it will become much larger in the next generation of the software. One of the guiding principles of Sakai 3 design is to move away from independent tools as the dominant paradigm and toward mashable educational affordances that are instantiated in services. Many different kinds of user-generated content in Sakai 3 should be gradable, discussable, searchable, taggable, archivable, shareable, and so on. While this approach could yield huge improvements for students and teachers, it also imposes a much larger coordination burden on the project leaders, especially in the early stages of development. So, whether we are talking in terms of evolution in Sakai 2 or revolution in Sakai 3, the fact is that the community’s evolving idea of what an LMS could and should be is raising the bar on our cross-project coordination skills.
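To make that coordination burden a bit more concrete, here is a rough, purely hypothetical sketch (in Java, since Sakai is a Java project) of what a shared, cross-tool grading contract might look like. None of these names or signatures come from the actual Sakai gradebook API; they are only meant to illustrate the kind of interface that every tool team would have to agree on before any one of them could ship against it.

```java
// Purely illustrative sketch -- NOT the actual Sakai gradebook API.
// It only shows the shape of a cross-cutting service that many tools
// would share, which is where the coordination cost comes from.

import java.util.Date;

/** A hypothetical contract any tool could use to report gradable work. */
interface GradingService {

    /** Register an item (an assignment, quiz, forum post, etc.) as gradable. */
    String registerGradableItem(String toolId, String itemTitle, double maxPoints);

    /** Record a score and feedback for one student on a registered item. */
    void recordScore(String gradableItemId, String studentId,
                     double points, String feedback, Date gradedAt);

    /** Fetch the current score, or null if the item has not been graded yet. */
    Double getScore(String gradableItemId, String studentId);
}

/** Example: a discussion tool treating a forum post as gradable work. */
class DiscussionTool {
    private final GradingService grading;

    DiscussionTool(GradingService grading) {
        this.grading = grading;
    }

    void gradePost(String studentId, double points, String feedback) {
        // Every tool that adopts the shared service has to agree on these
        // semantics (point scales, feedback format, timing), which is the
        // cross-project coordination described above.
        String itemId = grading.registerGradableItem(
                "discussion", "Week 3 forum post", 10.0);
        grading.recordScore(itemId, studentId, points, feedback, new Date());
    }
}
```

The point of the sketch is not the particular method names but the fact that the discussion tool, the testing tool, and the assignments tool would all be calling the same service, so a change to its semantics ripples across every project team at once.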
And thus the Sakai Product Council was born. The PC convened for the first time at last summer’s Sakai conference, and its function is about 90% coordination and 10% governance. First and foremost, our goal has been to take the existing documentation on project standards that came out of previous community efforts, place it in a framework of a project life cycle, work with current project owners to help them apply those standards appropriately to their particular work, and then feed the lessons learned back to the community in the form of better documentation. In this respect, the PC is really just a group of volunteers putting sustained effort into carrying forward work that the community has already started. It is entirely possible (and even likely, in my view) that, once those standards are articulated in a way that community members (especially project owners) find maximally useful, the PC will fade into the background, meet less frequently, and make fewer decisions. The 10% of governance the PC provides is to act as gatekeeper for what goes into the default Sakai distribution known as “core.” The PC has no say over who puts what resources toward developing which tools in what way. If a project group wants to develop something, ignore the PC entirely, and release it for anyone to adopt, they are absolutely free to do so. If the project group wants to gain broader exposure (and maybe more development resources) by including their project in the default distribution of Sakai, then the PC will help them ensure that they have met the requirements to be included. If we do a good job of writing up the community-backed pre-flight checklists and helping the project owners, then a “blessing” of a project by the PC should be mostly a formality in the vast majority of cases.
Since the PC has only been in existence for six months, it is fair to say that the jury is still out on how well it will work. Our first task was to look at the projects that were proposed for inclusion in Sakai 2.7 core. Nate Angell has a good write-up of both the scope and the results here. My impression is that the project owners found our involvement to be helpful on balance, although I hope that they will give us some feedback in this area. (In fact, a note to my fellow PC members: We should formally solicit this feedback from the 2.7 project leaders, including both those whose projects made it in and those who did not.) While we wait for the community to gather its thoughts on what needs to be done next in 2.x development, we are turning our attention to what the PC can do in the Sakai 3 context. This is a much larger and, frankly, more daunting challenge. With a mature product like 2.7, we only had to look at a small handful of projects and could judge them against pretty well-established community criteria. With Sakai 3, we have largely a blank sheet of paper. We have to look at the whole release and figure out what makes sense.
By the PC’s one-year anniversary at this June’s Sakai conference, the community will have the work we did on 2.7 as well as at least some work on 3.0 as inputs to evaluate how well this experiment is working. I am confident that we will recalibrate based on what we have learned. The PC may continue as it has, adjust its course, or be disbanded entirely. Democracy is always an ongoing experiment. That’s what keeps it vital. And that’s what keeps open source communities working.
DavidH says
Michael, thanks for the reflections from the PC point of view. What I find missing from both this and Nate’s report is an evaluation of the PC’s relation to other groups (RM, MT, etc.). I think reflection and dialogue here is frankly more important than with project owners. This should focus both on what the PC has done (tool promotions) and also on areas where the PC can help and support these initiatives – for instance, by advising on areas of the product like browser version and database support.
Michael Feldstein says
I’m not sure if I agree that dialogue with the release management and maintenance teams is more important than with the project owners, but I agree that it’s pretty important. And since I don’t recall the topic of how all these groups need to work together coming up except in the most brief and superficial ways, we definitely need to talk about this.