* Sustainability and Governance in Developing Open Source Projects as Processes of In-Becoming. Daniel Curto-Millet. Technology Innovation Management Review, January 2013.
Here is the summary:
“Daniel Curto-Millet, a doctoral student at the London School of Economics and Political Science in the United Kingdom, encourages us to ontologically redefine sustainability. His study of openEHR and the Opereffa framework has shown him how sustainability is not a state that is stable (even in its desire for stability), but instead a process where the multitude of actors, artefacts, archetypes, and so on, are all in constant flux. He thus feels we need to conceptualize sustainability in a manner that allows us to make sense of it processually – in other words, as in “becoming” (Deleuze and Guattari, 1987). In order to be able to do this, he draws our attention to the everyday negotiations, working outs, and engagements that openEHR and its larger ecosystem perform with and within, to achieve a more detailed understanding of sustaining (and not sustainability).”
We’ve chosen three excerpts:
* Why Open Source Governance Should Be a Verb
Typically, it is understood that a project is open source if its license conforms to criteria set by the Open Source Initiative (OSI). At its foundation, open source is a static, legal definition describing what can be done with the source code (Perens, 1999; Raymond, 2001). Whether this matters at all to the general public, or whether it has any immediate effect beyond the development team, is a problem known as the “Berkeley Conundrum” (Fitzgerald, 2003). It matters when considering the reaches that code as law can have, the specific mechanisms of social control it can induce, and the implications for democratic values (Lessig, 2006). Lessig’s view is more political, less static. His concern turns from legal code to one of consumption, production, and social responsibility. These issues are even more relevant given the new domains into which open source has entered, domains far removed from its academic and hacker origins (Fitzgerald, 2006; Lindman and Rajala, 2012). When discussing open source and archetypes (abstract representations of meaningful clinical concepts in EHRs), a board member says:
“When I hear open source I tend to think software rather than knowledge so it’s quite different. So the philosophy is, the issue is how do you know when an archetype is good. How do you know, the phrase is, how do you quality assure a model? That your fellow colleagues, the developers, that national governments know that this archetype is safe, doesn’t contain manifestly bad practice, whatever. And most people take the view that of a kind of waterfall approach, so you get the great and the good and the wise say what the requirements are, some clever people […] develop the archetypes, and then we pass this off to a standards body, […] have some experts, blood pressure experts who say oh yes, yes, yes, that’s right, they tick the boxes, they will have some formal criteria against which they’ll be marking the archetypes and they will pull in experts, maybe cardiologists. Now I just don’t think that’s going to happen. I don’t think it is possible to know when an archetype is good enough.” [emphasis added]
This quotation evidences the new nature of open source software, away from the code, in the knowledge realm; it shows some of the values and goals associated with using an open source approach (quality through editing, diffusion, and acceptance of the archetypes; open source rivalling standards bodies as an institutionalizing power; and continual, acceptable change). “When is an archetype good enough?” Who can answer that other than someone following an open source process?
Open source, and the artefacts it engenders, are definitions in the making, processes, arguments, and particular engineering models. The knowledge engendered is not a thing, a static good eventually catalogued; it is potentially embedded in a continual process of being made, of evolving. Given that an open source commons is an ongoing construction that can never be considered “finished”, it can be difficult to place a commons in time and ask the question: “When is an open source commons?” To answer the question, and to understand why it is so difficult to answer, we must study the nature of these commons.
* Open Source Commons Through Open Source Collective Actions
Ostrom’s work was framed by the economics of resource scarcity. Notably, one of the spin-offs of her framework helped inspire a framework on social-ecological systems (SES) (McGinnis, 2011). How can Ostrom’s work be useful in a field where what is abundant or scarce is not one of the usual resources that we think of (Anderson, 2009)?
What is an open source commons? According to the static, legal definitions of open source, the code is the foremost of commons. It is the central artefact to which people are contributing. However, focusing only on the source code is limiting, because it does not take into account the entirety of what Ostrom calls the “action situation”, where actors interact and evaluate outcomes (Ostrom, 2011). In open source, this is not one physical space, but many interrelated ones (e.g., presence in the code, in the mailing lists, in the documentation, in the IRC channels, in the annual conferences, even in the press). It is useful to see that open source is not just online coding, but that it occurs in a wide variety of media. The rules and engagements are likely to be different in each medium, and the ownership of those different spaces depends on various rules and norms of engagement. Ciborra would probably say that these technologies carry different “necessities of hospitality” (Ciborra, 2004).
The notion of an open source commons is also a fleeting one, given the increasing range of domains into which open source is entering (Fitzgerald, 2006). What a commons is, and therefore who owns it, is much more complex than it used to be. As an example, openEHR could be said to have several layers of commons. The project’s goal is to become a standard in health by defining and creating archetypes that in turn define meaningful clinical concepts. These archetypes are based on a reference model that has become an established standard. The reference model was principally inspired by the efforts of openEHR and other previous projects in which the core members participated. Archetypes are potential clinical requirements for any system that adopts openEHR; they describe clinical concepts such as blood pressure. To define archetypes, the openEHR foundation offers two editors, one from a company with goals closely aligned to the foundation, and another from Linköping University. The editors themselves are based on parsers that understand the Archetype Definition Language (ADL). When archetypes are drafted, they are placed in the Clinical Knowledge Manager (CKM), which is a repository of archetypes where they can be discussed, analyzed, reviewed, approved for publication, translated, etc. On top of that come templates, which are supposed to instantiate the deliberately generalized and generalizable archetypes to particular contexts of use.
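For readers unfamiliar with ADL, the following is a deliberately simplified, hypothetical sketch of the shape an archetype definition takes; the node identifiers, structure, and elisions here are illustrative and do not reproduce the published openEHR blood pressure archetype:

```
archetype (adl_version=1.4)
    openEHR-EHR-OBSERVATION.blood_pressure.v1

concept
    [at0000]  -- Blood pressure

definition
    OBSERVATION[at0000] matches {      -- the clinical concept being modelled
        data matches {
            HISTORY[at0001] matches {  -- a history of measurement events
                ...                    -- constraints on systolic/diastolic values, units, etc.
            }
        }
    }

ontology
    term_definitions = <
        ["en"] = <
            items = <
                ["at0000"] = <text = <"Blood pressure">; description = <"...">>
            >
        >
    >
```

A template would then further constrain such an archetype (for example, fixing which optional nodes appear) to fit a particular form or system, which is what the article means by instantiating the generalized archetype to a context of use.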
Now, all of these are resources in the making. All these layers can have their own licensing and, maybe more importantly, their own interrelated action situations. How could this complexity be managed without undue reduction and simplification? How should these “crops” be studied? What should an archetype look like? Who decides what it should accomplish? Once again, because of the continual, in-becoming nature of knowledge-commons, we fall back to the true commons in openEHR, and in many other open source projects: processes of creation. The processes involved and the knowledge created are so entangled that it is difficult to distinguish the assemblage of actors from the processes they drive, processes that not only try to reshape the world but also arrive at a collective understanding of their own collective actions. Since there is no “when” bounding the creation of knowledge-commons to a specific, well-defined time, the next logical step is to study how these knowledge-commons are created, and what processes sustain them.
* The Sustainable Processes: Creating Abundant Commons
Sustainability in open source refers to the project’s ability to support itself over time (Chengalur-Smith et al., 2010). It has already been studied, especially through the lens of the community, free-riding, and project size (Lerner et al., 2000; 2006).
Recent efforts have looked at processes instead of static commons. Studies have shown that power relations are important in the process of contributing to the source code (Iivari, 2009; 2010). Also, values, culture, and organizational shifts have been identified as key issues in the adoption of open source into corporate processes (Lindman and Rajala, 2012; Shaikh and Cornford, 2009). Finally, technology has been seen to play a role in the way it enables collaboration at a distributed scale (Laurent and Cleland-Huang, 2009; Noll, 2010; Scacchi, 2009).
It is difficult to impose a taxonomy on the current study of open source precisely because of the evolving understanding of its complexity. Open source is a negotiated concept; the processes of creating open source software can be competitive and conflictual, and they can disrupt technologies beyond expert walls. It is becoming an abstract political machine, shifting itself to accommodate new ideas, pushing for changes (Deleuze and Guattari, 1987). Open source becomes a way to diffuse innovation and to act upon it. When asked about the use of open source in developing software in a multiple-expert domain, an interviewee said:
“Well, you could argue that you don’t need open source to build that relationship, but the thing about it is that, if you want to build an ecosystem of clinicians and developers all collaborating around the same software, let’s say around the NHS [National Health Service], then it needs to be generally open source, or at least the clinical models need to be open source.”
Open source becomes an enabler and an enactor of ecosystems. Through its links to its rooted academic history, to the hacker folklore which is slowly dissipating, to the corporate worlds it is entering, to the legal definitions that impose obligations and grant rewards, and so many other links, it creates a viable alternative to the development of worldly projections. Some would say that it has created itself as an obligatory passage point, an indispensable question that has to be asked when thinking about developing a new software project (Callon et al., 2009). “Should we go open source?” is implanted in practice, just as the software engineering norm “don’t reinvent the wheel” has been impressed into every computer scientist. Another interviewee said:
“And the rigour bit, for me, in the scientific world, most of physics couldn’t exist without open source software, because that’s the way people, you know, software is extraordinarily complex, unless you’ve actually got it in your hand and you work with it, you don’t really know. And there’s so much software around in the world that nobody really knows that… And it gets sold for millions and millions of pounds and then it turns out to be not what people wanted. We really need the practitioners in the field to be much more grounded.”
This quotation emphasizes another aspect of open source development processes: they are sustainable through the scientific, rigorous, transparent values they enact by publishing the artefacts. This definition encapsulates the requirements engineers’ philosopher’s stone: how to build the correct system over and above building it correctly (van Lamsweerde, 2009; Letier and van Lamsweerde, 2004). In other words, how can the proper processes be employed that will ensure that a useful system is built? This brings the discussion full-circle back to the governing of open source.