Essay of the Day: Stigmergic Organization and the Economics of Information

* Essay: Why is Open Access Development so Successful? Stigmergic organization and the economics of information. By Francis Heylighen.

(Source: B. Lutterbeck, M. Bärwolff & R. A. Gehring (eds.), Open Source Jahrbuch 2007, Lehmanns Media, 2007)

The abstract:

“The explosive development of “free” or “open source” information goods contravenes the conventional wisdom that markets and commercial organizations are necessary to efficiently supply products. This paper proposes a theoretical explanation for this phenomenon, using concepts from economics and theories of self-organization. Once available on the Internet, information is intrinsically not a scarce good, as it can be replicated virtually without cost.

Moreover, freely distributing information is profitable to its creator, since it improves the quality of the information, and enhances the creator’s reputation.

This provides a sufficient incentive for people to contribute to open access projects. Unlike traditional organizations, open access communities are open, distributed and self-organizing. Coordination is achieved through stigmergy: listings of “work-in-progress” direct potential contributors to the tasks where their contribution is most likely to be fruitful. This obviates the need both for centralized planning and for the “invisible hand” of the market.”

An excerpt on self-organization through stigmergy:

Francis Heylighen:

“To understand the distributed organization that characterizes open access development, we can draw inspiration from recent theories of self-organization (e.g. Heylighen & Gershenson, 2003) and complex adaptive systems (e.g. Muffatto & Faldani, 2003). A particularly relevant idea, used in the modelling of collective intelligence (Heylighen, 1999) and the simulation of swarming behavior (Bonabeau, Dorigo & Theraulaz, 1999), is the concept of stigmergy (Susi & Ziemke, 2001). A process is stigmergic if the work (“ergon” in Greek) done by one agent provides a stimulus (“stigma”) that entices other agents to continue the job.

This concept was initially proposed to explain how a “bazaar” of dumb, uncoordinated termites manages to build their complex, “cathedral-like” termite hills (Grassé, 1959). The basic idea is that a termite initially drops a little bit of mud in a random place, but that the heaps that are formed in this way stimulate other termites to add to them (rather than start a heap of their own), thus making them grow higher until they touch other similarly constructed columns. The termites do not communicate about who is to do what, how, or when. Their only communication is indirect: the partially executed work of the ones provides information to the others about where to make their own contribution. In this way, there is no need for a centrally controlled plan, workflow, or division of labor.
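
To make this dynamic concrete, here is a minimal Python sketch (not from the essay) of stigmergic heap-building: each new drop of material lands at a site with probability proportional to what has already accumulated there, so a few heaps come to dominate. The site count, baseline weight and number of drops are arbitrary illustrative parameters.

```python
# Minimal sketch of stigmergic heap-building (illustrative parameters only):
# agents drop material at sites with probability proportional to what is
# already there, so a few heaps grow tall while most sites stay nearly empty.
import random

def simulate_heaps(num_sites=50, num_drops=2000, baseline=1.0, seed=42):
    """Each drop picks a site with weight (baseline + current heap height)."""
    random.seed(seed)
    heights = [0] * num_sites
    for _ in range(num_drops):
        weights = [baseline + h for h in heights]
        site = random.choices(range(num_sites), weights=weights, k=1)[0]
        heights[site] += 1
    return heights

if __name__ == "__main__":
    final = simulate_heaps()
    print("Five tallest heaps:", sorted(final, reverse=True)[:5])
    print("Empty sites:", sum(1 for h in final if h == 0))
```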

While people are of course much more intelligent than social insects and do communicate, open access development uses essentially the same stigmergic mechanism (cf. Elliott, 2006, and the simulation by Robles et al., 2005): any new or revised document or software component uploaded to the site of a community is immediately scrutinized by the members of the community who are interested in using it. When one of them discovers a shortcoming, such as a bug, error or missing functionality, that member will be inclined to either solve the problem him/herself, or at least point it out to the rest of the community, where it may again entice someone else to take up the problem.

Like stigmergic organization in insects (Bonabeau et al., 1999), the process is self-reinforcing or autocatalytic (Heylighen, 1999; Heylighen & Gershenson, 2003): the more high quality material is already available on the community site, the more people will be drawn to check it out, and thus the more people are available to improve it further. Thus, open access can profit from a positive feedback cycle that boosts successful projects. This explains the explosive growth of systems such as Wikipedia or Linux. (A possible disadvantage of such “rich get richer” dynamics is that equally valuable, competing projects, because of random fluctuations or sequence effects, may fail to get the critical mass necessary to “take off”.)

While most large-scale open access projects (such as Linux) have one or a few central figures (such as Linus Torvalds) that determine the general direction in which the project is headed, this control is much less strict than in traditional hierarchical organizations. Most of the work is typically performed in a distributed, self-organizing way. The lack of precise planning is more than compensated for by the fact that information about the present state of the project is completely and freely available, allowing anyone to contribute to anything at any moment. This provides for a much larger diversity of perspectives and experiences that are applied to finding and tackling problems, resulting in what Raymond (1999) calls Linus’ Law: “given enough eyeballs, all bugs are shallow”. Moreover, since contributors select the tasks they work on themselves, they tend to be more interested, motivated and knowledgeable about these tasks.

In this way, open access development fully profits from the evolutionary dynamic of variation, recombination and selection (van Wendel de Joode, 2004; Muffatto & Faldani, 2003). Openness attracts a greater number and diversity of participants, increasing the likelihood of cross-fertilization of their ideas into new combinations.

This strongly accelerates the variation that is necessary to produce evolutionary novelty. This large and diverse community moreover enhances selection, since the new ideas will be tested in many more different circumstances, thus systematically eliminating the errors and weaknesses that might not have shown up in a more homogeneous environment. All in all, this leads to greater flexibility, innovation, and reliability.

Stigmergy is more than blind variation and natural selection, though: the visible traces of the work performed previously function as a mediator system (Heylighen, 2007), storing and (indirectly) communicating information for the community. In that way, the mediator coordinates further activity, directing it towards the tasks where it is most likely to be fruitful. This requires a shared workspace accessible to all contributors (similar to what in AI is called a “blackboard system”). This external memory registers which tasks have already been performed and what problems still need to be tackled. The Web has provided a very powerful such workspace, since it enables the storage and public sharing of any “work-in-progress” information product.
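
As a rough illustration of such a shared workspace, the following toy Python sketch models a blackboard-like registry of work-in-progress: every item is publicly readable, and anyone can post or complete one. The class and method names are invented for illustration, not drawn from the paper or from any real system.

```python
# Toy "blackboard" workspace: the work-in-progress is publicly readable and
# anyone can post or complete an item. All names here are invented for
# illustration; this is not an API from the paper or from any real project.

class Blackboard:
    def __init__(self):
        self.items = []  # each item: {"description": str, "done": bool}

    def post(self, description):
        """Leave a visible trace of work that still needs doing."""
        self.items.append({"description": description, "done": False})

    def open_items(self):
        """What any passing contributor sees as available work."""
        return [i["description"] for i in self.items if not i["done"]]

    def complete(self, description):
        """Mark a posted item as done, updating the shared state."""
        for item in self.items:
            if item["description"] == description:
                item["done"] = True

board = Blackboard()
board.post("section on indirect stigmergy still missing")
board.post("example figure needs a caption")
board.complete("example figure needs a caption")
print(board.open_items())  # the remaining trace directs the next contributor
```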

To better understand the methods used by open access communities, we need to further distinguish direct from indirect stigmergy. In direct stigmergy, as exemplified by the termite-hill building, it is the “work-in-progress” itself that directs subsequent contributions. Indirect stigmergy may be exemplified by the way ants create trails of pheromones that direct other ants to food sources. The trails are left as “side-effects” of the actual work being performed: finding and bringing food to the nest. Such specially created traces may be needed because the task—finding the proverbial “needle (food) in the haystack (surroundings)”—is too complex to be performed without detailed clues. Thus, “indirect stigmergy” uses an additional medium for information storage. Yet, the coordination achieved in this way still keeps the hallmarks of distributed self-organization: the information is addressed to no one in particular, and may or may not be picked up by a particular individual at a particular moment.

In open access development, indirect stigmergy can be recognized in forums where bugs or feature requests are posted. These forums are themselves not part of the information product being developed, but they are regularly consulted by the developers, thus attracting their attention to tasks that seem worth performing. The problem with such an additional medium is that it adds to the complexity of the (self-)organization, especially if there is a lot of potentially relevant information posted there, so that it becomes difficult to establish priorities. Here again we can learn a lesson from social insects. The pheromone trails left by ants undergo an efficient form of reinforcement learning (Heylighen, 1999): trails that lead to rich food sources will be used by many ants and thus amplified, while trails that lead to poor or empty sources will weaken and eventually disappear. Since ants preferentially follow strong trails, this mechanism ensures that the most important tasks or opportunities are tackled first.
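
The reinforcement-and-evaporation dynamic described here can be sketched in a few lines of Python. In this illustrative model, each open request is a “trail” whose strength decays every step and is boosted whenever someone acts on it, so stale items fade while active ones stand out; the decay and deposit values are assumptions, not parameters given in the essay.

```python
# Illustrative pheromone-style prioritization: every open request decays each
# step and is reinforced when someone acts on it. The evaporation and deposit
# values are arbitrary assumptions, not parameters given in the essay.

def update_trails(strengths, reinforced, evaporation=0.9, deposit=1.0):
    """Decay all trail strengths, then add 'pheromone' to the ones used."""
    updated = {name: s * evaporation for name, s in strengths.items()}
    for name in reinforced:
        updated[name] = updated.get(name, 0.0) + deposit
    return updated

trails = {"bug: crash on empty file": 2.0, "feature: dark mode": 2.0}
for _ in range(10):
    # only the bug report keeps attracting activity in this toy run
    trails = update_trails(trails, reinforced={"bug: crash on empty file"})

# stale requests have faded; active ones stand out at the top
for name, strength in sorted(trails.items(), key=lambda kv: -kv[1]):
    print(f"{strength:6.2f}  {name}")
```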

Applied to open access development, this means that we need adaptive mechanisms to make the most important requests stand out. An example of such a mechanism can be found in Wikipedia. When a contributor marks a word as a hyperlink, but there is no article discussing this concept yet, an empty page is created, inviting other contributors to fill in its content. This is direct stigmergy: whenever people look for that concept, they are directed immediately to the work that still needs to be done. But when there are many thousands of as yet incomplete entries, priorities must be established. Rather than having a central committee decide which entries are most important, Wikipedia implements a simple form of collective decision-making (cf. Heylighen, 1999): the entries that have the most hyperlinks pointing to them are listed first in an automatically generated list of “most wanted articles”.
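
A minimal sketch of the “most wanted articles” idea, assuming a toy link graph rather than Wikipedia’s actual data model: count how many existing pages link to each missing page, and list the most-linked missing pages first.

```python
# Sketch of the "most wanted articles" ranking, using a toy link graph rather
# than Wikipedia's real data model: count inbound links to pages that do not
# exist yet and list the most-linked missing pages first.
from collections import Counter

existing = {"Stigmergy", "Termite", "Self-organization"}
links = {  # page -> pages it links to
    "Stigmergy": ["Termite", "Pheromone", "Blackboard system"],
    "Termite": ["Pheromone", "Mound"],
    "Self-organization": ["Pheromone", "Blackboard system"],
}

most_wanted = Counter(
    target
    for targets in links.values()
    for target in targets
    if target not in existing
)
print(most_wanted.most_common())  # most-linked missing articles first
```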

Such a mechanism to display collective demand can be seen as the non-proprietary analog of the market. The price mechanism efficiently allocates resources to the production of those goods for which demand is highest, by offering the highest monetary rewards for them. Similarly, a “most wanted” ordering of requests offers the highest probability of recognition for the work performed, or of “good feelings” engendered by an altruistic deed. Such stigmergic prioritization is arguably even more efficient than a market, since there is no need for the complex and often irrational processes of buying, selling, bargaining and speculation that determine the eventual price of a commodity—leading to the typically chaotic movements of commodity prices on the stock exchange. Moreover, although price could be interpreted as a — very abstract — stigmergic signal, this variable is merely one-dimensional. On the other hand, open access tasks could be ranked on a website according to independent stigmergic criteria, such as urgency, difficulty, expected utility, required expertise, etc. In that way, potential contributors would be helped in finding the task that suits them best.”
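
The multi-criteria ranking suggested at the end of the excerpt could look roughly like the following Python sketch, where each task is scored as a weighted sum of several stigmergic signals. The signal names, weights and linear scoring rule are assumptions made for illustration, not a design given in the essay.

```python
# Sketch of ranking open tasks on several stigmergic signals at once instead of
# a single price-like scalar. The signal names, weights and linear scoring rule
# are assumptions made for illustration, not a design given in the essay.

def task_score(task, weights):
    """Weighted sum of the task's signals; higher means 'do this sooner'."""
    return sum(weight * task[signal] for signal, weight in weights.items())

tasks = [
    {"name": "security fix", "urgency": 0.9, "utility": 0.8, "difficulty": 0.3},
    {"name": "new icon set", "urgency": 0.2, "utility": 0.4, "difficulty": 0.1},
    {"name": "rewrite parser", "urgency": 0.4, "utility": 0.7, "difficulty": 0.9},
]
weights = {"urgency": 0.5, "utility": 0.4, "difficulty": -0.1}  # harder tasks rank slightly lower

for task in sorted(tasks, key=lambda t: task_score(t, weights), reverse=True):
    print(f"{task_score(task, weights):.2f}  {task['name']}")
```

Different contributors could of course apply their own weights, so the same shared listing would direct each of them to the task that suits them best.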

In Conclusion:

“Since the fall of the Soviet Union, the common assumption has been that markets, private property rights, and commercial organizations are necessary to efficiently produce and distribute products. In addition to the apparent failure of communism, this view has been supported by two centuries of economic thought developing sophisticated models that purport to show that the market is the optimal way to allocate resources. In the last few years, however, the collective development of “open access” information products on the Web has emerged as a salient exception to this conventional wisdom. The present paper has proposed a theoretical justification for this phenomenon.

First I have noted that the basic economic assumptions of rivalry and excludability are not applicable to information shared over the Internet. Once created, information is intrinsically not a scarce good, and therefore there is no a priori reason to restrict access to it. On the contrary, freely distributing information is likely to profit its creator, since it helps to improve the quality of the information, and to enhance the creator’s expertise and reputation. Moreover, open access obviously profits everybody else, and in particular those who otherwise would be too poor to pay for the information.

I then used the paradigm of self-organization through stigmergy to explain how open access development can be efficiently coordinated. Thanks to websites listing “work-in-progress”, people willing to contribute to the collective development of an information product are efficiently directed to the tasks where their contribution is most likely to be fruitful. This obviates the need both for centralized planning and control, and for the “invisible hand” of the market matching supply to demand.

These innovations appear fundamental enough to revolutionize our socio-economic system (cf. Weber, 2004), offering high hopes for the future, e.g. in stimulating innovation, education, democratization, and economic development. While open-access distribution is not applicable to material resources, their cost as a fraction of the total economic cost of any good or service is becoming progressively smaller in a society that is ever more heavily dependent on information. Therefore, it could be theoretically envisaged that most economic value would eventually be produced under an open-access system. To make such a scenario less speculative, we will first need to investigate the complex issue of information production that requires considerable material investment, such as pharmaceutical research with its expensive equipment, where patents and other ways of “closing off” information are rife. The issue becomes less daunting, though, if we remember that this kind of research mostly builds on publicly funded (and thus normally open access) work.

To be able to fully compete with the established market-based system, moreover, the still very young open access movement will need to further learn from its experiences, addressing its remaining weaknesses and building further on its strengths. This will in particular require developing better standards and rules, and more powerful software solutions for harnessing stigmergy and allocating recognition and feedback—the main drivers behind the success of open access according to the present analysis.

For example, in the Wikipedia system—which otherwise keeps a very detailed track of all changes made to all documents by all users—it is impossible at present to get an overview of how much a particular user has contributed to the system. Given Wikipedia’s versioning system, it should be possible to measure how much of the text entered by a given user survives in the present state of the encyclopedia. This would provide a useful measure of both the quantity and the quality of that author’s contributions, thus establishing a benchmark by which to measure expertise and activity level. Similarly, more advanced algorithms (e.g. inspired by Google’s PageRank or Hebbian learning, Heylighen, 1999) could be implemented to organize and prioritize tasks. Such intelligent methods for coordinating distributed information production could turn the World-Wide Web from merely a collective memory or shared workspace into a true “Global Brain” for humanity that would be able to efficiently solve any problem, however complex (Heylighen, 1999, 2004).”
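
The “surviving text” measure Heylighen calls for could be approximated along the following lines; the sketch uses Python’s difflib to count how many of a contributor’s words are still present in the current version. This is a stand-in for illustration, not how Wikipedia or any existing tool actually computes contribution survival.

```python
# Rough approximation of the "surviving text" measure: the fraction of a
# contributor's words still present in the current version of the article.
# Uses difflib as a stand-in; this is not how Wikipedia actually measures it.
import difflib

def survival_ratio(contributed: str, current: str) -> float:
    """Share of the contributed tokens that survive in the current text."""
    old_tokens, new_tokens = contributed.split(), current.split()
    matcher = difflib.SequenceMatcher(None, old_tokens, new_tokens)
    surviving = sum(block.size for block in matcher.get_matching_blocks())
    return surviving / len(old_tokens) if old_tokens else 0.0

contributed = "Stigmergy is indirect coordination through traces left in the environment."
current = "Stigmergy is a form of indirect coordination through traces left in a shared environment."
print(f"{survival_ratio(contributed, current):.0%} of the contributed words survive")
```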
