Germ Form Theory (2): what stage for peer production?

Yesterday, we explained the five-step theory of social change, based on an extended essay by Stefan Merten and Stefan Meretz, the founders of the originally German and now international Oekonux movement.

Today, we excerpt the part of the essay which tries to determine which step has already been reached by peer production (and its particular expression in free software).

Their conclusion is that we are roughly midway: in the midst of the third of the five steps necessary for the full social transformation of one type of society into another.

Stefan Merten and Stefan Meretz:

Emergence step: Free Software in the 1980s/1990s

Let’s look at how Free Software came about historically. Indeed it can be seen as a germ form development process in the realm of software production.

Free Software started out in the 1980s. A good part of the initial impetus came from Richard M. Stallman, who started the GNU project in 1983 and founded the Free Software Foundation in 1985 [Wikipedia-GNU]. Stallman had worked at MIT, where he was used to a very open style of developing software. Before the 1980s, software was generally considered an add-on to the expensive hardware, rather like a manual for your new TV. People who worked with software were used to a free and unhindered flow of software artefacts and to adapting these artefacts to their own needs. Stallman loves to tell the story of how he was suddenly confronted with a new era in which software became a commodity [Stallman-Printer]. He personally ran into the limitation that, to be a commodity, software must be made scarce and secret, the exact opposite of the practice he was used to. The end of that story was that Stallman founded the GNU project and created the first Free Software tools [GNU-Manifesto].

As is typical for a germ form during the emergence step, in the 1980s the phenomenon of Free Software was hardly even discernible. The term Open Source, which may be better known than Free Software today, was coined some 15 years later, and mainstream media did not notice Free Software at all. Nevertheless, experts in the field became acquainted with Free Software through the communication channels and forums of the slowly emerging Internet. Thus, the open development process and the sharing of Free Software led to the formation of communities who wrote Free Software and made it available by the means of their time.

Many experts in the field of software development experimented with the Free Software that was available. Let me (Stefan Merten) give you a personal example. During my first job I needed a C compiler that ran on an SCO operating system. SCO delivered a C compiler as part of the operating system. It did, however, have some really bad bugs that introduced errors into your programs. Proper programming is a difficult enough task, so the last thing you need is buggy tools that add to the errors you yourself are making. In this desperate situation, I tried out the GNU C Compiler, one of the initial GNU products. I was instantly thrilled, because it worked out of the box and, most importantly, it had no errors whatsoever. Like me, many experts were quickly convinced by the Free Software products available at that time.

The next important step was taken in the early 1990s, when Linus Torvalds created the Linux kernel. This kernel, together with the large body of GNU software already at hand, led to a complete operating system based exclusively on Free Software [Torvalds-LinuxAnnouncement]. For the first time in history, not only could you run Free programs on a proprietary platform; you had a system in which everything, from the very basis to the most sophisticated application, was Free Software. In contrast to the earlier Free Software projects, the Linux project also employed a different development style, first described by Eric S. Raymond [Raymond-CathederalBazaar]. It was even more open in character than the one employed by the GNU project, and, together with the quickly growing availability of the Internet, it brought the final breakthrough for Free Software.

Though the movement gained momentum during the 1990s, it was still difficult to discern the phenomenon of Free Software. Among experts, however, Free Software was by then already well known. Quite often, technical staff in big companies began to run Linux boxes while official management still swore by Microsoft and other proprietary software vendors. Then, in 1998, when the Open Source movement took off, those same managers proudly announced that they already had a Linux box running. Ironically enough, only a few months before, these same people didn't even know that Free Software existed.

While the historical development of Free Software is interesting in itself, in our context it is far more important to see that, right from the early beginnings, Free Software has shown the two key features which we consider crucial for a contemporary germ form and which are deeply embedded in its mode of production. The first key feature is the non-alienated nature of the production process. When Richard M. Stallman started the GNU project, he was keen on creating the best software thinkable in order to replace the proprietary variants of the Unix operating system which existed at that time. A motivation of this kind is fundamentally different from the motivation to sell a commodity. When you want to sell a commodity, it completely suffices to create something which is sellable. Absolute quality is not a goal [12].

This is different when interested experts work on a project. They contribute because they are able to employ the best of their abilities and because they are proud of doing so. They contribute in order to create the very best possible. In other words: when you take the effort to produce software on a voluntary basis, you are interested in the use value of the resulting product. As mentioned above, this non-alienated relation between the effort taken and the result of that effort is an important aspect of what we call Selbstentfaltung.

The second key feature which we have been seeing all along is the unlimitedness of the project. This unlimitedness comes in two forms. Free Software projects are unlimited externally in that everyone can use and share their results. However, Free Software projects are also unlimited internally in that anyone interested can contribute to the project. That, in fact, really takes place. Every user of a piece of Free Software can talk to its developers and contribute a feature request, a bug report, or even pieces of code. In fact, in many Free Software projects there is no clear division between inside and outside. Instead there is a continuum with respect to how far a contributor will or will not be involved in a certain project.

While the internal unlimitedness is usually achieved by open mailing lists, wikis, or forums, where users can present questions and contributions, the external unlimitedness is warranted by Free Software licenses. Free Software licenses are the legal means we have to embed the phenomenon of Free Software into a predominantly capitalist environment [Merten-Licenses]. The GNU General Public License (GPL), one of the very early achievements of the Free Software Foundation, is today the most commonly used license. Amusingly enough, the GPL is an ingenious hack which uses the logic of copyright to turn it against the idea of copyright: whereas normal copyright restricts the use of the covered material, the GPL grants you extensive rights.

Crisis step: inferior quality of proprietary software

While Free Software was slowly developing in various niches, a major crisis was evolving in the proprietary software world. This crisis mainly spread within the area of medium-sized server operating systems and the associated software. For the relatively rare and expensive mainframes, the so-called »big irons«, special operating systems were available, and custom software was developed in-house for them. On the other hand, after the invention of the IBM PC, there was a tremendous surge in the field of small personal computers, which mainly ran Microsoft operating systems and for which a great variety of commercial off-the-shelf software was available.

For the relatively numerous medium-sized workstations, Unix was the perfect operating system. Among experts in the field, Unix has to this day been considered an operating system with a few, but ingenious, concepts. The most ingenious characteristic is probably the building-block system which allows you to build complex functionality from basic building blocks. Nevertheless, after a long open history, Unix eventually turned into a proprietary system, with different vendors having developed different versions of it. Here the problem arose that these versions were incompatible, which is understandable when you keep in mind that each and every vendor strives to sell a set of unique features that differ decisively from those of its competitors.
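To make the building-block idea concrete, here is a small illustrative sketch of our own (not from the original essay): a word-frequency counter assembled entirely from generic, single-purpose Unix tools connected by pipes.

```shell
# The Unix "building block" idea: each tool does one small job well,
# and the pipe composes them into larger functionality.
#   tr      - split the text into one word per line
#   sort    - bring identical words together
#   uniq -c - count adjacent duplicates
#   sort -rn - rank by count, highest first
printf 'free software is free as in freedom\n' |
  tr -cs 'A-Za-z' '\n' |
  sort |
  uniq -c |
  sort -rn
```

None of these tools knows anything about word frequencies; the first line of the ranked output shows "free" with a count of 2, a capability that emerges purely from composition.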

It is commonplace that in big, coupled systems, network effects are tremendously important for market growth. This is especially true for software, where a product is the more useful the more computers you can run it on and the more users apply it. To gain network effects, then, you need standards that unify different products to the extent that they are interoperable. Standards, however, come in two flavours: on the one hand, there are official standards defined by a more or less powerful standardisation institution. These standards are open because they are usually readily available and can be followed by everyone. On the other hand, there are monopolies that define standards rather implicitly. Proprietary monopolies are not open in that they embody only a certain set of practices. These practices are not reliably documented; they can change in unexpected ways and at any time, making it risky to rely on them at all.
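The intuition that a product becomes more useful the more users apply it is often formalized, as a rough illustration (our addition, commonly known as Metcalfe's law, not part of the original essay), by counting the connections a network of $n$ users makes possible:

```latex
\text{potential connections} \;=\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\sim\; \mathcal{O}(n^2)
```

The potential value of a network thus grows roughly with the square of its user base, while its cost typically grows only linearly; this quadratic payoff is what makes control over a de facto standard so valuable.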

Microsoft serves as a good example of how network effects are attained by monopolies. Even though the quality of Microsoft products has often been mediocre in comparison to competing products, Microsoft has managed to leverage the network effects that were set off with MS-DOS.

For the medium-sized workstations, there was neither a monopoly nor a common general standard that could have unified the technical basis of the different vendors sufficiently. There were even attempts to create such a standard, but these never took off. Eventually, the Unix market died off. As a result, in the 1980s and 1990s many experts in the field were worried that Microsoft would prevail by extending its monopoly to those medium-sized servers.

Expansion step: Free Software today

Parallel to these developments, however, the germ form of Free Software had also grown enough to restore hope. During the 1990s, it became more and more common to use GNU software and the GNU/Linux operating system on medium-sized servers. In fact, the Internet today is inconceivable without Free Software. One of the major success stories in this field is the Apache web server:

Apache has been the most popular web server on the Internet since April 1996.

—Apache home page

Side by side with the World Wide Web, e-mail is the other most important Internet service firmly based on Free Software. In addition, the software running the basic network infrastructure is, for the most part, also Free Software.

So when we regard the ongoing expansion of the use of Free Software on the server side, it seems justified to assume that Free Software has the potential to win the competition as far as servers are concerned. Even though this view enjoys large-scale endorsement today, the consequences of this development can hardly be overestimated: a new mode of production embodied by Free Software is potentially able to overcome the traditional mode of production embodied by capitalism in one of the most important and advanced fields of our time! In fact, this might be the only example of its kind since the shift from feudalism to capitalism.

Even though the server side may be considered won, the final frontier for Free Software is the desktop. As mentioned above, Microsoft has been able to leverage network effects and still holds a near-monopoly on desktop computers. However, our hopes are rising that step by step this monopoly will also be overcome by Free Software. The most interesting development during the past years is probably the Ubuntu software distribution bringing Free Software to an ever greater number of desktops. Microsoft, one of the richest companies on earth, will probably have to suffer the greatest losses if Free Software does continue to gain momentum. So it is interesting to see the reactions of this multi-national company. The history of Microsoft’s reactions can be summarised well by a quote from a well known non-violent revolutionary:

First they ignore you, then they ridicule you, then they fight you, then you win.

—Mahatma Gandhi

For a long time, at least officially, Microsoft ignored Free Software. Internally, however, studies were conducted on that new phenomenon and on what it could mean for Microsoft [Raymond-Halloween]. Then, for a short period, Microsoft publicly ridiculed Free Software. Finally, Microsoft began to fight Free Software, in many fields. It started, for instance, by comparing Free Software with cancer [Microsoft-Cancer]. Then, a great number of times, Microsoft did whatever it could to keep the governments of developing nations from adopting a Free Software strategy [Microsoft-ThirdWorld]. Today, Microsoft is beginning to embrace companies earning money by selling services associated with Free Software [Microsoft-Novell].

Free Software has been declared dead and gone time and again during the past 25 years; nevertheless, it is still alive and doing very well. According to our own analysis, the very reason for this is that a number of fundamental principles of Free Software, namely the possibility for Selbstentfaltung and the unlimitedness, cannot be copied by the capitalist logic. Capitalism cannot cope with unlimitedness because you need scarcity to sell a commodity. Copyright is a way to make information goods scarce and thus subject to commodification. Copyleft turns this around and destroys scarcity. Selbstentfaltung, on the other hand, is the opposite of alienation. The money system of capitalism, however, is built upon one of the most massive alienations mankind has ever seen, and alienation destroys Selbstentfaltung, at least to some extent. If the reasons for the success of Free Software do in fact lie in Selbstentfaltung and unlimitedness, then capitalism will not be able to cope with this unless it relinquishes its fundamental positions. In other words: capitalism is not able to absorb the principles underlying the success of peer production.

Today: the expansion step has been reached

In the field of software, it is safe to assume that, according to germ form theory, the expansion step has been reached. Today, Free Software is an important aspect of development within the prevailing old form. For other fields this cannot be assumed with the same certainty; but, as mentioned, we have found a number of promising examples which point in the right direction.

Let us keep in mind that at this stage, approximately in the middle of a long revolutionary era, according to germ form theory, we are witnessing the onset of a new mode of production with a new logic replacing the old one, step by step. Presumably, a major change in the mode of production amounts to a change of paradigms, with an extremely deep impact on the further course of human history. Therefore, at this point in the middle of the process, nobody can be expected to predict its consequences or its final result. In addition, in the expansion step, the old and the new logic are both strong; thus, contradictions are not an exception or an accident, but a logical necessity.

When looking at Free Software, for instance, people are often puzzled by the fact that Free Software developers still need a job and money to make a living. Well yes, of course, that’s true. But that’s not a problem. Free Software developers, to some degree, already live in both of these worlds. At present, the process of overcoming the old structures has not yet proceeded far enough in order for us to be able to rely on the new forms completely. But this type of contradiction does not imply that it is essentially impossible for a new logic to overcome an old one–all it needs is time and effort.

We would like to emphasise again that what we see today is not the final stage. Remember: we are in the third of five steps. When capitalism started its expansion step, nobody was able to envision a concrete capitalist world 300 years later. Nonetheless, today we are part of it, and we now have a hard time imagining a world based on new and unknown foundations such as peer production. This is especially true for the field of material production. Today it is hard to see how material production can be organised according to the logic of information goods. Indeed, we see a big difference between digitised information goods, which are more or less non-rival by virtue of the Internet, and material products, where a single instance of a product is rival.

Looking back in history gives us a hint. The production of food in feudal times was also organised according to a completely non-capitalist logic. Yet food production, step by step, became subject to capitalism and has nowadays reached a stage where it is simply an annex to industrial and capitalist production. Thus, it is perfectly possible that in a dominance or restructuring step, the problem of how to embed material production into the peer production mode will be solved by means as yet inconceivable to us.

4 Comments

  1. Pingback: Boycott Novell » Links 27/11/2008: GNU/Linux Climbs to Virtually 94% Market Share in Supercomputers, KDE 4.2 @ Beta

  2. Stephan Beal

    A truly interesting article. The only point i’d like to see elaborated is:

    “But this type of contradiction does not imply that it is essentially impossible for a new logic to overcome an old one–all it needs is time and effort.”

    i (respectfully) disagree with those bits (or, more precisely, i can’t *conceive* of how this particular new model can completely supplant the old one).

  3. Stefan Meretz


    i can’t *conceive* of how this particular new model can completely supplant the old one

    This is a good question! We deal with it in the first part of this paper in a general sense.

    Of course, the general scheme of a five-step transformation does not answer how this transformation will happen. We, the two authors of this text, agree that it is possible to supplant the old model with a new one, and that the new one generalizes the principles of Free Software. But we see different paths this generalization process may take.

    So, speaking for myself, I can imagine that the new model will be a peer economy, and that the transition to a peer economy (especially the generalization of the principles of immaterial peer production into the physical world) can be described as Christian Siefkes does in his book “From Exchange to Contribution”. An introduction can be read on the Keimform blog, too.

    I am not sure if this answers your question…

  4. Stephan Beal

    That’s close enough to an answer – i understand there is only theory on the subject, and that few hard observations can be made. Thank you for the additional links!
