The post Eleftherios Diakomichalis on Oscoin: A P2P Alternative for OSS Collaboration appeared first on P2P Foundation.
When we started Oscoin, our motivation was guided by the observation that crypto-currencies could enable a new form of community-owned and operated network. The invention of digital scarcity1 made it possible to economically incentivize and remunerate network participants for their service in a simple, transparent way, without mediation by a third party.
It was only natural for us to imagine a community of open-source developers, incentivized by a native currency distributed to the projects most valued by the community, and traded between collaborators, users and maintainers of these projects. This ecosystem, we thought, could provide a solution to the problem of open-source sustainability2, while also freeing the community from centralization risks associated with platforms such as GitHub and GitLab.
Well aware of the fraud and confusion around decentralization, we saw potential in crypto-currencies to address socio-economic problems by allowing contributors to be rewarded in a currency that also confers ownership of the network. By consolidating equity with currency, we create a fairer distribution mechanism for long-term network sustainability. In such an economy there are no second-class citizens3: everyone is aligned around a single token, and everyone wins and loses together.
An uncomfortable truth about our society is that apparent convenience is chosen over everything else. Centralized platforms offer this convenience seemingly for “free”, but since the explosion of the Internet in the 1990s we can observe how this pans out: critical social infrastructure is taken over by corporate interests as communities move from one centralized platform to another. Our belief is that logical centralization4 is necessary for communities to exist, but economic centralization is not.
The post Proposal: the Percloud, a permanent/personal cloud that is a REALLY usable, all-in-one alternative to Facebook, Gmail, Flickr, Dropbox… appeared first on P2P Foundation.
Important update, 2018/02/06: a new version of the proposal, completely rewritten to take into account recent developments and feedback, is HERE.
A percloud (permanent/personal cloud) is my own vision of a “REALLY usable, all-in-one alternative to Facebook, Gmail, Flickr, Dropbox…”.
I made the first percloud proposal in 2013. Very soon, however, I “froze it”, for lack of time and resources, and did no real work on it, for reasons I have explained in detail elsewhere. Then, at the beginning of 2017, several things happened, including but not limited to:
What I mean with the last bullet is that, thanks to projects like Sandstorm, Cloudron and several others, building what I call a “percloud” should, indeed, be easier than in 2013. “Easier” does not mean “easy” though, and I have realized several things.
First, integrating and polishing the several software components until they are actually usable by non-geeks is still not something one could do in their spare time (certainly not me, anyway). Second, personal clouds will be easily adopted by non-geeks ONLY if they are offered as a managed service: this means there must be web hosting providers that offer really turn-key perclouds.
Third, a real pilot/field trial of the percloud is needed. On one hand, we need many ordinary Internet users to use the package and tell us geeks whether it works for them or not. On the other, we need to give web hosting providers some real-world usage data for these personal clouds, so they can figure out how much it would cost to offer them as a service.
Taken together, all these things have led me to put together the proposal below.
Important: as I said, I’m already discussing similar cloud platforms with several groups. But I do not see this proposal in competition with the others. This is all Free as in Freedom software, and the more is shared and reused, the better! Much of what is proposed below may be directly reused in those projects, or similar ones, if not co-developed together.
Now, please look at the proposal, share it as much as you can, give feedback and, since this page may be updated often in the next weeks, follow me on Twitter to know when that happens. THANKS!
Purpose: personal, permanent, basic, online web presence and communication that can replace {facebook+gmail+dropbox} today. Very little or nothing more. The target user is the average user of Facebook, Gmail, Instagram, Dropbox, Google Drive and similar services, who seldom, if ever, visits the rest of the Web. The goal is to make it possible for these people to get outside today’s walled gardens, as soon as possible. Once that happens, it will be much easier to move the same people to more advanced platforms. Advanced users for whom this service is too little/too limited still need something like this for all their non-geek contacts, if they want their communications to stay private.
(regardless of which software implements them…)
ONLY the very basic ones, that everybody would surely need, e.g.:
email, blogging, calendar and address book, basic social networking, online bookmarks, save web pages to read them later, online file storage (personal files, pictures galleries).
The several components of a personal cloud as proposed here would share user authentication, and communicate with each other, as smoothly as possible. However, they cannot have a completely homogeneous look and feel like, say, the several features of a Facebook account. Such an integration is simply outside the scope of this proposal, because the only (but crucial) purpose of the percloud is to test and offer something actually usable, as soon as possible: see the “we need it SOON” part of this post, which is even more valid now than it was in 2014, to know why.
As far as “real time interactivity” goes, the percloud must offer federation, that is, let “friends” who own different perclouds see what each other has published, comment on it, get notifications, chat, and so on. However, percloud-based social networking does not even try to achieve the same numbers and levels of interactions and notifications of Facebook or similar platforms. This is a feature, not a bug. Facebook bombards people with real-time notifications (“Jim tagged you”, “3 years ago you posted this”…) because it exists to make people stay inside Facebook as much as possible. A percloud, instead, exists to let you interact with your contacts when you need or feel like it. It does not need to be so invasive and stressful.
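The pull-based federation described above can be sketched with a toy data model. This is purely illustrative: the `Percloud` and `Post` classes below are hypothetical names, not part of any existing percloud software, and a real implementation would exchange public feeds over the network rather than share objects in memory. The point is only the design choice: a timeline is fetched on demand from friends, instead of notifications being pushed at the user.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str

@dataclass
class Percloud:
    owner: str
    friends: list = field(default_factory=list)  # other Percloud instances
    posts: list = field(default_factory=list)

    def publish(self, text):
        # A post lives on its author's own percloud, not on a central server.
        self.posts.append(Post(self.owner, text))

    def timeline(self):
        # Pull model: gather friends' public posts when the user asks,
        # rather than being pushed real-time notifications.
        return [p for friend in self.friends for p in friend.posts]

alice, bob = Percloud("alice"), Percloud("bob")
alice.friends.append(bob)
bob.publish("Hello from my percloud")
print([p.text for p in alice.timeline()])  # ['Hello from my percloud']
```

Note that each post stays under its author's control; “unfriending” someone simply means no longer pulling from their cloud.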
The contacts and discussions I had at the beginning of 2017 convinced me that a percloud available as soon as possible may still have a lot of value. The same activities also showed me that it should be done quite differently than what I imagined 4 years ago.
In order to build a percloud and test it “in the field”, together with the Cloudron developers, it is necessary to have sponsors for:
* adding the missing parts
* integrating and documenting everything
* CRUCIAL: deploying and managing a “large” scale field test/pilot in which e.g. 1000 people are given one percloud for free, for 12 months, in exchange for giving feedback on usability, etc., and allowing basic monitoring of percloud usage (e.g. number of posts and visitors per month). Without this, i.e. without knowing for sure how the actual target users react to the percloud, we cannot make it succeed.
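The basic usage monitoring mentioned above (posts and visitors per percloud, per month) could be implemented as a simple aggregation over an event log. A minimal sketch follows; the event format and the `monthly_usage` function are hypothetical, invented only to illustrate the idea:

```python
from collections import defaultdict
from datetime import date

# Hypothetical pilot event log: (percloud_id, event_type, day).
events = [
    ("user1", "post", date(2018, 1, 3)),
    ("user1", "visit", date(2018, 1, 9)),
    ("user1", "post", date(2018, 2, 1)),
    ("user2", "visit", date(2018, 1, 15)),
]

def monthly_usage(events):
    """Count events per percloud, per calendar month, per event type."""
    stats = defaultdict(int)
    for cloud_id, kind, day in events:
        stats[(cloud_id, day.strftime("%Y-%m"), kind)] += 1
    return dict(stats)

print(monthly_usage(events)[("user1", "2018-01", "post")])  # 1
```

Aggregates at this granularity would be enough for hosting providers to estimate costs, without logging the content of anyone's posts.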
As far as hosting goes, the test perclouds may be hosted on Lightsail or similar platforms. But it would be great if community-oriented hosting or connectivity providers like guifi.net or mayfirst.org wanted to participate. If you know of any organization or group of organizations who may be interested in sponsoring such an activity, please let me know.
My proposal to integrate the percloud with the Eelo operating system for smartphones
Photo by kndynt2099
The post City of Barcelona Kicks Out Microsoft in Favor of Linux and Open Source appeared first on P2P Foundation.
Great news from Barcelona. This article was originally posted at ItsFoss.com:
A Spanish newspaper, El País, has reported that the City of Barcelona is in the process of migrating its computer system to Open Source technologies.
According to the news report, the city plans to first replace all its user applications with alternative open source applications. This will go on until the only remaining proprietary software is Windows, which will finally be replaced with a Linux distribution.
The City plans for 70% of its software budget to be invested in open source software in the coming year. The transition, according to Francesca Bria (Commissioner of Technology and Digital Innovation at the City Council), will be completed before the mandate of the present administration comes to an end in Spring 2019.
For this to be accomplished, the City of Barcelona will start outsourcing IT projects to local small and medium-sized enterprises. It will also take on 65 new developers to build software programs for its specific needs.
One of the major projects envisaged is the development of a digital market – an online platform – which small businesses will use to take part in public tenders.
The Linux distro to be used may be Ubuntu, as the City is already running a pilot project of 1,000 Ubuntu-based desktops. The news report also reveals that the Outlook mail client and Exchange Server will be replaced with Open-Xchange, while Firefox and LibreOffice will take the place of Internet Explorer and Microsoft Office.
With this move, Barcelona becomes the first municipality to join the European campaign “Public Money, Public Code”.
It is an initiative of the Free Software Foundation Europe and comes after an open letter advocating that publicly funded software should be free. This call has been supported by more than 15,000 individuals and more than 100 organizations. You can add your support as well: just sign the petition and voice your opinion for open source.
The move from Windows to open source software, according to Bria, promotes reuse, in the sense that programs developed here could be deployed in other municipalities in Spain or elsewhere around the world. Obviously, the migration also aims to avoid spending large amounts of money on proprietary software.
This is a battle already won and a win for the open source community. It was much needed, especially now that the city of Munich has decided to go back to Microsoft.
What is your take on the City of Barcelona going open source? Do you foresee other European cities following suit? Share your opinion with us in the comment section.
Source: Open Source Observatory
The post Personal data and commons: a mapping of current theories appeared first on P2P Foundation.
At the end of October, I wrote an article entitled “Evgeny Morozov and personal data as public domain”.
I got a lot of feedback, including from people who had never heard of these kinds of theories, which try to break with the individualistic or “personalist” approach underlying current law on the protection of personal data, in order to rethink its collective dimension.
Actually, there are many theories which, I think, can be divided into four groups, as I tried to show with the mindmap below (click image for full mindmap).
The four groups of theories are as follows (some make a direct link between personal data and commons, while others establish an indirect link):
I tried to make sub-divisions for each of those four theories and to give concrete examples. If you’d like more information, you’ll find links at the end of every “branch”.
I’m not saying this typology is perfect, but it has allowed me to better grasp the small differences between the various positions. It can be noted that some authors appear under several theories, which shows that these are compatible or complementary.
Personally, I tend to be part of the commoners’ family, as I have already said in this blog.
Feel free to comment if you think of more examples for this map or if you think this typology could be improved in any way.
Photo by Sarah @ pingsandneedles
The post SSE and open technologies: a synergy with great potential appeared first on P2P Foundation.
A technology is considered ‘open’ when it gives users the freedom (a) to study it, (b) to use it any way they wish, (c) to reproduce it and (d) to modify it according to their own needs. By contrast, closed technologies are those that restrict these freedoms, limiting users’ ability to study them, reproduce them and modify them so as to adapt them to their needs. That is precisely the advantage of open technologies from the perspective of end users: whereas closed technologies limit the spectrum of possibilities of what end users can do, open technologies ‘liberate’ them, giving them the possibility to tinker with them and evolve them. Paradoxically, despite the fact that open technologies are greatly appreciated by the global technological community because of the freedoms they offer, the technology products manufactured and marketed by the vast majority of technology firms around the world are ‘closed’. This, of course, does not happen for technological reasons: most of these companies supply their clients with closed machines and tools simply because in that way they can easily ‘lock’ them into a relationship of dependence.
It is not hard to see why this type of client-supplier relationship is particularly harmful for SSE organizations, as it implies their dependence on economic agents with diametrically opposed values and interests. To put it simply, it is very difficult, if not impossible, for SSE organizations to evolve into a vehicle for the transition to a truly social economy when they are dependent on the above economic agents for the tools they need on a daily basis. By contrast, open technologies may well be strategic resources for their autonomy and technological sovereignty. As Brazilian activist-philosopher Euclides Mance remarks, SSE organizations should turn to open design and free software tools (like the Linux operating system for computers) in order to extricate themselves from the relationship of dependence they have unwillingly developed with closed technology companies.
A documentary about Sarantaporo.gr
To find the tools which fit their needs and goals, SSE agents should turn to the ‘community’ itself: in most cases, the development and the transfer of open technology to the field of its application and end-use is carried out by collaborative technology projects with the primary aim of covering needs, rather than making a profit. A great example is that of Sarantaporo.gr in Greece, which has operated a modern wireless telecom infrastructure in the area of Sarantaporo since 2013, through which more than twenty villages have acquired access to the Internet. The contribution of those collaborative projects – and this is crucial – is not limited to high-technology products, but extends to all kinds of tools and machines. Characteristic examples are the Catalan Integral Cooperative in Catalonia and L’Atelier Paysan in France, which develop open-design agricultural tools geared to the particular needs of small producers in their regions.
The above examples show clearly the great potential of the SSE for positive change. However, for that to happen, it should have sufficient support structures for reinforcing its entrepreneurial action. That is where it is lagging behind. The SSE does not have structures analogous to the incubators for start-ups, the ‘accelerators’ and the liaison offices operating at most universities for the transfer of know-how to capitalist firms. Addressing this need is an area in which government policy could play a strategic role: in that regard, it is extremely positive that the recent action plan of the Greek Government tries to combat this problem through the development of more than a hundred cooperatively-organized support centres for the SSE across the entire country by 2023. That is precisely the kind of impetus that the SSE needs in order to grow. Of course, the capacity of these centres to support the SSE technologically will be of decisive importance: those are the structures that can and must make open technology accessible and user-friendly at local level, supplying the SSE organizations of their region with technology tools that promote the principles of the SSE and ensure its autonomy.
The post Time to Use Uber’s Weapons — Against Uber appeared first on P2P Foundation.
That’s fine, as far as it goes. Local medallion taxi monopolies need to be circumvented; they are a way of limiting competition and therefore guaranteeing monopoly profits to the companies that hold such licenses. Taxi licensing has been used for years to prevent ordinary people from using the spare capacity of their own cars to earn money transporting people. Without the pernicious effects of licensing, this activity would be a low-risk, low-overhead source of additional income from self-employment. Since offering transportation services is limited to licensed taxi companies, the only way to legally make money from driving people around is to be hired for wages on terms set by a capitalist employer.
The problem is that Uber is just another employer. It is simply another monopoly that needs to be circumvented. Uber uses “intellectual property” law to enforce ownership of its proprietary app, treat drivers as de facto employees, and skim off a large percentage of drivers’ earnings.
Ideally, this service should be able to interact with Uber and pirate its database of passengers and drivers. Ironically, Uber has inadvertently provided a model of how to circumvent their own monopoly.
An open-source, pirated version of Uber — owned and controlled by the users themselves — can use something like Greyball to evade enforcement of the local cab licensing monopolies like Uber does. It can also evade Uber’s own attempts to enforce ownership rights over its proprietary walled-garden platform.
Traditional taxicab medallion systems and Uber’s proprietary platform may compete with each other, but they are really just two different versions of the same thing: the use of state power to limit competition, guarantee profits, and force ordinary people to work for capitalists instead of themselves. It’s time to follow Uber’s example and use free software to destroy not only the antiquated taxi monopolies, but Uber’s as well.
The post Why producing in common is the starting point appeared first on P2P Foundation.
If we study the productive reality of the last thirty years, the changes turn out to be amazing. Among all of them, the most striking, the most unexpected, the one that most strongly contradicted the idea that the great economic systems of the twentieth century had about themselves, was not that the future would be full of computers, cellphones, and electronic equipment. That idea had already appeared in the ’40s and ’50s in science fiction and popular futurism. Nor was it globalization. The idea of a world united by free trade had been part of the Anglo-Saxon liberal ideal since the Victorian era, and from the foundation of the League of Nations, between the wars, it was part of the declared objectives of the great English-speaking powers.
No, the most shocking thing was the beginning of the end of business gigantism. From the State businesses of the USSR, to shipbuilding and metallurgy in Asturias, from Welsh mining to United Steel or the big automotive companies, the giants that had been the model of “enterprise” for the contemporary industrial world stopped hiring, collapsed, and fired tens of thousands of workers. It wasn’t just “de-localization”: the new Chinese or Vietnamese plants didn’t grow indefinitely, either. Markets like electronic products expanded year after year, and yet the personnel and capital employed globally were reduced. It was said that the new labor-intensive industries would be services, especially services connected with the new dominant form of capital: finance. But soon, banks and insurers that employed hundreds of thousands of people at the turn of the century started to reduce personnel. Today, the great banks are on track to reduce personnel by 30% over the next decade.
What had happened was, in fact, amazing. Following the Second World War, the United States had become the great provider to the world. When the war ended, US GDP was around half of global GDP. Benefiting from the European need for reconstruction and from peace treaties that, while not reaching the level of humiliation of Versailles, were openly asymmetrical, big Anglo-Saxon businesses globalized at great speed. It was a dream come true for their shareholders. It wasn’t at all strange to economists. At the time, if Marxists, Keynesians, and neoliberals agreed on anything, it was that businesses were able to, and in fact tended to, grow indefinitely. But by the ’50s, it was already obvious that something was going wrong. In the USSR and the countries of Eastern Europe, you could always blame the arbitrariness of the political system or the mistakes of the planners. But in the USA, it was different. And yet, there it was, present and invisible, like an elephant at a high-society gala. The first to realize it was an economist called Kenneth Boulding. Boulding noted that American businesses were reaching the limit of their scale, the point at which the inefficiencies of managing a larger size were no longer compensated for by the benefits of being bigger. Looking at the America of his time, he also warned that big businesses would try to compensate for their inefficiencies by using their weight in the market and in the State. This was long before the “too big to fail” of the 2008 crisis, but he could already tell that big businesses would not hesitate to use the power that comes from employing tens of thousands of people to get made-to-fit regulations and thinly-veiled monopolies. Business over-scaling, warned Boulding, could end up being a danger to the two main institutions of our society: state and market.
But what came next was even more surprising. Businesses bet on improving their systems and processes. They discovered that information was important—crucial—to avoid entering the phase in which inefficiencies grew exponentially. It also became obvious that a business size that was inefficient for one market became reasonably efficient for a larger market. As a result, they used all their power to promote a branch of technology that had shone only marginally in the great war: information. With this same objective, as soon as the opportunity arose, they pushed governments to reach commercial agreements and, above all, frameworks for the free movement of capital, since the industry that had scaled fastest and had begun to give alarming signs of inefficiency was finance. Meanwhile, the champion of business scale, the USSR and the whole Soviet bloc, collapsed, to the astonishment of the world, in an obvious demonstration that its operating life wasn’t infinite.
In the West, a true revolution was implemented to shore up the feasibility of large scales in crisis. The political result was called “neoliberalism.” It basically consisted of the extension of free-trade agreements, which expanded markets geographically; financial deregulation, which allowed the rise of “financialization,” an extension of markets over time; and a series of rents and monopolies for certain businesses, assured by regulations such as the hardening of so-called “intellectual property.”
The technological result was known as the “IT revolution,” which is to say, the revolution of information technology. But it came with a surprise, following a series of apparent coincidences in the search for ways beyond the limits on efficiency imposed by the rigid hierarchical systems inherited from the previous century. At the end of the ’60s, the structure of the networks that connected big university computers, financed by defense spending, took a distributed form. This would not have brought about a radical change if a new field, home computing, had not evolved towards small, completely autonomous computers, known as “PCs.” The result was the emergence in the ’90s of an immense capacity for distributed and interconnected calculation outside the fabric of business and government: the Internet.
The Internet brought profound changes in the division of labor, which overlapped with the ongoing reduction of optimal scales, and changed the social results expected from delocalization, the first trend in globalization.
In the ’90s, when the “end of history” seemed to go hand in hand with the consolidation of a new string of industrial technology giants (Microsoft, Apple, etc.), free software, which had been a subculture until then, built the first versions of Linux. Linux is the “steam engine” of the world that is emerging: the first expression of a new way of producing and, at the same time, a tool to transform the productive system. Over the next twenty years, free software would come to represent the greatest transfer of knowledge and value in the contemporary era, equivalent to several times all the foreign development aid sent from developed countries to those on the periphery since WWII.
Free software is a universal public good and, in an era in which information infrastructure is a fundamental part of any productive investment, a free form of capital. Free capital drove an even greater reduction in the optimum scale of production. It also helped distribute the value chains of physical goods with a strong technological component. Globalization and delocalization had broken the links of value creation in thousands of products throughout the world, especially in the less-developed nations of the Pacific basin, but all those chains were re-centralized in the US and, to a lesser extent, in Japan, Germany and other central countries, where big corporations (from Apple to Nike) branded, designed, marketed, and hoarded the benefits of intellectual property. Free software was key in allowing many of those chains to “insource” in countries like China and produce all the elements, including those of greater added value.
The immediate result was prodigious economic development, the greatest reduction in extreme poverty in the history of humanity, the greatest increase in real wages in the history of China, and the appearance of new global centers of innovation and production in coastal cities. These cities play by a new set of rules that, not surprisingly, include an extreme relaxation of intellectual property, an accelerated reduction of scales, and production and assembly systems that allow a formidable increase in scope, which is to say, in the variety of things produced.
As all these changes were set in motion in Asia, in Europe the free software model was expanding into a whole spectrum of sectors. Soon, groups appeared that replicated the mode of production based on the commons (“the P2P mode of production“) in all kinds of immaterial content—design, books, music, video—and increasingly, in the world of advanced services—finance, consultancy—and industrial products—drinks, specialized machinery, robots, etc.
But while the “P2P mode of production” is a fascinating path for a transition from capitalism to abundance, its direct impact—how many people live directly from the commons—is relatively small. As in Asia, Europe, and the US, structural change will begin in an intermediate space that is also based on the digital commons: the Direct Economy.
The Direct Economy is all those small groups of friends—and therefore, basically egalitarian organizations—that design a product which generally incorporates free software and knowledge into itself or its process of creation, sell it in advance on a crowdfunding platform (making bank financing or “shareholders” unnecessary), produce it in short runs of a few thousand in a factory, whether in China or next to their home, and use the proceeds to improve the design or create a new product.
The Direct Economy is bar owners who invest 10,000 euros in equipment and begin to produce beer 100 liters at a time, or who invest a few tens of thousands of euros and gain the capacity to prepare almost 1,500 liters every 12 hours in continuous production—and then go on to bottle it and begin distributing nearby and in networks of artisanal beer lovers. And of course, they will have more varieties than the big brewery in their area, higher quality, and a better quality/price ratio.
The Direct Economy is the academy or the high school that installs a MOOC or Moodle to be able to offer its students services over the summer, independent app developers, the role-playing bookstore that buys a 3D printer and starts selling its own figurines, or the children’s clothes store that starts designing and producing its own strollers, toys, or maternity bags.
All of them are small-scale producers making things that, until recently, only big businesses or institutions were able to make. All of them have more scope than the scale model. All of them, at some point in the process, use free software and knowledge, which reduces their capital needs even further. All of them take advantage of the Internet to reach providers and customers at low cost—for example, by being able to reach very geographically dispersed niches or find very specialized providers. Most will not have to resort to banks or investors to finance themselves, but rather, will use pre-sale and donation systems on the network to raise money. And some of them use the “commodification” of the manufacturing industry and its flexible production chains for the process.
As for internal organizing, we’re generally looking at models that are much “flatter” and more democratic than conventional businesses. While traditional businesses are autocracies, or at best aristocracies based on hierarchical command and responsibility, the large majority of projects in the Direct Economy are “ad-hocracies,” in which the needs of the moment shape teams and responsibilities. This happens even in cases where big businesses decide to take a gamble on creating a spin-off and competing in a new field. Instead of an org chart, there are task maps. Rather than “participation in management,” there emerges the type of energy that characterizes any group of friends making something “spontaneously.” If the legal process weren’t still so arduous, if it didn’t require notaries and endless paperwork, we would say that the natural form of the Direct Economy is worker cooperativism.
But none of this is as important as the broader meaning of the Direct Economy for people’s possibilities in life. In Wage Labor and Capital, one of his more accessible works, Marx explained the trap in the narrative that exalts social mobility and equality of opportunities: wages can’t become capital. Or, rather, couldn’t… and it’s true that this continues to be the case in a good part of the world and in many branches of industry. But we’re seeing something that is historically shocking—the reduction to zero of the cost of an especially valuable part of capital, the part that directly materializes knowledge (free software, free designs, etc.). And above all we see, almost day by day, how the optimum size of production, sector by sector, approaches or reaches the community dimension.
The possibility for the real community, the one based on interpersonal relationships and affections, to be an efficient productive unit is something radically new, and its potential to empower is far from having been developed. This means that we are lucky enough to live in a historical moment when it would seem that the whole history of technology, with all its social and political challenges, has coalesced to put us within reach of the possibility of developing ourselves in a new way and contributing autonomy to our community.
Today we have an opportunity that previous generations did not: to transform production into something done, and enjoyed, among peers. We can make work a time that is not walled off from life itself, which capitalism revealingly calls “time off.” That’s the ultimate meaning of producing in common today. That’s the immediate course of every emancipatory action. The starting point.
Translated by Steve Herrick from the original (in Spanish)
The post Why producing in common is the starting point appeared first on P2P Foundation.
The post Commons: A Frame for Thinking Beyond Growth appeared first on P2P Foundation.
Bio: Silke Helfrich works as an independent author, activist and scholar, with a variety of international and domestic partners. She is the editor and co-author of several books on the Commons, among them: Who Owns the World? The Rediscovery of the Commons (2009), Wealth of the Commons (2012) and Patterns of Commoning (2015). From 1996 to 1998 she was head of the Heinrich Böll Foundation Thuringia and from 1999 to 2007 head of the regional office of Heinrich Böll Foundation in San Salvador and Mexico City. She is cofounder of the Commons Strategies Group and the Commons-Institut e.V. and the primary author of the German-language CommonsBlog. This interview has been expanded upon from its original publication at the Green European Journal.
Q: In an interview with Transition Culture, you have said that the commons are more than just resources that we have to share: they are based on the notion of communities or networks that are sustainably managing and sharing collective resources which were given to us by nature, or which were produced collectively. Can you give us some successful examples of these commons?
A: Sure, but first of all, let us look at the idea of the commons. You cannot think about the commons if you don’t ask yourself, at the same time, who creates them, who cares for them, who protects them and who reproduces them. I used to say that commons don’t fall from the sky; they do not simply exist, but need to be “enacted”, so to speak. And this is why you cannot think about the commons without thinking about the notion of community. By community I mean the word in a very modern sense, ranging from intentional communities to global networks, or even huge, loosely connected networks of communities, i.e. not necessarily the kinds of small communities that are based on everyday face-to-face contact. This is one of the important aspects.
Another important thing to know is that commons are not automatically managed in a sustainable way. That would be wishful thinking. Nevertheless, research has shown that community forests, for example, or fisheries, are at least as sustainable, or even more so, when managed as commons than when privately managed.
A third important aspect is that, even if imperfect, commons-based solutions are more down-to-earth and more bottom-up than the ordinary management of resources. This also means that they are more democratic than solutions dependent on decisions taken by external entities. So, all affected people have a say in the management of the resources that they need to make use of. That makes a big difference to me.
For example, the Nepalese government decided in 1990 that it would hand over – or, more precisely, give back – complete control over their forests to the communities living within and sustaining themselves through these forests. So, one could say that 100% of forest stewardship in Nepal is commons-based; while in Mexico, more than 60% of the forests and lands are community lands. There are similar endeavours in Europe: I met a researcher from Romania, currently doing a mapping of traditional commons in her country. She said she had already counted 1100 forest commons. They are called “Obstea” there.
This, however, is not obvious to most people: one of the tragedies of the commons is that they remain largely invisible.
Q: Commons are, according to your definition, generally not based on money, legal contracts, or bureaucratic fiat. But can they (at least temporarily) coexist with a capitalist, growth-driven and growth-oriented economy?
A: In a way, there is no choice. Commons have to coexist with capitalism, simply because we live within capitalism; inevitably, we create our commons nested within capitalism. If we want to create a commons-based society, we need to start from where we are. So, we need to do it by growing out of capitalism. And that is why the question of protecting the commons from a takeover by market logics is so crucial. Once you have created a commons and are able to manage it as such, you need to make sure that this market logic doesn’t undermine it. You need to protect it from corruption within the commons and co-optation by external forces. This can be done through legal hacks, such as copyleft, or simply through sticking to your goal and mobilising the power of communities and networks again and again, consistently defending and protecting the commons.
Q: But hasn’t there been a history of backlash, because there are too many who would prefer to protect their current economic model?
A: Yes. That is why designing and strengthening commons at an institutional and infrastructural level is so important. We need to make sure that external forces aren’t governing that which needs to be governed by the people themselves. The good news is that if you strengthen the commons, you will have an impact on the whole. There is an interdependency: by widening the sphere of the commons you undermine the sphere of the market.
Strengthening the commons means that state powers should pay more attention to supporting a commons approach, in contrast to an approach driven by “more, better and faster than the other”.
One thing that is very helpful in understanding how “strengthening the commons” might work is to make the ongoing enclosures visible. We have been observing enclosures taking place for the last 1000 years. During the last three centuries they have principally been carried out by market and state, but also, in some cases, by the people themselves, due to a lack of awareness, to not knowing how to common (understood as a verb), or simply to having been brainwashed. To resist enclosure, you need to make sure that you understand the very concept and its subtlety.
The use of certain technologies seems to be one of the most dangerous tactics of enclosing our opportunities for self-determined production. Enclosing by imposing certain technologies means that the devices we use are designed to prescribe specific usage. For example, if you forget your laptop’s charger you probably can’t use your friend’s, because it doesn’t fit your computer. Making things incompatible is enclosure by design. The same happens if you use proprietary software. It will only permit you to do the things allowed by the software licensors. In the case of free software, however, you can copy it, share it, or develop it further without restrictions. This is a great difference, which affects the freedom and self-determination of people (it’s called “free software”, but in fact, it’s not about the freedom of the software, but about the freedom of the people).
In many cases, it’s the technology itself – legally protected – that puts us onto a certain track of doing things. Say, using the same software over and over again. And then we get used to it, and forget that out there we had way more options and different ways of acting more in accordance with our needs and less dependent on a provider with commercial interests. So, the first thing we need to do is to make visible that enclosures are literally everywhere. There are even enclosures of our minds. They are so subtle and internalised that we don’t even perceive them as such.
Enclosures are enacted in many different ways: by transforming our language (and our minds); using politics as well as state and market/economic power; certain legal tools; and by designing deterministic technologies. Just think about the market-based terms we are used to; for example, that we don’t want to “sell ourselves short” on the labour market.
And we end up speaking a ‘marketised’ language and believing that people were born to compete.
Q: Today, private property is seen as a precondition for our autonomy. Will the commons also change how we look at private property?
A: You cannot think about commons if you don’t rethink property. Thinking about property is thinking about access, use rights, and so on. In the commons economy, we need to switch from thinking about property as connected to the notion of “dominium” in Roman law (which means complete control over something, allowing you to sell and buy a certain resource) to the notion of use rights, referring back to the concept of “possession” (meaning: people who actually need and use the resource should be the ones who have a say in their management).
In any case, it is important to understand that the commons do not imply a denial of property regimes. “Each commons is somebody else’s commons” is a sentence I learned from Vandana Shiva. Rethinking property means rethinking our relationship with these “somebody elses.”
Q: Can the commons be tools that help our societies finally think beyond growth?
A: Sure. There are many reasons why – let me just mention one of them. Growth is partly driven by a debt-based money economy. So, if the way you make a living is completely based on the current monetary system, there is also a certain need to grow in order to compete and succeed within this debt-based economy. But this is contrary to what’s at the core of the commons.
The main concern in the commons economy is not to compete, but to make the best use of collective resources while finding ways to reproduce and crystallize them in such a way that no one is left behind. That implies that the main concern is not to build a business model out of the commons, but to meet the needs of as many people as possible. And if you don’t need to build a business model, everything is possible.
Q: How can we make ‘degrowth’ part of the vocabulary of the Left, if many of today’s Left-wing parties still formulate their messages by referring to economic growth?
A: Through creating ‘memes’ (for example, converting the commons into a meme). Memes spread by word of mouth and once this happens they trigger cultural change. The problem is that the Left relies on the same idea of “the economy” as most political players. They think that goods need to be produced via private entities in competition with each other, and they believe that the main thing that needs to be changed is the distribution of wealth after production. But if you really want to make degrowth or the commons a core idea, you need to think of a radical shift in the production modes, while focusing on pre-distribution instead of re-distribution. You need to start talking about the commons as a new mode of production, and as a different way of understanding the economy. In a commons economy there is, ideally, no division between production and reproduction, producer and consumer. You put, at the centre, forms of reproducing our livelihoods, which are not mediated through the market, money, or private agents competing against each other. How does this take shape? Through gifting; bartering; lending; co-using; co-producing; etc. This can be done in our community and beyond; it’s about federating the commons, so to speak. Creating Commons-Networks or Meta-Commons.
Let me give you an example: what makes community-supported agriculture structurally different from market-based food production? It produces vegetables, dairy products and the like, but it doesn’t produce “goods” or “products” to be sold on the market. In contrast, it produces “shares” distributed according to the self-determined rules of the participating community. This goes beyond sharing the harvest, as they also share the risk of production and if there is a bad harvest the whole community shares the burden. They cannot insist on “getting their product” for “the price they pay”.
Q: Is this also possible at a global scale? Can someone in Belgium share the risk with someone in Romania, or with a peasant in Nicaragua? Can we form a community with someone who is 2000 kilometres away?
A: The question points to something historically interesting. There is a new way of producing commons in the 21st century. If you think about food production, there is absolutely no need to share the risk of production with a peasant in Nicaragua. Peasants in Nicaragua can and should produce their own food. And we should too, instead of, say, importing soybeans to feed our pigs. If you think of food production or natural resource management, there is no need to share the physical means of production with people in other parts of the world. We should just get out of the way, allow them to produce their own food and protect their knowledge systems, which are tied to food-production.
However, in terms of producing machinery, hardware, cars, design, knowledge etc., we are seeing a new way of what I call “Commons Generating Peer Production”. We have digital infrastructures which allow us to follow the basic rule: “what is heavy is local; what is light is global” (Michel Bauwens, P2P Foundation). Knowledge, code and design are “light”. If you take into account that the lion’s share of the market value of cars, machinery, clothes, and so on is based on knowledge, code and design, we can share these globally. This doesn’t presuppose taking it away from someone else, as knowledge, code and design tend to become “more” when we share them. Such an approach can revolutionise production and enable local communities to produce locally what they were unable to produce in the current economy. Thus, we will see less transportation around the world.
Q: But that also means that we need to adjust our demands, as for example, we won’t be able to eat so many bananas in Romania anymore.
A: On the one hand, we need to make a distinction between real needs and what economists mean when they say “demand,” and ask ourselves: what do I really need in order to make a living; what do I really need to thrive as a human being? On the other hand, I can think of a scenario where we truly explore the option of producing locally what’s heavy and sharing globally what’s light. We might then still keep 10 or 20% of the international trade we have, without leaving the same carbon footprint that we leave right now.
Q: Do you think the societies of austerity-stricken countries of the south of Europe have started to successfully embrace the potential of the commons? Could “guerilla gardening” be such a phenomenon (as argued in an article by Orestes Kolokouris)? According to Kolokouris, the rapid development of urban gardening in Greece “coincides with the rapid deterioration of living standards in Greek society in recent years due to the deep crisis.”
A: I think that you cannot even understand how people would survive austerity if they didn’t apply commons-based solutions. How else could you become almost disconnected from the flow of resources and the flow of money in the market and still make a living? People can survive because they are connected to each other and they find common(s) solutions to their problems. The terminology may vary, but still: there are commons and commoning everywhere. Commoning shows one way out of the crisis, one which also disconnects us from its drivers and direct causes. In the south of Europe, there are many examples from the last few years, such as the solidarity clinics (citizen-run health clinics) in Greece.
Q: There has been, for many decades, an ongoing discussion about finding an alternative quantitative measurement to GDP; for example, gross national happiness. Even renowned economists, such as Joseph Stiglitz and Amartya Sen, have worked on this issue. Are these ideas compatible with the commons?
A: In a very deep sense, I would say that one of the major flaws of our modern way of thinking about the economy is the idea that everything can and has to be measured. And if you want to measure something, you need to make sure it is measurable. So you start “making” it measurable, which is a slow and often overlooked encroachment.
But how would you measure the commons, if they are about thriving communities, good livelihoods, autonomy and self-organisation? These are hardly measurable. The idea of alternative measurements is certainly interesting and important in order to show that the economy is about more than just stocks and flows, but it would not be the silver bullet that enables a commons-based economy and society. I very much appreciate what these researchers do, but I would appreciate it even more if they used their enormous creativity, energy and knowledge to enable a different mode of production and to rethink the whole.
The post SwellRT Free Software Contest – Enter by Sept. 18 appeared first on P2P Foundation.
If you like to code, value free/libre/open source software, and support a decentralized Internet, the SwellRT project invites you to participate in its free software contest, open through September 18.
Find more info at:
SwellRT is a real-time decentralized storage platform enabling real-time collaboration for Web applications. Multiple users can share and edit JavaScript objects in real time with transparent conflict resolution (eventual consistency). Changes are distributed in real time to any user or app accessing the shared object. SwellRT also provides out-of-the-box collaborative rich-text editing for Web applications, through an extensible text-editor Web component and API. SwellRT can be deployed as a decentralized network, so shared objects can be stored and synced across different federated servers in real time.
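To give a feel for what “transparent conflict resolution (eventual consistency)” means, here is a deliberately minimal sketch, not SwellRT’s actual API (which is JavaScript-based and richer than this): two replicas of a shared object accept concurrent writes and then exchange state, with a last-writer-wins rule guaranteeing that both copies converge to the same value.

```python
# Illustrative only: a toy model of eventually consistent shared objects.
# SwellRT's real protocol and API differ; the names here are invented.

class Replica:
    """One node's copy of a shared object: key -> (timestamp, value)."""

    def __init__(self):
        self.state = {}

    def set(self, key, value, timestamp):
        """Record a local write stamped with a logical clock value."""
        self._apply(key, (timestamp, value))

    def _apply(self, key, stamped):
        # Keep whichever write carries the later timestamp (last writer wins).
        if key not in self.state or stamped > self.state[key]:
            self.state[key] = stamped

    def merge(self, other):
        """Sync with another replica; merging in any order converges."""
        for key, stamped in other.state.items():
            self._apply(key, stamped)

a, b = Replica(), Replica()
a.set("title", "draft-from-a", timestamp=1)
b.set("title", "draft-from-b", timestamp=2)  # concurrent edit, later clock
a.merge(b)
b.merge(a)
print(a.state == b.state)  # True: both replicas hold the timestamp-2 value
```

The point of the sketch is only the shape of the guarantee: no server arbitrates the conflict, yet every replica that eventually syncs ends up with the same state.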
The post Is GNU social decentralized or distributed? appeared first on P2P Foundation.
This use of the distinction between network topologies helps us to understand how information flows through a network from one node to another, which nodes of the network are capable of retransmitting information to other nodes, whether there are nodes on whose survival the network depends, and whether some of the nodes have the ability to filter and control the information that the others receive. In summary, the debate on network topologies addresses the autonomy of nodes and structures of power. Not coincidentally, one of the most famous slogans of the cyberpunk movement reminds us that
Under every information architecture there hides a power structure.
The search for and distinction between different network topologies played an important role in the birth of the Internet. In 1962, a nuclear confrontation seemed to be an imminent threat. So, Paul Baran received an important order. The Rand Corporation asked him to define a structure to use to set up communication systems that could survive a first strike of a nuclear attack. The main result of Baran’s work can be seen in the image shown above to the right.
In 1966, Paul Baran, in his famous report on Darpanet, presented three different network topologies and their characteristics. The main difference between the three topologies is how robust they are during a nuclear attack or, in other words, to what extent they can tolerate disturbances without suffering a total collapse. We could have a long, drawn-out discussion on this topic and give a wide-ranging presentation on the measurement of the robustness of a network but, in summary, let’s just say that the more robust a network is, the fewer nodes are disconnected by the removal of any given node.
This, plus a look at the image above, allows us to easily see that the first two topologies, which is to say, centralized and decentralized networks, are highly dependent on the centralizing nodes — the centralized network at the global level, and the decentralized network at the local level. In the centralized network, the loss of the main node would result in the collapse of the whole network and, as a consequence, the surviving nodes would not be able to continue communicating with each other, for lack of the node that interconnects them. In contrast, in distributed networks — the third topology that appears in the first image of this post — each node is independent, and the fall of any node would not disconnect any other.
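Baran’s robustness argument can be checked with a small simulation. The graphs below are illustrative stand-ins of my own, not taken from his report: a five-node star for the centralized topology and a five-node ring for a (minimal) distributed one. We knock out one node and count how many nodes the largest surviving group still connects.

```python
from collections import deque

def reachable(adj, start):
    """Count nodes reachable from `start` via breadth-first search."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in adj[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return len(seen)

def survivors_after_removal(adj, removed):
    """Size of the largest communicating group once `removed` goes down."""
    remaining = {n: [m for m in nbrs if m != removed]
                 for n, nbrs in adj.items() if n != removed}
    return max(reachable(remaining, n) for n in remaining)

# Star (centralized): every node communicates only through hub 0.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
# Ring (distributed): each node has two independent neighbours.
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}

print(survivors_after_removal(star, 0))  # 1: hub down, every node isolated
print(survivors_after_removal(ring, 0))  # 4: all surviving nodes still connected
```

Removing the star’s hub leaves four mutually unreachable nodes, while removing any ring node leaves the other four fully connected — exactly the difference in robustness the two topologies are meant to illustrate.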
The originality of the Indianos was to use network topologies to explain the major features of social evolution since the eighteenth century as a function of the dominant media in each era (the post, the telegraph, the Internet). In the book The Power of Networks, we can read a broad historical tour through the last centuries and easily understand how technological advances gave life to new information structures which, in turn, created social changes. The key to the historical tour that we can read in The Power of Networks is in seeing people and connections between people where Baran saw computers and cables.
But, if through Baran’s view of a network topologies, we can technically measure the robustness of networks, what emerges from the view that David proposed to us a decade ago now in The Power of Networks?
This view quickly makes it clear that in distributed networks, the non-existence of central nodes not only makes it possible to have a network that is much more robust, but hierarchies also disappear, autonomy is favored and the control over others becomes impossible.
As a result, the nature of distributed networks is completely different from that of decentralized ones. A distributed network is not a more decentralized network. This is why it’s very important to answer the question of whether GNU social has a distributed or decentralized structure.
On the net, there are several descriptions of GNU social. Most of them present it as an alternative to Twitter or, more generally, as a microblogging service. Certainly, the current functions and options that GNU social offers are mostly characteristic of microblogging services. But in practice, what we find is that conversations quickly flourish once again, and that more and more new functions appear that reduce the validity of these descriptions.
This conversation, and especially the message below, put us on the track of a broader and more appropriate answer.
All microblogging and social networking sites are using selectively flawed ideas and should be transformed. Nobody needs ‘microblogging,’ they want socialization.
The desire to socialize and connect with each other points to the fact that all these systems and sites are not social networks in themselves, but tools that, like instant messaging and mail services, are used by social networks, which is to say, networks of people.
So we see that GNU social is a free tool for interconnection and communication used by different social networks. What functions will it offer, and what we will exchange through GNU social? That depends on what the social networks that use it want.
GNU social also has a particular characteristic that interests us especially, and it has to do with its structure. So, we return to the question, What is GNU social’s structure?
We’ve already written widely on this, because it is an important question to answer. What helps us distinguish clearly between the three basic network topologies is the interdependence of the nodes that make up the network. Interdependence tells us whether individual nodes depend on others to be able to communicate, and therefore defines how robust the network is under attack.
The nodes in a network driven by GNU social are the different installations (lamatriz.org, loadaverage.org, quitter.se, etc.). A quick look at the image next to this paragraph clearly shows us that the nodes of GNU social do not depend on each other to communicate, and that the fall of one of them does not endanger the survival of the network at all. As a consequence, GNU social has a distributed structure.
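The independence of GNU social nodes can be stated as a concrete check. The federation graph below is a hypothetical sketch of mine: it uses the instance names mentioned in the text and assumes each instance federates directly with every other, so no instance relays traffic on behalf of the rest.

```python
from itertools import combinations

# Hypothetical federation graph: instance names from the text, with the
# assumed property that every instance talks directly to every other.
instances = ["lamatriz.org", "loadaverage.org", "quitter.se"]
federation = {a: [b for b in instances if b != a] for a in instances}

def still_connected(adj, removed):
    """After `removed` goes down, can every surviving pair still communicate?"""
    remaining = {n: [m for m in nbrs if m != removed]
                 for n, nbrs in adj.items() if n != removed}
    # With direct federation, every surviving pair must still share an edge.
    return all(b in remaining[a] for a, b in combinations(remaining, 2))

# Distributed structure: no single instance's failure breaks the rest.
print(all(still_connected(federation, node) for node in instances))  # True
```

Contrast this with a centralized service: model Twitter as a star around one hub and `still_connected` fails the moment the hub is the node removed.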
From all this, we can draw two important conclusions. First, we realize that it is not necessary to look for a strict definition for GNU social, because what can be done with it will depend on what its users want. Secondly, GNU social has a distributed structure. This is an important distinction, because thanks to it, we see the birth of a social nature in which autonomy, privacy, and conversations are paramount.