What the decentralized web can learn from Wikipedia

By Eleftherios Diakomichalis, with Andrew Dickson & Ankur Shah Delight. Originally published in permaweird


In this post, we analyze Wikipedia — a site that has achieved tremendous success and scale through crowd-sourcing human input to create one of the Internet’s greatest public goods. Wikipedia’s success is particularly impressive considering that the site is owned and operated by a non-profit organization, and that almost all of its content is contributed by unpaid volunteers.

The non-commercial, volunteer-driven nature of Wikipedia may cause developers from the “decentralized web” to question the site’s relevance. However, these differences may be merely cosmetic: IPFS, for example, has no inherent commercial model, and most of the open source projects that underlie the decentralized web are built, at least in part, by volunteers.

We believe that a site that has managed to coordinate so many people to produce such remarkable content is well worth a look as we search for solutions to similar problems in the emerging decentralized web.

To better understand Wikipedia’s success, we first survey some key features of Wikipedia’s battle-tested (to the tune of 120,000 active volunteer editors) coordination mechanisms. Next, we present some valuable high-level lessons that blockchain projects interested in human input might learn from Wikipedia’s approach. Finally, we explore vulnerabilities inherent to Wikipedia’s suite of mechanisms, as well as the defenses it has developed to such attacks.

Wikipedia: key elements

While we cannot hope to cover all of Wikipedia’s functionality in this short post, we start by outlining a number of Wikipedia’s foundational coordination mechanisms as background for our analysis.

User and article Talk Pages

While anyone can edit an article anonymously on Wikipedia, most regular editors choose to register with the organization and gain additional privileges. As such, most editors, and all articles, have a public metadata page known as a talk page, for public conversations about the relevant user or article. Talk pages are root-level collaborative infrastructure: they allow conversations and disputes to happen frequently and publicly.

Since talk pages capture a history of each editor’s interaction — both in terms of encyclopedia content and conversational exchanges with other editors — they also provide the basis for Wikipedia’s reputation system.

Clear and accessible rules

If we think of the collection of mechanisms Wikipedia uses to coordinate its editors as a kind of “social protocol”, the heart of that protocol would surely be its List of Guidelines and List of Policies, developed and enforced by the community itself. According to the Wikipedia page on Policies and Guidelines:

“Wikipedia policies and guidelines are developed by the community… Policies are standards that all users should normally follow, and guidelines are generally meant to be best practices for following those standards in specific contexts. Policies and guidelines should always be applied using reason and common sense.”

For many coming from a blockchain background, such policies and guidelines will likely seem far too informal to be of much use, especially without monetary or legal enforcement. And yet, the practical reality is that these mechanisms have been remarkably effective at coordinating Wikipedia’s tens of thousands of volunteer editors over almost two decades, without having to resort to legal threats or economic incentives for enforcement.

Enforcement: Peer consensus and volunteer authority

Given that anyone can edit a Wikipedia page, that no money is staked, that no contracts are signed, and that neither paid police nor smart contracts are available to enforce the guidelines, an obvious question arises: why are the rules actually followed?

Wikipedia’s primary enforcement strategy is peer-based consensus. Editors know that when peer consensus fails, final authority rests with a small group of privileged volunteer authorities with long-standing reputations at stake.

Peer consensus

As an example, let’s consider three of the site’s most fundamental content policies, often cited together: “Neutral Point of View” (NPOV), “No Original Research” (NOR), and “Verifiability” (V). These policies evolved to guide editors towards Wikipedia’s mission of an unbiased encyclopedia.

If I modify the Wikipedia page for Mahatma Gandhi, changing his birthdate to the year 1472, or offering an ungrounded opinion about his life or work, there is no economic loss or legal challenge. Instead, because there is a large community of editors who do respect the policies (even though I do not), my edit will almost certainly be swiftly reverted until I can credibly argue that my changes meet Wikipedia’s policies and guidelines (“Neutral Point of View” and “Verifiability”, in this case).

Such discussions typically take place on talk pages, either the editor’s or the article’s, until consensus amongst editors is achieved. If I insist on maintaining my edits without convincing my disputants, I risk violating other policies, such as 3RR (explained below), and attracting the attention of an administrator.

Volunteer authority: Administrators and Bureaucrats

When peer consensus fails, and explicit authority is needed to resolve a dispute, action is taken by an experienced volunteer editor with a long and positive track record: an Administrator.

Administrators have a high degree of control over content and user access, including blocking and unblocking users, editing protected pages, and deleting and undeleting pages. Because there are relatively few of them (~500 active administrators for English Wikipedia), being an administrator is quite an honor. Once nominated, adminship is determined through discussion on the user’s nomination page, not voting, with a volunteer bureaucrat gauging the positivity of comments at the end of the discussion. In practice, candidates with more than 75% positive comments tend to pass.

Bureaucrats are the highest level of volunteer authority in Wikipedia, and are typically administrators as well. While administrators have the final say for content decisions, bureaucrats hold the ultimate responsibility for adding and removing all kinds of user privileges, including adminship. Like administrators, bureaucrats are determined through community discussion and consensus. However, they are even rarer: there are currently only 18 for the entire English Wikipedia.

Since there is no hard limit to the number of administrators and bureaucrats, promotion is truly meritocratic.

Evolving governance

Another notable aspect of Wikipedia’s policies and guidelines is that they can change over time. And in principle, changing a Wikipedia policy or guideline page is no different than changing any other page on the site.

The fluidity of the policies and guidelines plays an important role in maintaining editors’ confidence in enforcing the rules. After all, people are much more likely to believe in rules that they helped create.

If we continue to think of the policies and guidelines for Wikipedia as a kind of protocol, we would say that the protocol can be amended over time and that the governance for its evolution takes place in-protocol — that is, as a part of the protocol itself.

Lessons for the decentralized web

Now that we have a little bit of background on Wikipedia’s core mechanisms, we will delve into the ways that Wikipedia’s approach to coordination differs from similar solutions in public blockchain protocols. There are three areas where we believe the decentralized web may have lessons to learn from Wikipedia’s success: cooperative games, reputation, and an iterative approach to “success”.

We also hope that these lessons may apply to our problem of generating trusted seed sets for Osrank.

Blockchain should consider cooperative games

Examining Wikipedia with our blockchain hats on, one thing that jumps out right away is that pretty much all of Wikipedia’s coordination games are cooperative rather than adversarial. For contrast, consider Proof of Work as it is used by the Bitcoin network. Because running mining hardware costs money in the form of electricity and because only one node can get the reward in each block, the game is inherently zero-sum: when I win, I earn a block reward; every other miner loses money. It is the adversarial nature of such games that leaves us unsurprised when concerns like selfish mining start to crop up.

As an even better example, consider Token Curated Registries (TCRs). We won’t spend time describing the mechanics of TCRs here, because we plan to cover the topic in more detail in a later post. But for now, the important thing to know is that TCRs allow people to place bets, with real money, on whether or not a given item will be included in a list. The idea is that, like an efficient market, the result of the betting will converge to produce the correct answer.

One problem with mechanisms like TCRs is that many people have a strong preference against playing any game in which they have a significant chance of losing — even if they can expect their gains to make up for their losses over time. In behavioral psychology, this result is known as loss aversion and has been confirmed in many real-world experiments.

In short, Proof of Work and TCRs are both adversarial mechanisms for resolving conflicts and coming to consensus. To see how Wikipedia resolves similar conflicts using cooperative solutions, let’s dive deeper into what dispute resolution looks like on the site.

Dispute resolution

So how does a dubious change to Mahatma Gandhi’s page actually get reverted? In other words, what is the process by which that work gets done?

When a dispute first arises, Wikipedia instructs the editors to avoid their instinct to revert or overwrite each other’s edits, and to take the conflict to the article’s talk page instead. Some quotes from Wikipedia’s page on Dispute Resolution point to the importance of the Talk pages:

“Talking to other parties is not a mere formality, but an integral part of writing the encyclopedia”

“Sustained discussion between the parties, even if not immediately successful, demonstrates your good faith and shows you are trying to reach a consensus.”

Editors who insist on “edit warring”, or simply reverting another editor’s changes without discussion, risk violating Wikipedia’s 3RR policy, which prohibits editors from making more than three reverts on a single page within a 24-hour period. Editors who violate 3RR risk a temporary suspension of their accounts.
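
To make the shape of such a rule concrete, here is a minimal sketch of a 3RR-style check. The function, data structures, and parameters are illustrative assumptions for this post, not Wikipedia’s actual implementation.

```python
from datetime import datetime, timedelta

def violates_three_revert_rule(revert_times, now, window_hours=24, max_reverts=3):
    """Return True if an editor has made more than `max_reverts` reverts
    on a single page within the trailing time window.

    `revert_times` is a list of datetimes, one per revert by this editor
    on one page. Illustrative only; not MediaWiki's actual logic.
    """
    window_start = now - timedelta(hours=window_hours)
    recent = [t for t in revert_times if t >= window_start]
    return len(recent) > max_reverts

# Example: a fourth revert within a few hours trips the rule.
t0 = datetime(2020, 4, 15, 8, 0)
reverts = [t0 + timedelta(hours=h) for h in (0, 1, 2, 3)]
print(violates_three_revert_rule(reverts, now=t0 + timedelta(hours=4)))  # True
```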

If initial efforts by the editors to communicate on the Talk Page fail, Wikipedia offers many additional solutions for cooperative coordination, including:

  • Editor Assistance provides one-on-one advice from an experienced editor on how to conduct a civil, content-focused discussion.
  • Moderated Discussion offers the facilitation help of an experienced moderator, and is only available after lengthy discussion on the article’s Talk page.
  • 3rd Opinion matches the disputants with a third, neutral opinion, and is only available for disputes involving only two editors.
  • Community Input allows the disputants to get input from a (potentially) large number of content experts.

Binding arbitration from the Arbitration Committee is considered the option of last resort, and is the only option in which the editors are not required to come to a consensus on their own. According to Wikipedia’s index of arbitration cases, this mechanism has been invoked only 513 times since 2004 — a strong vote of confidence for its first-pass dispute resolution mechanisms.

A notable theme of all of these dispute resolution mechanisms is how uniformly cooperative they are. In particular, it is worth observing that in no case can any editor lose something of significant economic value, as they might, for instance, if a TCR was used to resolve the dispute.

What the editor does lose, if her edit does not make it into the encyclopedia, is whatever time and work she put into the edit. This risk likely incentivises editors to make small, frequent contributions rather than large ones and to discuss major changes with other editors before starting work on them.

“Losing” may not even be the right word. As long as the author of the unincluded edit believes in Wikipedia’s process as a whole, she may still view her dispute as another form of contribution to the article. In fact, reputation-wise, evidence of a well-conducted dispute only adds credibility to the user accounts of the disputants.

Reputation without real-world identity can work

Another lesson from Wikipedia relates to what volunteer editors have at stake and how the site’s policies use that stake to ensure their good behavior on the system.

Many blockchain systems require that potential participants stake something of real-world value, typically either a bond or an off-chain record of good “reputation”. For example, in some protocols, proof-of-stake validators risk losing large amounts of tokens if they don’t follow the network’s consensus rules. In other networks, governors or trustees might be KYC’d with the threat of legal challenge, or public disapproval, if they misbehave.

Wikipedia appears to have found a way to incentivize participants’ attachment to their pseudonyms without requiring evidence of real-world identity. We believe this is because reputation in Wikipedia’s community is based on a long-running history of small contributions that is difficult and time-consuming to fake, outsource, or automate.

Once an editor has traded anonymity for pseudonymity and created a user account, the first type of reputation that is typically considered is their “edit count”. Edit count is the total number of page changes that the editor has made over their history of contributing to Wikipedia. In a sense, edit count is a human version of proof-of-work, because it provides a difficult-to-fake reference for the amount of work the editor has contributed to the site.

If edit count is the simplest quantitative measure of a user’s total reputation on the site, its qualitative analog is the user talk pages. Talk pages provide a complete record of the user’s individual edits, as well as a record of administrative actions that have been taken against the user, and notes and comments by other users. The Wikipedia community also offers many kinds of subjective awards which contribute to editor reputation.

Reputable editors enjoy privileges on Wikipedia that cannot be earned in any other way — in particular, a community-wide “benefit of the doubt”. Wikipedia: The Missing Manual’s page on vandalism and spam provides a good high-level overview, instructing editors who encounter a potentially problematic edit to first visit the author’s talk page. Talk pages with lots of edits over time indicate the author should be assumed to be acting in good faith, and notified before their questionable edit is reverted: “In the rare case that you think there’s a problem with an edit from this kind of editor, chances are you’ve misunderstood something.”

On the other hand, the same source’s recommendations for questionable edits by anonymous editors, or editors with empty talk pages, are quite different: “If you see a questionable edit from this kind of user account, you can be virtually certain it was vandalism.”
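
As a thought experiment, the “benefit of the doubt” heuristic described above can be caricatured in a few lines of code. The categories and cutoffs below are invented for illustration and are not Wikipedia policy or software.

```python
def assumed_intent(registered, edit_count, talk_page_entries):
    """Toy model of the 'benefit of the doubt' heuristic for a questionable edit.

    Editors with a long public track record are presumed to act in good faith;
    anonymous or brand-new accounts receive far less latitude. All cutoffs are
    invented for illustration.
    """
    if not registered:
        return "probable vandalism: revert and leave a warning"
    if edit_count > 1000 and talk_page_entries > 50:
        return "assume good faith: discuss on the talk page before reverting"
    if edit_count > 100:
        return "assume good faith: review the edit carefully"
    return "new account: review the edit and leave a friendly note"

print(assumed_intent(registered=True, edit_count=5000, talk_page_entries=200))
print(assumed_intent(registered=False, edit_count=0, talk_page_entries=0))
```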

Blockchains which adopt similar reputation mechanisms might expect to see two major changes: slower evolution of governance and sticky users. And while no public blockchains that we’re aware of have made significant use of pseudonymous reputation, it’s worth noting that such mechanisms have played a significant role in the increasing adoption of the Dark Web.

Assigning power based on a long history of user edits means that the composition of the governing class necessarily changes slowly and predictably, and is therefore less subject to the “hostile takeovers” that are a fundamental risk for many token-voting-based schemes.

Sticky users are a consequence of the slow accretion of power: experienced users tend to stick to their original pseudonym precisely because it would be time-consuming to recreate a similar level of privilege (both implicit and explicit) under a new identity.

All in all, Wikipedia’s reputation system may represent an excellent compromise between designs offering total anonymity on one hand and identity models built on personally identifying information on the other. In particular, such a system has the benefit of allowing users to accrue reputation over time and resisting Sybil attacks by punishing users if and when they misbehave. At the same time, it also allows users to preserve the privacy of their real-world identities if they wish.

Iteration over finality

Wikipedia’s encyclopedic mission, by its very nature, can never be fully completed. As such, the site’s mechanisms do not attempt to resolve conflicts quickly or ensure the next version of a given page arrives at the ultimate truth, but rather, just nudge the encyclopedia one step closer to its goal. This “iterative attitude” is particularly well-suited to assembling human input. Humans often take a long time to make decisions, change their minds frequently, and are susceptible to persuasion by their peers.

What can Radicle, and other p2p & blockchain projects, learn from Wikipedia in this regard? Up to this point, many protocol designers in blockchain have had a preference for mechanisms that achieve “finality” — that is, resolve to a final state, with no further changes allowed — as quickly as possible. There are often very good reasons for this, particularly in the area of consensus mechanisms. And yet, taking inspiration from Wikipedia, we might just as easily consider designs that favor slow, incremental changes over fast, decisive ones.

For instance, imagine a protocol in which (as with Wikipedia) it is relatively easy for any user to change the system state (e.g. propose a new trusted seed), but such a change might be equally easily reverted by another user, or a group of users with superior reputation.

Or consider a protocol in which any state change is rolled out over a long period of time. In Osrank, for instance, this might mean that trusted seeds would start out as only 10% trusted, then 20% trusted one month later, and so on. While such a design would be quite different from how Wikipedia works today, it would hew to the same spirit of slow, considered change over instant finality.
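
A minimal sketch of such a rollout schedule, using the figures from the example above (10% trusted at introduction, rising by ten percentage points per month): the function name and interface here are hypothetical and are not part of any Osrank specification.

```python
def seed_trust(months_since_added, start=0.10, step=0.10, cap=1.0):
    """Trust assigned to a newly proposed seed, ramping up over time.

    Mirrors the example in the text: 10% trusted at introduction, 20% one
    month later, and so on, capped at full trust. Purely illustrative of
    the 'slow rollout' idea; not an actual Osrank specification.
    """
    if months_since_added < 0:
        return 0.0
    return min(cap, start + step * months_since_added)

for month in (0, 1, 5, 9, 12):
    print(month, seed_trust(month))  # 0.1, 0.2, 0.6, 1.0, 1.0
```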

Attacks and defenses

While the previous section covered a number of ways in which Wikipedia’s mechanisms have found success up to this point, the true test of a decentralized system is how vulnerable it is to attacks and manipulation. In this section, we introduce Wikipedia’s perspective on security. We then examine some of Wikipedia’s vulnerabilities, the attacks that play upon them and the defenses the Wikipedia community has evolved.

How Wikipedia Works: Chapter 12 discusses the fact that nearly all of the security utilized by Wikipedia is “soft security”:

“One of the paradoxes of Wikipedia is that this system seems like it could never work. In a completely open system run by volunteers, why aren’t more limits required? One answer is that Wikipedia uses the principle of soft security in the broadest way. Security is guided by the community, rather than by restricting community actions ahead of time. Everyone active on the site is responsible for security and quality. You, your watchlist, and your alertness to strange actions and odd defects in articles are part of the security system.”

What does “soft security” mean? It means that security is largely reactionary, rather than preventative or broadly restrictive on user actions in advance. With a few exceptions, any anonymous editor can change any page on the site at any time. The dangers of such a policy are obvious, but the advantages are perhaps less so: Wikipedia’s security offers a level of adaptability and flexibility that is not possible with traditional security policies and tools.

Below, we discuss three kinds of attacks that Wikipedia has faced through the years: Bad Edits (vandalism and spam), Sybil Attacks, and Editing for Pay. For each attack we note the strategies and solutions Wikipedia has responded with and offer a rough evaluation of their efficacy.

Bad edits: Vandalism and spam

The fact that anyone with an internet connection can edit almost any page on Wikipedia is one of the site’s greatest strengths, but perhaps also its greatest vulnerability. Edits not in service of Wikipedia’s mission fall into two general categories: malicious edits (vandalism) and promotional edits (spam).

While Wikipedia reader/editors are ultimately responsible for the clarity and accuracy of the encyclopedia’s content, a number of tools have been developed to combat vandalism and spam. Wikipedia: The Missing Manual gives a high-level overview:

  • Bots. Much vandalism follows simple patterns that computer programs can recognize. Wikipedia allows bots to revert vandalism: in the cases where they make a mistake, the mistake is easy to revert. (A minimal sketch of this kind of pattern matching follows this list.)
  • Recent changes patrol. The RCP is a semi-organized group of editors who monitor changes to all the articles in Wikipedia, as the changes happen, to spot and revert vandalism immediately. Most RC patrollers use tools to handle the routine steps in vandal fighting.
  • Watchlists. Although the primary focus of monitoring is often content (and thus potential content disputes, as described in Chapter 10: Resolving content disputes), watchlists are an excellent way for concerned editors to spot vandalism.
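
To give a flavour of the simple patterns such bots look for, here is a minimal, hypothetical sketch in Python. The patterns and thresholds are invented for this illustration; real anti-vandalism bots (ClueBot NG, for example) use far richer rule sets and machine-learning classifiers.

```python
import re

# Illustrative patterns of the kind a simple anti-vandalism bot might flag:
# page blanking, long runs of a repeated character, and extended shouting.
SUSPICIOUS_PATTERNS = [
    re.compile(r"^\s*$"),           # the page text was blanked entirely
    re.compile(r"(.)\1{20,}"),      # one character repeated 20+ times
    re.compile(r"\b[A-Z]{15,}\b"),  # very long all-caps "shouting"
]

def looks_like_vandalism(new_text):
    """Return True if the edited text matches any crude vandalism pattern."""
    return any(p.search(new_text) for p in SUSPICIOUS_PATTERNS)

print(looks_like_vandalism("Gandhi was born on 2 October 1869."))  # False
print(looks_like_vandalism("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"))      # True
```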

Given the incredible popularity, and perceived respectability, of Wikipedia, it’s safe to say that the community’s defenses against basic vandalism and spam are holding up quite well overall.

Sybil attacks

Sybil attacks, endemic to the blockchain ecosystem, are known on Wikipedia as “sockpuppetry”: the use of multiple accounts (“sockpuppets”) controlled by the same person. Sockpuppets are usually employed when one person wants to appear to be multiple editors, or wants to continue editing after being blocked.

While sockpuppets are harder to detect in an automated fashion than vandalism and spam, there is a process for opening sockpuppet investigations and a noticeboard for ongoing investigations. Well-thought-out sockpuppetry attacks are time-consuming both to mount and to defend against. While dedicated investigators (known as clerks) are well-suited to the task, it is impossible to know how much successful sockpuppetry has yet to be discovered.

Hired guns — Editing for pay

Hired guns — editors who make changes in exchange for pay — are becoming an increasingly serious concern for Wikipedia, at least according to a 2018 Medium post, “Wikipedia’s Top-Secret ‘Hired Guns’ Will Make You Matter (For a Price)”, in which author Stephen Harrison writes:

“A market of pay-to-play services has emerged, where customers with the right background can drop serious money to hire editors to create pages about them; a serious ethical breach that could get worse with the rise of—wait for it—cryptocurrency payments.”

In the post, Harrison draws on a number of interviews he conducted with entrepreneurs running businesses in this controversial space. According to Harrison, businesses like What About Wiki operate in secret, using large numbers of sockpuppet accounts, and do not disclose that their edits are being made in exchange for pay.

In the past, Wikipedia prohibited all such activities, and in fact businesses like What About Wiki violate Wikipedia’s Terms of Use — a legally binding agreement. However, that seems to be changing. According to Harrison:

“A 2012 investigation discovered that the public relations firm Wiki-PR was editing the encyclopedia using multiple deceptive sock-puppet accounts for clients like Priceline and Viacom. In the wake of the Wiki-PR incident, the Wikimedia Foundation changed its terms of use in 2014 to require anyone compensated for their contributions to openly disclose their affiliation.”

The upshot is that since 2014, paid editing has been allowed on the site so long as the relationship is disclosed.

And yet, major questions remain. For one thing, at least according to Harrison’s analysis, companies acting in compliance with Wikipedia’s disclosure policy represent just a small fraction of the paid editors working (illegitimately) on the site. For another, he argues that complying with Wikipedia’s policies leads to paid editors making less money, because there’s a lower chance their edits will be accepted and therefore less chance the clients will be willing to foot the bill.

This leads to a final question, which is whether paid edits can ever really be aligned with the deep values that Wikipedia holds. For instance, one of Wikipedia’s main behavior guidelines prohibits editors from working on a given page when they have a conflict of interest. It’s hard to imagine a clearer conflict of interest than a paid financial relationship between the editor and the subject of a page.

DAOs

Wikipedia’s success is inspirational in terms of what can be accomplished through decentralized coordination of a large group of people. While we believe that the decentralized web still has many lessons to learn from the success of Wikipedia — and we’ve tried to touch a few in this post — a great deal of work and thinking has already been done around how a large organization like Wikipedia could eventually be coordinated on-chain.

Such organizations are known as Decentralized Autonomous Organizations (DAOs), and that will be the topic of a future post.


Photo by designwebjae (Pixabay)

Essay of the Day: Open and Collaborative Developments

Open and Collaborative Developments by Patrick Van Zwanenberg, Mariano Fressoli, Valeria Arza, Adrian Smith and Anabel Marin.

Download PDF

Experimentation with radically open and collaborative ways of producing knowledge and material artefacts can be found everywhere – from the free/libre and open-source software movement to citizen science initiatives, and from community-based fabrication labs and makerspaces to the production of open-source scientific hardware. Spurred on by the widespread availability of networked digital infrastructure, what such initiatives share in common is the (re)creation of knowledge commons, and an attempt to redistribute innovative agency across a much broader array of actors.

In this working paper we reflect on what these emerging practices might mean for helping to cultivate more equitable and sustainable patterns of global development. For many commentators and activists such initiatives promise to radically alter the ways in which we produce knowledge and material artefacts – in ways that are far more efficient, creative, distributed, decentralized, and democratic. Such possibilities are intriguing, but not without critical challenges too.

We argue that key to appreciating if and how collaborative, commons-based production can fulfil such promises, and contribute to more equitable and sustainable patterns of development, are a series of challenges concerning the knowledge politics and political economy of the new practices. We ask: what depths and forms of participation are being enabled through the new practices? In what senses does openness translate to the ability to use knowledge? Who is able to allocate resources to, and to capture benefits from, the new initiatives? And will open and collaborative forms of production create new relations with, or even transform, markets, states, and civil society or will they be captured by sectional interests?

Photo by CaZaTo Ma


Reposted from The Steps Centre

A rebellious hope

Cross-posted from Shareable

Neal Gorenflo: The English translation for the Rural Social Innovation manifesto was not ready when Alex Giordano asked me to write the preface to it. I agreed expecting the manifesto to be like many I’ve read online, relatively short and easy to digest. I thought I could quickly write an introduction. This was not to be. Alex and Adam have put together an impressive, unique, and in-depth manifesto packed with world-changing ideas delivered in a style that powerfully communicates the spirit of RuralHack and its partners — a rebellious hope that rests on a firm foundation of pragmatism and a love of people and place. Indeed, the Rural Social Innovation manifesto is unlike any manifesto I’ve read.

For starters, it’s front loaded with and is mostly composed of a series of profiles showcasing the ideas of the people behind the Italian rural social innovation movement. In this way, it’s like the Bible’s New Testament with each disciple giving their version of the revolution at hand in a series of gospels. It says a lot about this manifesto that the people in the document come first, not the ideas. The gospel of each rural innovator not only transmits important ideas, but gives up to the reader individuals who embody the movement. These are the living symbols of the movement who are not only individual change agents themselves but representatives of their unique communities and their streams of action in the past, present, and planned into the future. This gives the manifesto a unique aliveness. It’s not a compendium of dry ideas. It’s a manifesto of flesh in motion and spirit in action.

  • There’s Roberto Covolo who has turned negative elements of Mediterranean culture into a competitive advantage by upgrading the ExFadda winery with the youth of the School of Hot Spirits.
  • There’s Simone Cicero of OuiShare testifying about the promise of the collaborative economy and how it can help rural producers capture more economic value while building solidarity.
  • There’s Jaromil Rojo who asks, “How does the design approach connect hacker culture and permaculture?”
  • There’s Christian Iaione of Labgov who is helping bring to life a new vision of government, one in which the commons is cared for by many stakeholders, not just the government.
  • And there are many more who share their projects, hopes, and dreams. All the same, Alex and Adam do the reader the favor of crystallizing the disciples’ ideas into a crisp statement of the possibilities at hand.

To extend the New Testament metaphor, the subject of these gospels isn’t a prophet, but a process, one that is birthing a new kingdom. The process is a new way to run an economy called commons-based peer production. This is a fancy phrase which simply means that people cut out rent-seeking middlemen and produce for and share among themselves. The time has finally arrived when, through cheap production technologies, open networks, and commons-based governance models, people can actually do this.

This new way of doing things is the opposite of and presents an unprecedented challenge to the closed communities and entrenched interests that have for so long controlled the politics and economies of rural towns and regions. The old, industrial model of production concentrated wealth into the hands of the few while eroding the livelihoods, culture, and environment of rural people. It impoverished rural people in every way while pushing mass quantities of commodity products onto the global market. It exported the degradation of rural people to an unknowing public. What’s possible now is the maintenance and re-interpretation of traditional culture through a new, decentralized mode of production and social organization that places peer-to-peer interactions and open networks at the core. In short, it’s possible that a commons-based rural economy can spread the wealth and restore the rich diversity of crops, culture, and communities in rural areas.

What’s also possible is a new way for rural areas to compete in the global economy. The best way to compete is for rural areas to develop the qualities and products that make them most unique. In other words, the best way to compete is to not compete. This means a big turn away from commodity products, experiences, and places. This may only be possible through a commons-based economy that’s run by, of, and for the people.

It may be the only way that rural areas can attract young people and spark a revival. Giant corporations maniacally focused on mass production, growth and profit are incapable of this. Yet many rural communities still stake their future on such firms and their exploitative, short-term, dead-end strategies. The above underscores the importance of this manifesto.

The transition to a new rural economy is a matter of life or death. The rapid out-migration from rural areas will continue if there’s no way for people to make a life there. The Italian countryside will empty out and the world will be left poorer for it. A pall of hopelessness hangs over many rural areas because this process seems irreversible. While this new rural economy is coming to life, its success is uncertain. It will likely be an uneven, difficult, and slow transition if there’s a transition at all. It will take people of uncommon vision, commitment and patience to make it happen. It will take people like those profiled in the coming pages who embody the famous rallying chant of farm worker activist Dolores Huerta, “Si se Puede” or yes we can.

Editor’s note: This is a version of the preface written for the Rural Social Innovation manifesto. Read the full version here. Header image from the Rural Social Innovation manifesto

Essay of the day: When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance

When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance, a working paper/preprint by David Rozas, Antonio Tenorio-Fornés, Silvia Díaz-Molina and Samer Hassan. Universidad Complutense de Madrid (UCM).

Abstract

Blockchain technologies have generated excitement, yet their potential to enable new forms of governance remains largely unexplored. Two confronting standpoints dominate the emergent debate around blockchain-based governance: discourses characterised by the presence of techno-determinist and market-driven values, which tend to ignore the complexity of social organisation; and critical accounts of such discourses which, whilst contributing to identifying limitations, consider the role of traditional centralised institutions as inherently necessary to enable democratic forms of governance. Therefore the question arises, can we build perspectives of blockchain-based governance that go beyond markets and states? In this article we draw on the Nobel laureate economist Elinor Ostrom’s principles for self-governance of communities to explore the transformative potential of blockchain. We approach blockchain through the identification and conceptualisation of affordances that this technology may provide to communities. For each affordance, we carry out a detailed analysis situating each in the context of Ostrom’s principles, considering both the potentials of algorithmic governance and the importance of incorporating communities’ social practices. The relationships found between these affordances and Ostrom’s principles allow us to provide a perspective focussed on blockchain-based commons governance. By carrying out this analysis, we aim to expand the debate from one dominated by a culture of competition to one that promotes a culture of cooperation.

Introduction

In November 2008 a paper published anonymously presented Bitcoin: the first cryptocurrency based purely on a peer-to-peer system (Nakamoto, 2008). For the first time, no third parties were necessary to solve problems such as double-spending. The solution was achieved through the introduction of a data structure known as a blockchain. In simple terms, a blockchain can be understood as a distributed and append-only ledger. Data, such as the history of transactions generated by using cryptocurrencies, can be stored in a blockchain without the need to trust a third party, such as a bank server. From a technical perspective, blockchain enables the implementation of novel properties at an infrastructural level in a fully decentralised manner.
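
To make the data structure concrete, here is a minimal sketch of a hash-linked, append-only ledger of the kind described above. It is deliberately simplified (no networking, consensus, or proof-of-work) and is not drawn from any particular blockchain implementation.

```python
import hashlib
import json

def block_hash(contents):
    """Hash a block's contents, which include the hash of its predecessor."""
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Append a new block; altering any earlier block breaks the hash links."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    contents = {"index": len(chain), "prev_hash": prev, "transactions": transactions}
    block = dict(contents, hash=block_hash(contents))
    chain.append(block)
    return chain

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(chain[1]["prev_hash"] == chain[0]["hash"])  # True: the blocks are linked
```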

The properties most cited by blockchain enthusiasts at this infrastructural level include immutability, transparency, persistency, resilience and openness (Underwood 2016; Wright & De Filippi 2015), among others. Certainly, some technical infrastructures could previously provide these properties, e.g. the immutability and openness provided by content repositories like Github or Arxiv.org, or the persistence and resilience provided by large web services such as Amazon or Facebook. However, the implementation of these solutions relied on a trusted third party. There have been other decentralised technical infrastructures with varying degrees of success which also reflect some of these properties, e.g. the Web has been traditionally shown as an example of openness, although with uneven persistence (Koehler 1999), or BitTorrent peer-to-peer sharing networks are considered open, resilient and partially transparent (Cohen 2003). However, none of the existing decentralised technologies have enabled all these properties (and others) at once in a robust manner, while maintaining a high degree of decentralisation. It is precisely the possibility of developing technological artefacts which rely on a fully distributed infrastructure that is generating enthusiasm, or “hype” according to some authors (Reber & Feuerstein, 2014), with regards to the potential applications of blockchain.

In this article we focus on some of these potential applications of blockchain. More precisely, we reflect on the relationship between blockchain properties and the generation of potentialities which could facilitate governance processes. Particularly, we focus on the governance of Commons-Based Peer Production (CBPP) communities. The term, originally coined by Benkler (2002), refers to an emergent model of socio-economic production in which groups of individuals cooperate with each other to produce shared resources without a traditional hierarchical organisation (Benkler, 2006). There are multiple well-known examples of this phenomenon, such as Wikipedia, a project to collaboratively write a free encyclopedia; OpenStreetMap, a project to create free/libre maps of the World collaboratively; or Free/Libre Open Source Software (FLOSS) projects such as the operating system GNU/Linux or the browser Firefox. Research carried out drawing on crowdsourcing techniques (Fuster Morell et al., 2016a) found examples of the broad diversity of areas in which the collaborative work on commons is present. This includes open science, urban commons, peer funding and open design, to name but a few. Three main characteristics of this mode of production are salient in the literature on CBPP (Arvidsson et al., 2017). Firstly, CBPP is marked by decentralisation, since authority resides in individual agents rather than a central organiser. Secondly, it is commons-based because CBPP communities make frequent use of common resources, i.e. shared resources which are openly accessible and whose ownership is collectivised. These resources can be immaterial, such as source code in free software, or material, such as 3D printers shared in small-scale workshops known as Fab Labs. Thirdly, there is a prevalence of non-monetary motivations. These motivations are, however, commonly intertwined with extrinsic motivations. As a result, a wide spectrum of motivations and multiple forms of value operate in CBPP communities (Cheshire & Antin 2008), beyond monetary value, e.g. use value, reputational and ecosystemic value (Fuster Morell et al., 2016b).

The three aforementioned characteristics of peer production are in fact aligned with blockchain features. First, both CBPP and blockchain strongly rely on decentralised processes, thus, the possibility of using blockchain infrastructure to support CBPP processes arises. Secondly, the shared commons in CBPP corresponds to the shared ledger present in blockchain infrastructure, where data and rules are transparent, open, collectively owned, and in practice managed as a commons. This leads to the question if such blockchain commons could host or support commons resources, or “commonify” other features of CBPP communities, such as their rules of governance. Thirdly, CBPP relies on multi-dimensional forms of value and motivations, and blockchain enables the emergence of multiple types of non-monetary interactions (sharing, voting, reputation). This brings about the question of the new potentials for channelling CBPP community governance.

Overall, we strongly believe that the combination of CBPP and blockchain provides an exciting field for exploration, in which the use of blockchain technologies is used to support the coordination efforts of these communities. This leads us to the research question: what affordances are generated by blockchain technologies which could facilitate the governance of CBPP communities?

Read the full paper here.

Photo by mikerastiello

Bringing Back The Lucas Plan

Continuing our coverage of the Lucas Plan as a precursor to Design Global Manufacture Local, this article explores “what the Lucas Plan could teach tech today”. By Felix Holtwell,  republished from Notes from Below.org

“We got to do something now, the company are not going to do anything and we got to protect ourselves”, proclaimed a shop steward at Lucas Aerospace when filmed for a 1978 Open University documentary.

He was explaining the rationale behind the so-called Alternative Corporate Plan, better known as the Lucas Plan. It was proposed by shop stewards in seventies England at the factories of Lucas Aerospace. To stave off pending layoffs, a shop steward committee established a plan that outlined a range of new, socially useful technologies for Lucas to build. With it, they fundamentally challenged the capitalist conception of technology design.

Essentially, they proposed that workers establish control over the design of technology. This bottom-up attempt at design, where not management and capitalists but workers themselves decided what to build, eventually failed. It was stopped by management, sidelined by struggling trade unions and the Labour Party, and eventually washed over by neoliberalism.

The seventies were a heady time: the preceding social-democratic, Fordist consensus ran into its own contradictions and died in the face of a triumphant neoliberalism. With it, experiments such as the Lucas Plan died as well. Today, however, neoliberalism is in crisis and to bury it we should look back to precisely those experiments that failed decades ago.

Technology’s neoliberal crisis

One part of the crisis of neoliberalism is the crisis of its technology. The software and information technology sector, often denoted as “tech”, is facing widespread criticism and attacks, with demands for reform stretching wide across society.

Even an establishment publication such as The New York Times now publishes a huge feature headlined The Case Against Google, about Google’s use of its near-monopoly on search to bury competitors’ sites.

Other controversies revolve around companies such as Facebook, Snapchat and Twitter making use of insights into human psychology to make people interact with their products more often and more intensely. This involves everything from gamifying social interaction through likes and making the notification button on Facebook red, to the ubiquity of unlimited vertical scrolling in mobile phone apps.

This has a number of consequences. Studies show that the presence of smartphones damages cognitive capacity, that Facebook use is negatively associated with well-being and that preteens with no access to screens for some time show better social skills than those with screen time.

In public discourse, this combines with fears that social media might harmfully impact political processes (basically Russia buying Facebook ads).

Or, as ex-Facebook executive Chamath Palihapitiya stated:

The short-term, dopamine-driven feedback loops we’ve created are destroying how society works, hearts, likes, thumbs-up. No civil discourse, no cooperation; misinformation, mistruth. And it’s not an American problem — this is not about Russians ads. This is a global problem.

Early employees and execs at Facebook and Google even created the Center for Humane Technology, which will propose more humanised tech design choices. Their website states:

Our world-class team of deeply concerned former tech insiders and CEOs intimately understands the culture, business incentives, design techniques, and organizational structures driving how technology hijacks our minds.

Part of this is the usual set of worries about intergenerational change, technology and centrism starting to fall apart, but there is a core truth in the worries about social media: the design of technology is political.

Technologies are designed by capitalist firms, and they do it for capitalist purposes, not for maximising human well-being. In the case of social media, it is designed to pull as much attention as possible into the platform and the ads shown on it.

As Chris Marcellino, a former Apple engineer who worked on the iPhone, has said:

It is not inherently evil to bring people back to your product, it’s capitalism.

The Lucas Plan

This brings us back to the Lucas Plan. At a time where the design of technology is under unprecedented scrutiny, a plan that pushes for workers’ control over it might be an answer.

The Plan was a truly remarkable experiment at the time. The University of Sussex’s Adrian Smith explains:

Over the course of a year they built up their Plan on the basis of the knowledge, skills, experience, and needs of workers and the communities in which they lived. The results included designs for over 150 alternative products. The Plan included market analyses and economic argument; proposed employee training that enhanced and broadened skills; and suggested re-organising work into less hierarchical teams that bridged divisions between tacit knowledge on the shop floor and theoretical engineering knowledge in design shops.

The Financial Times described the Lucas Plan as, “one of the most radical alternative plans ever drawn up by workers for their company” (Financial Times, 23 January 1976). It was nominated for the Nobel Peace Prize in 1979. The New Statesman claimed (1st July 1977) ‘The philosophical and technical implications of the plan are now being discussed on average of twenty five times a week in international media’.

The Lucas Plan eventually failed because of opposition from management, the trade union hierarchy and the government. Lucas Aerospace subsequently had to restructure and shed much of its workforce. Nevertheless, the plan provides great lessons for our current predicament.

Technology is political, yet its design is ultimately in the hands of capitalist firms. The Lucas Plan shows that workers, particularly in the more technically-oriented layers, have the skills and resources to design alternative technologies to those proposed by shareholders and management.

Workers’ control over the design of technology is thus a way to make it more ethical. Many of the problems we encounter with modern-day information technology are caused by unrestricted capitalist control over it, and workers’ control can be a necessary counterweight to push through human-centered design choices.

Composition

So how to build a modern-day Lucas Plan? Developing a plan reminiscent of the Lucas Plan for modern times needs, first and foremost, to be based on the present-day class composition of the workers in tech.

Tech, and more precisely sectors focused on information technology and software, have a notoriously dual composition. On the one hand there are the (generally) highly paid top-end workers, mostly composed of programmers and people employed in fields such as marketing and management. On the other hand there are large armies of underpaid workers employed in functions such as moderation, electronics assembly, warehouse logistics or catering.

The first group has very peculiar characteristics. They are often taken in by the classic Silicon Valley ideology consisting of “lean startup” thinking, social liberalism, and the idea that they are improving the world. Materially, they are also different from large sections of the working class. They earn extremely high wages, are often highly educated, possess specific technical skills, are given significant stock options in their employers’ companies and are highly mobile, notorious for changing jobs very easily.

Besides that, many also have an aspiration to start their own startup one day, in line with Silicon Valley ideology. This adds a certain petty-bourgeois flavour to their composition.

Yet these workers also have their grievances. They are often employed in soul-crushing jobs at large multinationals, some of which (for example Amazon or Tesla) have the reputation of making them work as much as they can and then spitting them out, often in a state of burn-out.

On the other hand, there are subaltern sections of tech workers. These people moderate offensive content on Facebook, stack Amazon boxes in their “fulfillment centres”, drive people around on Uber and Lyft, assemble electronics such as iPhones or serve lunches at Silicon Valley corporate “campuses.”

These workers are generally underpaid, but conduct the drudging work that makes tech multinationals run. Without Facebook moderators watching horrible content all day, the platform would be flooded by it (and Facebook would have no one to train their AI on); without the fleet of elderly workers manning Amazon warehouses, packages would not get delivered; without the staff on Google and Facebook campuses, they would look a lot less utopian.

This section of workers can also be highly mobile in regards to jobs, but less from possibility and more from precarity. They also have fewer ties to the tech sector specifically— whether they work at the warehouses of a self-styled tech company like Blue Apron or the warehouses of any other company matters less for them than it does for programmers.

This bifurcation holds real problems for a modern-day Lucas Plan. If we simply move the control over the design of technology from management and shareholders to a tech worker aristocracy, it might not solve so much.

Yet there are some hopeful tendencies we can build on. Tech workers in Silicon Valley have started to bridge the divide that separates them, with organisations like the Tech Workers Coalition starting to help cafeteria workers organise.

A Guardian piece on their organising even observes some budding solidarity between these two groups arising:

Khaleed is proud of the work he does, and deeply grateful for the union. At first, he found it difficult to talk about his anxieties with coworkers at the roundtable. But he came to find it comforting: “We have solidarity, now.” A cost-of-living raise would mean more security, and a better chance of staying in the apartment where he lives. Khaleed deeply wants to be able to live near his son, and for his son to continue going to the good public school he now attends.

When I asked Khaleed how he felt about the two TWC Facebook employees he had met with, his voice faltered. “I just hope that someday I can help them like they helped me.” When I told one of the engineers, he smiled, and quoted the IWW slogan. “That’s the goal, right – one big union?”

This is precisely the foundation on which a modern-day Lucas Plan should be built: solidarity between both groups of tech workers and the inclusion of both. The Lucas Plan of the 1970s understood this. The main authors of the Plan were predominantly highly-educated engineers, but the people making the products were not. Hence they tried to bridge this gap with proposals that would humanise working conditions as well as technology, and by including common workers.

One shop steward, an engineer, declared at a public meeting, after showing how company plans dictated how long bathroom breaks could be:

We say that that form of technology is unacceptable, and if that is the only way to make that technology we should be questioning whether we want to make those kinds of products in that way at all.

Furthermore, the humanisation of work inside tech companies, and not just the end product of it, would also positively impact the work of the core tech workers. In essence, it would serve as the glue to connect both groups.

A Lucas Plan today would thus analyse the composition of tech workers on both sides of the divide, include both of them, and mobilise them behind a program of humanised labour for themselves and humanised technology for the rest of society.

How to do it?

The practical implementation of workers' control over design decisions can build on already existing policies and experiences: mainly reformist co-determination schemes (where trade-union officials are given seats on corporate boards) or direct-action oriented tactics (where management power is challenged through workplace protest and where workers establish a degree of workplace autonomy).

The choice between these tactics would need to be based on local working-class experience. In some contexts co-determination would make more sense; in others direct action would take precedence. In most cases a combination of the two will likely be required.

The first option is a moderate one. Workers' representation on the boards of companies has been common in industrialised economies, particularly in continental Europe. Even Conservative PM Theresa May proposed implementing it in 2017, before making a U-turn after business lobbying.

As TUC general secretary Frances O’Grady has stated:

Workers on company boards is hardly a radical idea. They’re the norm across most of Europe – including countries with similar single-tier board structures to the UK, such as Sweden. European countries with better worker participation tend to have higher investment in research and development, higher employment rates and lower levels of inequality and poverty.

Expanding the remit of these boards to include decisions about what products to produce and how to design them, in technologically-oriented companies (both software firms and more traditional industrial companies), would radicalise the otherwise non-radical idea of worker representation on company boards.

A second, more radical option is the establishment of workplace control through organising. A good example of this is the US longshoremen, who at certain points in their history controlled their own work.

As Peter Cole writes in Jacobin:

West Coast longshoremen were “lords” because they earned high wages by blue-collar standards, were paid overtime starting with the seventh hour of a shift, and had protections against laboring under dangerous conditions. They even had the right to stop working at any time if “health and safety” were imperiled. Essentially, to the great consternation of employers, the union controlled much of the workplace.

The hiring hall was the day-to-day locus of union power. Controlled by each local’s elected leadership, the hall decided who would and wouldn’t work. Crucially, under the radically egalitarian policy of “low man out,” the first workers to be dispatched were those who had worked the least in that quarter of the year.

Imagine a programmer at Facebook refusing to make a button red because research shows it would not increase the well-being of users, and being backed up in this decision by a system of workplace solidarity that stretches throughout the company.

From bees to architects

Mike Cooley, one of the key authors behind the Lucas Plan, was fired from his job in 1981 as retaliation for union organising. Afterwards, he became a leading writer on humanising technology. He also worked with the Greater London Council when, during the height of Thatcherism, it was controlled by the Labour left, and where current Shadow Chancellor John McDonnell earned his spurs.

Just as McDonnell bridges the earlier, failed resistance to neoliberalism and our current attempts to replace it, Cooley is an inspiration for post-neoliberal technology. In a 1980 article he concluded:

The alternatives are stark. Either we will have a future in which human beings are reduced to a sort of bee-like behaviour, reacting to the systems and equipment specified for them; or we will have a future in which masses of people, conscious of their skills and abilities in both a political and a technical sense, decide that they are going to be the architects of a new form of technological development which will enhance human creativity and mean more freedom of choice and expression rather than less. The truth is, we shall have to make the profound decision as to whether we intend to act as architects or behave like bees.

These words ring true today more than ever.


About the author: Felix Holtwell. In real life, Felix is a tech journalist. After dark, however, he edits the Fully Automated Luxury Communism newsletter, which covers the interactions between technology and the left. You can follow him on Twitter at @AutomatedFully.

Photo by OuiShare

The post Bringing Back The Lucas Plan appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/bringing-back-the-lucas-plan/2018/05/31/feed 0 71202
The Lucas Plan: What can it tell us about democratising technology today? https://blog.p2pfoundation.net/the-lucas-plan-what-can-it-tell-us-about-democratising-technology-today/2018/05/24 https://blog.p2pfoundation.net/the-lucas-plan-what-can-it-tell-us-about-democratising-technology-today/2018/05/24#respond Thu, 24 May 2018 07:00:00 +0000 https://blog.p2pfoundation.net/?p=71090 Thirty-eight years ago, a movement for ‘socially useful production’ pioneered practical approaches for more democratic technology development.  It was in January 1976 that workers at Lucas Aerospace published an Alternative Plan for the future of their corporation. It was a novel response to management announcements that thousands of manufacturing jobs were to be cut in... Continue reading

The post The Lucas Plan: What can it tell us about democratising technology today? appeared first on P2P Foundation.

]]>
Thirty-eight years ago, a movement for 'socially useful production' pioneered practical approaches for more democratic technology development.

It was in January 1976 that workers at Lucas Aerospace published an Alternative Plan for the future of their corporation. It was a novel response to management announcements that thousands of manufacturing jobs were to be cut in the face of industrial restructuring, international competition, and technological change. Instead of redundancy, workers argued their right to socially useful production.

Around half of Lucas' output supplied military contracts. Since this depended upon public funds, as did many of the firm's civilian products, workers argued that state support would be better put to developing more socially useful products.

Rejected by management and government, the Plan nevertheless catalysed ideas for the democratisation of technological development in society. In promoting their arguments, shop stewards at Lucas attracted workers from other sectors, community activists, radical scientists, environmentalists, and the Left. The Plan became symbolic for a movement of activists committed to innovation for purposes of social use over private profit.

Of course, the world is different now. The spaces and opportunities for democratising technology have altered, and so too have the forms it might take. Nevertheless, remembering older initiatives casts enduring issues about the direction of technological development in society in a different and informative light: issues relevant today in debates as varied as industrial policy, green and solidarity economies, commons-based peer-production, and grassroots fabrication in Hackerspaces and FabLabs. The movement for socially useful production prompts questions about connecting tacit knowledge and participatory prototyping to the political economy of technology development.

In drawing up their Plan, shop stewards at Lucas turned initially to researchers at institutes throughout the UK. They received three replies. Undeterred, they consulted their own members. Over the course of a year they built up their Plan on the basis of the knowledge, skills, experience, and needs of workers and the communities in which they lived. The results included designs for over 150 alternative products. The Plan included market analyses and economic argument; proposed employee training that enhanced and broadened skills; and suggested re-organising work into less hierarchical teams that bridged divisions between tacit knowledge on the shop floor and theoretical engineering knowledge in design shops.

The Financial Times described the Lucas Plan as, ‘one of the most radical alternative plans ever drawn up by workers for their company’ (Financial Times, 23 January 1976). It was nominated for the Nobel Peace Prize in 1979. The New Statesman claimed (1st July 1977) ‘The philosophical and technical implications of the plan are now being discussed on average of twenty five times a week in international media’. Despite this attention, shop stewards suspected (correctly) that the Plan in isolation would convince neither management nor government. Even leaders in the trade union establishment were reluctant to back this grassroots initiative; wary its precedent would challenge privileged demarcations and hierarchies.

In the meantime, and as a lever to exert pressure, shop stewards embarked upon a broader political campaign for the right of all people to socially useful production. Mike Cooley, one of the leaders, said they wanted to 'inflame the imaginations of others' and 'demonstrate in a very practical and direct way the creative power of "ordinary people"'. Lucas workers organised road-shows, teach-ins, and created a Centre for Alternative Industrial and Technological Systems (CAITS) at North-East London Polytechnic. Design prototypes were displayed at public events around the country. TV programmes were made. CAITS helped workers in other sectors develop their own Plans. Activists connected with sympathetic movements in Scandinavia and Germany.

The movement that emerged challenged establishment claims that technology progressed autonomously of society, and that people inevitably had to adapt to the tools offered up by science. Activists argued that knowledge and technology were shaped by social choices over their development, and that those choices needed to become more democratic. Activism cultivated spaces for participatory design; promoted human-centred technology; argued for arms conversion to environmental and social technologies; and sought more control for workers, communities and users in production processes.

Material possibilities were helped when Londoners voted the Left into power at the Greater London Council (GLC) in 1981. They introduced an Industrial Strategy committed to socially useful production. Mike Cooley, sacked from Lucas for his activism, was appointed Technology Director of the GLC’s new Greater London Enterprise Board (GLEB). A series of Technology Networks were created. Anticipating FabLabs today, these community-based workshops shared machine tools, access to technical advice, and prototyping services, and were open for anyone to develop socially useful prototypes. Other Left councils opened similar spaces in the UK.

Technology Networks aimed to combine the ‘untapped skill, creativity and sheer enthusiasm’ in local communities with the ‘reservoir of scientific and innovation knowledge’ in London’s polytechnics. Hundreds of designs and prototypes were developed, including electric bicycles, small-scale wind turbines, energy conservation services, disability devices, re-manufactured products, children’s play equipment, community computer networks, and a women’s IT co-operative. Designs were registered in an open access product bank. GLEB helped co-operatives and social enterprises develop these prototypes into businesses.

Recalling the movement now, what is striking is the importance activists attached to practical engagements in technology development as part of their politics. The movement emphasised tacit knowledge, craft skill, and learning by doing through face-to-face collaboration in material projects. Practical activity was cast as ‘technological agit prop’ for mobilising alliances and debate. Some participants found such politicisation unwelcome. But in opening prototyping in this way, activists tried to bring more varied participation into debates, and enable wider, more practical forms of expression meaningful to different audiences, compared to speeches and texts evoking, say, a revolutionary agent, socially entrepreneurial state, or deliberative governance framework.

Similarly today, Hackerspaces and FabLabs involve people working materially on shared technology projects. Social media opens these engagements in distributed and interconnected forms. Web platforms and versatile digital fabrication technologies allow people to share open-hardware designs and contribute to an emerging knowledge commons. The sheer fun participants find in making things is imbued by others with excited claims for the democratisation of manufacturing and commons-based peer production. Grassroots digital fabrication (pdf) rekindles ideas about direct participation in technology development and use.

Wherever and whenever people are given the encouragement and opportunity to develop their ideas into material activity, creativity can and does flourish. However, remembering the Lucas Plan should make us pause and consider two issues. First, the importance placed on tacit knowledge and skills. Skilful design in social media can assist but not completely substitute for face-to-face, hand-by-hand activity. Second, for the earlier generation of activists, collaborative workshops and projects were also about crafting solidarities. Project-centred discussion and activity was linked to debate and mobilisation around wider issues.

Workers at Lucas Industries, Shaftmoor Lane branch, Birmingham, 1970. Photograph: Lucas Memories website, lucasmemories.co.uk.

With hindsight, the movement was swimming against the political and economic tide, but at the time things looked less clear-cut. The Thatcher government eventually abolished the GLC in 1986. Unionised industries declined, and union power was curtailed through legislation. In overseeing this, Thatcherism knowingly cut material and political resources for alternatives. In doing so, the diversity so important to innovation diminished. The alliances struck, the spaces created and the initiatives generated were swept aside as concern for social purpose became overwhelmed by neoliberal ideology. The social shaping of technology was left to market decision.

However, even though activism dissipated, its ideas did not disappear. Some practices had wider influence, such as in participatory design, albeit in forms appropriated to the needs of capital rather than the intended interests of labour. Historical reflection thus prompts a third issue, which is how power relations matter and need to be addressed in democratic technology development. When making prototypes becomes accessible and fun, people can exercise a power to do innovation. But this can still struggle to exercise power over the agendas of elite technology institutions, such as which innovations attract investment for production and marketing, and under what social criteria. Alternative, more democratic spaces for technology development and debate nevertheless remain possible and necessary.

Like others before and since, the Lucas workers insisted upon a democratic development of technology. Their practical, material initiatives momentarily widened the range of ideas, debates and possibilities – some of which persist. Perhaps their argument was the most socially useful product left to us?


Adrian Smith researches the politics of technology, society and sustainability at SPRU and the STEPS Centre at the University of Sussex. He is on Twitter @smithadrianpaul. A longer paper on the Lucas Plan is available at the STEPS site.

Originally published in The Guardian.

Photo by Daniel Kulinski

The post The Lucas Plan: What can it tell us about democratising technology today? appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/the-lucas-plan-what-can-it-tell-us-about-democratising-technology-today/2018/05/24/feed 0 71090
Why do we need a contribution accounting system? https://blog.p2pfoundation.net/need-contribution-accounting-system/2018/01/19 https://blog.p2pfoundation.net/need-contribution-accounting-system/2018/01/19#respond Fri, 19 Jan 2018 08:00:00 +0000 https://blog.p2pfoundation.net/?p=69278 This article was first published on 3 January 2014 and last modified on 8 January 2018 ……………………………………………………………. NOTE: Before 2017 SENSORICA used the expression ”value accounting system”. The current expression in use is ”contribution accounting system”. See more on the OVN wiki. The origin of this modification is a redefinition of value, inspired by Tibi’s essay ”Scale... Continue reading

The post Why do we need a contribution accounting system? appeared first on P2P Foundation.

]]>
This article was first published on 3 January 2014 and last modified on 8 January 2018
…………………………………………………………….
NOTE: Before 2017 SENSORICA used the expression “value accounting system”. The current expression in use is “contribution accounting system”. See more on the OVN wiki. The origin of this modification is a redefinition of value, inspired by Tibi’s essay “Scale of social structures”.
…………………………………………………………….

With the advent of the Internet and the development of new digital technologies, the economy is following a trend of decentralization. The most innovative environments are open source communities and peer production is on the rise. The crowd innovates and produces. But the crowd is organized in loose networks, it is geographically dispersed, and contributions to projects follow a long tail distribution. What are the possible reward mechanisms in this new economy?

Our thesis is that in order to reward all the participants in p2p economic activity, and thus to incentivise contributions and make participation sustainable for everyone, we need to do contribution accounting: record everyone’s contribution, evaluate these contributions, and calculate every participant’s fair share. This method for redistribution of benefits must be established at the beginning of the economic process, in a transparent way. It constitutes a contract among participants, and it allows them to estimate their rewards in relation to their efforts. We call this the contribution accounting system.

For the rest of this article we will try to explain why a contribution accounting system is needed in a more decentralized economy, and unavoidable in a p2p economy.

Contribution accounting and exchanges

First, we need to make a distinction between a contribution accounting system and an exchange system. Suppose that we have 3 individuals picking cherries using one basket. The contribution accounting system keeps track of how many cherries everyone puts in the basket, so that when they sell the basket on the market they can decide to redistribute the revenue in proportion to everyone’s contribution. It describes how contributions from multiple individuals amalgamate into a product during a co-production process.

Once a product is created, i.e. once the basket is full and ready to go to market, it can be exchanged using an exchange system: barter, currency, etc.

The contribution accounting system is not a currency, and not a barter system. It doesn’t refer to an exchange between our 3 individuals who are picking cherries, or between them and another entity like a company. They are not getting paid a salary in exchange for their work. They are collaborating: they all add cherries into the same basket, which is their product-to-be. The exchange might occur at a later point in time, once their basket is full and ready to go to the market. Meanwhile, they all share the risk of having their cherries eaten by birds, or of not getting a good price for their basket.
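To make this distinction concrete, here is a minimal sketch in Python of the cherry-basket example. The numbers and function names are invented for illustration and are not actual SENSORICA code: contributions are logged while the basket is being filled, and only when the basket is eventually sold is the revenue split in proportion to what was recorded.

    # Minimal sketch of the cherry-basket example (illustrative only).
    # Contributions are logged while the basket is filled; revenue is
    # split in proportion to recorded contributions once a sale happens.

    contributions = {}  # participant -> cherries added to the shared basket

    def record_contribution(participant, cherries):
        """Log cherries a participant adds to the common basket."""
        contributions[participant] = contributions.get(participant, 0) + cherries

    def distribute_revenue(sale_price):
        """Split sale revenue in proportion to everyone's recorded share."""
        total = sum(contributions.values())
        if total == 0:
            return {}
        return {p: sale_price * c / total for p, c in contributions.items()}

    # Three pickers share one basket; the exchange (the sale) happens later.
    record_contribution("alice", 30)
    record_contribution("bob", 50)
    record_contribution("carol", 20)
    print(distribute_revenue(100.0))  # {'alice': 30.0, 'bob': 50.0, 'carol': 20.0}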

Production processes

A production process that requires more than one individual can be based on the following 3 arrangements, or on a combination of them:

  • stigmergic coordination – Participants don’t have aligned goals, don’t maintain a relationship other than being contributors to the same process. Ex. this is how Wikipedia is built.
  • cooperation – The goals of participants are not necessarily aligned. Ex. in a corporation employees and business owners usually have divergent interests and goals.
  • collaboration – Requires a large degree of alignment in goals. Ex. a group of individuals climbing a mountain together.

The traditional capitalist economy is mostly about cooperation, which doesn’t require a tight alignment of interests and goals. Production is sustained through an exchange process, where workers exchange the time they spend on different tasks for wages. The exchange process transfers risk from workers to the owners of capital, but at the same time, the workers are stripped of their rights to the output of their labor. Workers cooperate (despite some inconveniences and misalignment in interests and goals) with the owners of capital in production processes because there exists an economic dependency between the two groups. Workers need money, which is by far the predominant means to acquire basic necessities. On the other side, the owners of capital need labor to generate more wealth. This economic dependency is not symmetrical and makes the system prone to abuse, which explains the existence (and necessity) of unions to counterbalance the tendency for exploitation.

In peer production we have a blend of the 3 arrangements mentioned above, mostly collaboration and some stigmergic coordination. In general, no one works for anyone else. Everyone involved is a peer, an affiliate of a peer production network. The p2p culture prescribes that the output of a collaborative and participatory process should not be owned or controlled by anyone in particular, but shared among participants in a fair way. Immaterial artifacts that are produced in such a way (such as software or hardware designs) are usually released as commons (they are openly shared). Material goods can be exchanged on the market, and the revenue generated is shared among all the participants. Service-based models also exist, where services are exchanged on the market for some form of payment, which is redistributed to everyone involved in providing the service. A good example of a service-based p2p model is the Bitcoin network. If we focus only on the mining aspect, miners form an open network of peer participants who collectively maintain the hardware infrastructure of the entire network. Miners are rewarded in proportion to the computing power that they provide to the network.

The normal and the long tail modes of production

normal mode of production

In the traditional capitalist economy wages should be regulated by the free labor market, if we set aside all sorts of mechanisms through which this market can be biased (labor unions and governmental intervention included). The market is responsible for the difference in salary between an engineer and a clerk. The notion of a job implies that a salary is determined and agreed upon before the employee starts working (with the possibility of modifying the salary based on performance). Since the amount of $ per hour of work is pre-established, the capital owner needs to make sure that the employee produces enough during the work hours. Therefore, a new role is needed within the organization to guarantee this, the beloved project manager. Traditional organizations spend a lot of energy doing time management, because usually the interest of the worker is not perfectly aligned with the interest of the capital owner (see cooperative production above). Classical organizations operate on the normal mode of production (from the “normal curve” or “bell curve”), where the number of workers is minimized, and the majority of employees in a category of roles produce almost the same amount. Very few workers produce less than the norm, because they are eliminated (i.e. fired). Very few produce more, because there are no incentives to do so, the association with the mission of the traditional enterprise is weak, the sense of belonging is usually low (usually fabricated by the HR department), the sense of ownership is almost absent, etc.

long tail mode of production

The situation is very different in a peer production environment, which is open to participation, is decentralized in terms of allocation of resources, and uses a horizontal governance system.

In peer production, we see a long tail distribution of contributions, which means that a very large number of individuals are involved in production, only a very small percentage of those contribute a lot, the great majority of them contribute very little, and most of the production is done by those who make small contributions. A prearrangement on revenue is impossible in this context. First, because the production process is very dynamic and relations of production cannot be contract-based. Second, the process involves a great number of individuals distributed all over the planet, so it is impossible to do time management. Moreover, no one can force anyone else to work more. In this mode of production we need to evaluate rewards after the fact, based on deliverables or based on the type of activity and its potential to increase the probability of production of valuable products. A system is needed to account for everyone’s contribution, to evaluate these contributions and turn them into rewards. We call this an access to benefits algorithm.

In some sense, the access to benefits algorithm is a distributed solution to time management, which can be applied to large-scale and very dynamic peer production processes. It embodies positive and negative incentives, and can contain parameters to influence individual participation and the quality of contributions; it can regulate behavior, and it gamifies production. For example, a reputation system can be tied to the access to benefits algorithm: a higher reputation results in a higher reward, all other things being equal, and vice versa. Moreover, it can also contain parameters to incentivise periodic and frequent contributions, and to prioritize important processes.
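As a rough illustration of what such an access to benefits algorithm could look like, the sketch below weights each logged contribution by its activity type and by the contributor’s reputation before normalising everything into fractional shares of the benefits. The activity types, weights and reputation values are assumptions made for this example; they do not reproduce SENSORICA’s actual algorithm.

    # Illustrative access to benefits algorithm (assumed weights, not SENSORICA's).
    # Each contribution is scored by activity type and scaled by the contributor's
    # reputation; scores are then normalised into fractional benefit shares.

    TYPE_WEIGHTS = {"design": 3.0, "code": 2.5, "documentation": 1.5, "outreach": 1.0}

    def score(contribution, reputation):
        """Score one contribution: hours worked, weighted by type and reputation."""
        weight = TYPE_WEIGHTS.get(contribution["type"], 1.0)
        return contribution["hours"] * weight * reputation

    def benefit_shares(contributions, reputations):
        """Turn a log of contributions into each participant's share of benefits."""
        totals = {}
        for c in contributions:
            person = c["who"]
            totals[person] = totals.get(person, 0.0) + score(c, reputations.get(person, 1.0))
        grand_total = sum(totals.values())
        return {p: t / grand_total for p, t in totals.items()} if grand_total else {}

    log = [
        {"who": "ana", "type": "design", "hours": 10},
        {"who": "bo", "type": "code", "hours": 40},
        {"who": "chen", "type": "outreach", "hours": 5},
    ]
    print(benefit_shares(log, reputations={"ana": 1.2, "bo": 1.0, "chen": 0.9}))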

Contribution accounting and network resource planning

The long tail mode of production needs a contribution accounting system in order to allow fair redistribution of rewards. It allows participants to record contributions of various types and it uses an access to benefits algorithm to turn them into benefits. But this is only the first part of the story.

In the OVN model contributions are attributed to the creation of resources, which can be documents, designs, parts or full prototypes, etc. (some contributions go into infrastructure or community development and lack a clear resource or deliverable). From the resource level, contributions aggregate at the project level. A project is an open venture, or a business unit. It is the smallest unit within the OVN that can generate all sorts of benefits, including revenue.

The fact that contributions can be attributed directly to resources (not projects) is very important for commons-based peer production (CBPP), which builds on open source. On Github, pieces of open source software (OSS) can be picked up by someone and remixed into something else. Open source hardware (OSHW) development follows the same path, i.e. designs (mechanical, electronic, optical) are forked and remixed. This ability to fork and remix parts of more complex systems makes open source development a very efficient process. This explains why modularity and interoperability are very important properties of OSS and OSHW. If rewards are envisioned for the work done, CBPP needs to find a way to account for contributions at the resource level and to track the way resources are put together in different contexts (projects are considered contexts). If contributions are only recorded at the project level, projects become silos of economic activity with a reduced possibility of benefits flows between them.

Taking into consideration the structure of OSS development, the solution to the benefit/reward redistribution problem is to attach some information to each individual resource created that allows its reevaluation later, when it gets remixed and integrated into larger systems, in other contexts. The metrics of evaluation can vary depending on the context. This is the role of the network resource planning (NRP) system, which allows benefits/rewards to propagate upwards through value streams, so that the creation of a single resource can generate rewards from many different sources (many projects), depending on how many successful projects are using it.
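One way to picture how such an NRP could let rewards propagate is sketched below: every project that uses a resource passes an agreed fraction of its revenue to that resource, and the resource redistributes whatever it receives to its original contributors, so the same resource earns from every successful project that reuses it. The class names and equity fractions are assumptions made for illustration, not the data model of any existing NRP software.

    # Sketch of resource-level reward propagation (assumed data model, not
    # the actual NRP software). A resource accumulates rewards from every
    # project that uses it, then redistributes them to its contributors.

    class Resource:
        def __init__(self, name, contributor_shares):
            self.name = name
            self.contributor_shares = contributor_shares  # person -> fraction, sums to 1

        def payout(self, amount):
            """Redistribute an incoming reward to the resource's contributors."""
            return {p: amount * share for p, share in self.contributor_shares.items()}

    class Project:
        def __init__(self, name, resources_used):
            self.name = name
            self.resources_used = resources_used  # Resource -> equity fraction granted

        def distribute_revenue(self, revenue):
            """Send each used resource its agreed fraction of project revenue."""
            rewards = {}
            for resource, equity in self.resources_used.items():
                for person, amount in resource.payout(revenue * equity).items():
                    rewards[person] = rewards.get(person, 0.0) + amount
            return rewards

    sensor_driver = Resource("open sensor driver", {"dana": 0.75, "eli": 0.25})
    # The same resource is reused by two projects, so it earns from both.
    project_a = Project("lab instrument", {sensor_driver: 0.25})
    project_b = Project("greenhouse monitor", {sensor_driver: 0.125})
    print(project_a.distribute_revenue(1000.0))  # {'dana': 187.5, 'eli': 62.5}
    print(project_b.distribute_revenue(2000.0))  # {'dana': 187.5, 'eli': 62.5}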

This goes even further, because this same NRP also provides a growth mechanism for CBPP networks. To illustrate this, imagine that members of a CBPP community decide to attribute equity to resources that are created by other communities. (Example: SENSORICA decides to integrate a piece of open source hardware developed by another OSHW community). First, why would SENSORICA affiliates decide to diminish their revenue by giving equity to other groups when they can just copy the open source design? The economic rationale is to reduce efforts required to internalize new capacity (new knowledge and know how around that piece of open hardware) and to increase the speed of execution (a first to market advantage). CBPP networks grow by affiliation. By offering equity to other CBPP communities they are essentially building bridges to innovate faster and improve production processes. This is the higher-level structure of networks-of-networks (see the Open Alliance).

We believe that in order to sustain the CBPP we need to create infrastructure that allows attribution of value-related properties to individual resources, to allow reevaluation of these individual resources in context, and to facilitate the formation of networks-of-networks that preserve the individuality of every community part of it, but at the same time brings them together on the same economic platform.

Contribution accounting in transition models

As the economy transitions to a networked state, existing organizations are trying to adapt. We already see traditional corporations going from in-house R&D to outsourcing R&D, and more recently to crowdsourcing R&D. This movement is forced by the need to innovate fast, and by the fact that open source lowers the price to a point where traditional high-tech corporations can be put out of business. Crowdsourcing R&D means utilizing all sorts of schemes to attract the participation of the crowd into innovation processes that are sponsored by these corporations. In early crowdsourcing practices corporations tried to control the innovation by signing non-disclosure agreements with the participants. Crowdsourcing platforms were created to match corporate projects with skilled individuals. The practice was competitive, i.e. the company would choose a winner among different proposals, and usually the winner was rewarded with money. This practice gradually became more open, since the first iterations of crowdsourcing platforms were not very successful in attracting highly skilled individuals. In order to attract innovation, and in order to grow open innovation communities around them, corporations need to think seriously about the reward mechanisms they put in place. It is not so difficult to understand why the early crowdsourcing platforms were not very good attractors. I would not compete in a call by a company to design something for a few bucks, with a good probability of losing the race, knowing that the company will monopolize the work and probably make a lot of profits on it. The trend is to go from closed crowdsourcing to truly open source innovation, which must be accompanied by a broadening of the reward system. Since companies are going to deal with the crowd more and more, they need a contribution accounting system to account for contributions. See this presentation by SENSORICA making the distinction between competitive crowdsourcing and collaborative crowdsourcing.

In parallel to the adaptation of traditional companies we also see the creation of hybrid organizations and models. For example, in the realm of hardware, we have the emergence of ecosystems like Arduino and 3D Robotics/DIY Drones. They are composed of a traditional for-profit organization surrounded by an open source community. This post describes the situation. The difference here is that in most cases the open source community pre-existed the traditional for-profit, the latter being created to manufacture and to distribute the products that are based on the innovation created by the open community. These hybrid models, the ones that are sustainable and successful, maintain a precarious equilibrium between the profit motive that can arise within the centralized traditional organization and the open, sharing culture within the open innovation community. In some cases, this equilibrium is not maintained and the synergy between the two entities disappears, destroying the ecosystem. This was the case of Makerbot and the RepRap community, well captured in the Netflix documentary Print the Legend.

Photo by Muffet

The post Why do we need a contribution accounting system? appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/need-contribution-accounting-system/2018/01/19/feed 0 69278
Reimagine, don’t seize, the means of production https://blog.p2pfoundation.net/reimagine-dont-seize-the-means-of-production/2018/01/16 https://blog.p2pfoundation.net/reimagine-dont-seize-the-means-of-production/2018/01/16#respond Tue, 16 Jan 2018 08:00:00 +0000 https://blog.p2pfoundation.net/?p=69249 Written by Stacco Troncoso and Ann Marie Utratel: One of the most difficult systems to reimagine is global manufacturing. If we are producing offshore and at scale, ravaging the planet for short-term profits, what are the available alternatives? A movement combining digital and physical production points toward a new possibility: Produce within our communities, democratically and... Continue reading

The post Reimagine, don’t seize, the means of production appeared first on P2P Foundation.

]]>
Written by Stacco Troncoso and Ann Marie Utratel: One of the most difficult systems to reimagine is global manufacturing. If we are producing offshore and at scale, ravaging the planet for short-term profits, what are the available alternatives? A movement combining digital and physical production points toward a new possibility: Produce within our communities, democratically and with respect for nature and its carrying capacity.

You may not know it by its admittedly awkward name, but a process known as commons-based peer production (CBPP) supports much of our online life. CBPP describes internet-enabled, peer-to-peer infrastructures that allow people to communicate, self-organize and produce together. The value of what is produced is not extracted for private profit, but fed back into a knowledge, design and software commons — resources which are managed by a community, according to the terms set by that community. Wikipedia, WordPress, the Firefox browser and the Apache HTTP web server are some of the best-known examples.

If the first wave of commons-based peer production was mainly created digitally and shared online, we now see a second wave spreading back into physical space. Commoning, as a longstanding human practice that precedes commons-based peer production, naturally began in the material world. It eventually expanded into virtual space and now returns to the physical sphere, where the digital realm becomes a partner in new forms of resource stewardship, production and distribution. In other words, the commons has come full circle, from the natural commons described by Elinor Ostrom, through commons-based peer production in digital communities, to distributed physical manufacturing.

This recent process of bringing peer production to the physical world is called Design Global, Manufacture Local (DGML). Here’s how it works: A design is created using the digital commons of knowledge, software and design, and then produced using local manufacturing and automation technologies. These can include three-dimensional printers, computer numerical control (CNC) machines or even low-tech crafts tools and appropriate technology — often in combination. The formula is: What is “light” (knowledge) is global, and what is “heavy” (physical manufacture) is local. DGML and its unique characteristics help open new, sustainable and inclusive forms of production and consumption.

Imagine a process where designs are co-created, reviewed and refined as part of a global digital commons (i.e. a universally available shared resource). Meanwhile, the actual manufacturing takes place locally, often through shared infrastructures and with local biophysical conditions in mind. The process of making something together as a community creates new ideas and innovations which can feed back into their originating design commons. This cycle describes a radically democratized way to make objects with an increased capacity for innovation and resilience.

Current examples of the DGML approach include WikiHouse, a nonprofit foundation sharing templates for modular housing; OpenBionics, creating three-dimensional printed medical prosthetics which cost a fraction (0.1 to 1 percent) of the price of standard prosthetics; L’Atelier Paysan, an open source cooperative fostering technological sovereignty for small- and medium-scale ecological agriculture; Farm Hack, a farmer-driven community network sharing open source know-how amongst do-it-yourself agricultural tech innovators; and Habibi.Works, an intercultural makerspace in northern Greece where Syrian, Iraqi and Afghan refugees develop DGML projects in a communal atmosphere.

This ecologically viable mode of production has three key patterns:

1) Nonprofit: Objects are designed for optimum usability, not to create tension between supply and demand. This eliminates planned obsolescence or induced consumerism while promoting modular, durable and practical applications.

2) Local: Physical manufacturing is done in community workshops, with bespoke production adapted to local needs. These are economies of scope, not of scale. On-demand local production bypasses the need for huge capital outlays and the subsequent necessity to “keep the machines running” night and day to satisfy the expectations of investors with over-capacity and over-production. Transportation costs — whether financial or ecological — are eradicated, while maintenance, fabrication of spare parts and waste treatment are handled locally.

3) Shared: Idle resources are identified and shared by the community. These can be immaterial and shared globally (blueprints, collaboration protocols, software, documentation, legal forms), or material and managed locally (community spaces, tools and machinery, hackathons). There are no costly patents and no intellectual property regimes to enforce false scarcity. Power is distributed and shared autonomously, creating a “sharing economy” worthy of the name.

To preserve and restore a livable planet, it’s not enough to seize the existing means of production; in fact, it may even not be necessary or recommendable. Rather, we need to reinvent the means of production; to radically  reimagine the way we produce. We must also decide together what not to produce, and when to direct our productive capacities toward ecologically restorative work and the stewardship of natural systems. This includes necessary endeavors like permaculture, landscape restoration, regenerative design and rewilding.

These empowering efforts will remain marginal to the larger economy, however, in the absence of sustainable, sufficient ways of obtaining funding to liberate time for the contributors. Equally problematic is the possibility of the capture and enclosure of the open design commons, to be converted into profit-driven, peer-to-peer hybrids that perpetuate the scarcity mindset of capital. Don’t assume that global corporations or financial institutions are not hip to this revolution; in fact, many companies seem to be more interested in controlling the right to produce through intellectual property and patents than in taking on any of the costs of production themselves. (Silicon Valley-led “sharing” economy, anyone?)

To avoid this, productive communities must position themselves ahead of the curve by creating cooperative-based livelihood vehicles and solidarity mechanisms to sustain themselves and the invaluable work they perform. Livelihood strategies like Platform and Open Cooperativism lead the way in emancipating this movement of globally conversant yet locally grounded producers and ecosystem restorers. At the same time, locally based yet globally federated political movements — such as the recent surge of international, multi-constituent municipalist political platforms — can spur the conditions for highly participative and democratic “design global, manufacture local” programs.

We can either produce with communities and as part of nature or not. Let’s make the right choice.


The post Reimagine, don’t seize, the means of production appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/reimagine-dont-seize-the-means-of-production/2018/01/16/feed 0 69249
Essay of the Day: Self-Organisation in Commons-Based Peer Production https://blog.p2pfoundation.net/essay-of-the-day-self-organisation-in-commons-based-peer-production/2017/12/19 https://blog.p2pfoundation.net/essay-of-the-day-self-organisation-in-commons-based-peer-production/2017/12/19#respond Tue, 19 Dec 2017 09:00:35 +0000 https://blog.p2pfoundation.net/?p=68909 A PhD Thesis: Self-organisation in Commons-Based Peer Production (Drupal: “the drop is always moving”) by David Rozas. University of Surrey, Department of Sociology, Centre for Research in Social Simulation, 2017. Abstract “Commons-Based Peer Production (CBPP) is a new model of socio-economic production in which groups of individuals cooperate with each other without a traditional hierarchical... Continue reading

The post Essay of the Day: Self-Organisation in Commons-Based Peer Production appeared first on P2P Foundation.

]]>
A PhD Thesis: Self-organisation in Commons-Based Peer Production (Drupal: “the drop is always moving”) by David Rozas. University of Surrey, Department of Sociology, Centre for Research in Social Simulation, 2017.

Abstract

“Commons-Based Peer Production (CBPP) is a new model of socio-economic production in which groups of individuals cooperate with each other without a traditional hierarchical organisation to produce common and public goods, such as Wikipedia or GNU/Linux. There is a need to understand how these communities govern and organise themselves as they grow in size and complexity. Following an ethnographic approach, this thesis explores the emergence of and changes in the organisational structures and processes of Drupal: a large and global CBPP community which, over the past fifteen years, has coordinated the work of hundreds of thousands of participants to develop a technology which currently powers more than 2% of websites worldwide. Firstly, this thesis questions and studies the notion of contribution in CBPP communities, arguing that contribution should be understood as a set of meanings which are under constant negotiation between the participants according to their own internal logics of value. Following a constructivist approach, it shows the relevance played by less visible contribution activities such as the organisation of events. Secondly, this thesis explores the emergence and inner workings of the sociotechnical systems which surround contributions related to the development of projects and the organisation of events. Two intertwined organisational dynamics were identified: formalisation in the organisational processes and decentralisation in decision-making. Finally, this thesis brings together the empirical data from this exploration of socio-technical systems with previous literature on self-organisation and organisation studies, to offer an account of how the organisational changes resulted in the emergence of a polycentric model of governance, in which different forms of organisation varying in their degree of organicity co-exist and influence each other.”

Summary (excerpted from preface)

“This thesis presents a study of self-organisation in a collaborative community focused on the development of a Free/Libre Open Source Software, named Drupal, whose model responds to the latter: a Commons-Based Peer Production community. Drupal is a content management framework, a software to develop web applications, which currently powers more than 2% of websites worldwide. Since the source code, the computer instructions, was released under a license which allow its use, copy, study and modification by anyone in 2001, the Drupal project has attracted the attention of hundreds of thousands of participants. More than 1.3 million people are registered on Drupal.org, the main platform of collaboration, and communitarian events are held every week all around the World. Thus, as the main slogan of the Drupal project reflects — “come for the software, stay for the community”, this collaborative project cannot be understood without exploring its community, which is the main focus of this thesis.

In sum, over the course of the next eleven chapters, this thesis presents the story of how hundreds of thousands of participants in a large and global Commons-Based Peer Production community have organised themselves, in what started as a small and amateur project in 2001. This is with the aim of furthering our understanding of how, coping with diverse challenges, Commons-Based Peer Production communities govern and scale up their self-organisational processes.

* Chapter 1 provides an overview of the phenomenon of Free/Libre Open Source Software and connects it with that of Commons-Based Peer Production, allowing the theoretical pillars from previous studies on both phenomena to be drawn on.

* Chapter 2 provides an overview of the main case study, the Drupal community. Throughout the second chapter the Drupal community is framed as an extreme case study of Commons-Based Peer Production on the basis of its growth, therefore offering an opportunity to improve our understanding of how self-organisational processes emerge, evolve and scale up over time in Commons-Based Peer Production communities of this type.

* Chapter 3 provides an overview of Activity Theory and its employment as an analytical tool: a lens which supports the analysis of the changes experienced in complex organisational activities, such as those from Free/Libre Open Source Software communities as part of the wider phenomenon of Commons-Based Peer Production.

* Chapter 4, explores the fundamental methodological aspects considered for this study, which draws on an ethnographic approach. The decision for this approach is reasoned on the basis of the nature of the research questions tackled in the study. Firstly, on requiring an inductive approach, which entails the assumption that topics emerge from the process of data analysis rather than vice versa. Secondly, on the necessity of drawing on a methodological approach which acknowledges the need to understand these topics from within the community.

* Chapter 5 begins the presentation of the findings of this study. It presents the findings regarding the study of contribution in the Drupal community, a notion which is fundamental for the choice of the main unit of analysis, contribution activity, in Activity Theory. The results from this study enabled the identification and consideration, throughout the subsequent chapters, not only of activities which are “officially” understood as contributions, such as those listed in the main collaboration platform, but also of those which have remained less visible in Free/Libre Open Source software and Commons-Based Peer Production communities and the literature on them.

* Chapters 6 and 7 address the study of the development of projects, activities whose main actions and operations are mostly performed through an online medium;

* Chapters 8 and 9 present the main argument that binds this thesis together: the growth experienced by the Drupal community led to a formalisation of self-organisational processes in response to a general dynamic of decentralisation of decision-making in order for these processes to scale up. This research identified these two general organisational dynamics, formalisation and decentralisation of decision-making, affecting large and global Commons-Based Peer Production communities as they grow over time. Thus, throughout these chapters, the means by which these general dynamics of formalisation and decentralisation shaped the overall systems which emerged around these different contribution activities are explored. The exploration of the organisational processes of this case study does not only show the existence of these dynamics, but it provides an in-depth account of how these dynamics relate to each other, as well as how they shaped the overall resulting system of peer production, despite the main medium of the peer production activities studied being online/offline, or the significant differences with regard to their main focus of action — writing source code or organising events. For each pair of chapters this exploration starts with the most informal systems and progresses towards the most formal respectively: custom, contributed and core projects, in chapters 6 and 7; and local events, DrupalCamps and DrupalCons, in chapters 8 and 9. After carrying out this in-depth exploration of self-organisation, the overall identified changes experienced in the self-organisational processes of the Drupal community are brought together according to general theories of self-organising communities, organisational theory and empirical studies on Commons-Based Peer Production communities, in order to connect the exploration with macro organisational aspects in chapter 10.

* Chapter 10 argues that this study provides evidence of the emergence of polycentric governance, in which the participants of this community establish a constant process of negotiation to distribute authority and power over several centres of governance with effective coordination between them. In addition, this chapter argues that the exploration carried out throughout the previous chapters provides an in-depth account of the emergence of an organisational system for peer production in which different forms of organisation, varying in their degree of organicity, simultaneously co-exist and interact with each other.

* Finally, chapter 11 summarises the main contributions of this thesis and provides a set of implications for practitioners of Commons-Based Peer Production communities.”

The full thesis is available here.

Photo by Fernan Federici

The post Essay of the Day: Self-Organisation in Commons-Based Peer Production appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/essay-of-the-day-self-organisation-in-commons-based-peer-production/2017/12/19/feed 0 68909
Yochai Benkler on the Benefits of an Open Source Economic System https://blog.p2pfoundation.net/yochai-benkler-on-the-benefits-of-an-open-source-economic-system/2017/12/01 https://blog.p2pfoundation.net/yochai-benkler-on-the-benefits-of-an-open-source-economic-system/2017/12/01#respond Fri, 01 Dec 2017 09:00:00 +0000 https://blog.p2pfoundation.net/?p=68754 Cross-posted from Shareable. Bart Grugeon Plana: After the breakthrough of the internet, Yochai Benkler, a law professor at Harvard University, quickly understood that new online forms of collaboration such as Wikipedia or Linux responded to a completely new economic logic. Specializing in the digital culture of the networked society, Benkler worked on a coherent economic... Continue reading

The post Yochai Benkler on the Benefits of an Open Source Economic System appeared first on P2P Foundation.

]]>
Cross-posted from Shareable.

Bart Grugeon Plana: After the breakthrough of the internet, Yochai Benkler, a law professor at Harvard University, quickly understood that new online forms of collaboration such as Wikipedia or Linux responded to a completely new economic logic. Specializing in the digital culture of the networked society, Benkler worked on a coherent economic vision that guides us beyond the old opposition between state and markets.

According to Benkler, we may be at the beginning of a global cultural revolution that can bring about massive disruption. “Private property, patents and the free market are not the only ways to organize a society efficiently, as the neoliberal ideology wants us to believe,” Benkler says. “The commons offers us the most coherent alternative today to the dead end of the last 40 years of neoliberalism.”

Bart Grugeon Plana: In the political debate today, it seems that world leaders fall back on an old discussion about whether it is the free market with its invisible hand that organizes the economy best, or the state with its cumbersome administration. You urge us to step beyond this old paradigm.

Yochai Benkler, law professor at Harvard University: Both sides in this discussion start from an assumption that is generally accepted but fundamentally wrong, namely that people are rational beings who pursue their own interests. Our entire economic model is based on this outdated view on humanity that goes back to the ideas of Thomas Hobbes and Adam Smith, philosophers from the 17th and 18th centuries. My position is that we have to review our entire economic system from top to bottom and rewrite it according to new rules. Research of the past decades in social sciences, biology, anthropology, genetics, and psychology shows that people tend to collaborate much more than we have assumed for a long time. So it comes down to designing systems that bring out these human values.

Many existing social and economic systems — hierarchical company structures, but also many educational systems and legal systems — start from this very negative image of man. To motivate people, they use mechanisms of control, by incorporating incentives that punish or reward. However, people feel much more motivated when they live in a system based on compromise, with a clear communication culture and where people work towards shared objectives. In other words, organizations that know how to stimulate our feelings of generosity and cooperation, are much more efficient than organizations that assume that we are only driven by self-interest.

This can work within a company or an organization, but how can you apply that to the macro economy?

Over the past decade, the internet has seen new forms of creative production that haven’t been driven by a market or organized by the state. Open-source software such as Linux, the online encyclopedia Wikipedia, the Creative Commons licenses, various social media, and numerous online forms of cooperation have created a new culture of cooperation that ten years ago would have been considered impossible by most. They are not a marginal phenomenon, but they are the avant-garde of new social and economic tendencies. It is a new form of production that is not based on private property and patents, but on loose and voluntary cooperation between individuals who are connected worldwide. It is a form of the commons adapted to the 21st century — it is the digital commons.

What is so revolutionary about it?

Just take the example of the Creative Commons license: It is a license that allows knowledge and information to be shared under certain conditions without the author having to be paid for it. It is a very flexible system that considers knowledge as a commons, that others can use and build on. This is a fundamentally different approach than the philosophy behind private copyrights. It proves that collective management of knowledge and information is not only possible, but that it is also more efficient and leads to much more creativity than when it is “locked up” in private licenses.

In the discussion over whether the economy should be organized by the state or by markets, certainly after the fall of communism, there was a widespread belief that models based on collective organization necessarily led to inefficiency and tragedy, because everyone would just look after their own interests. This analysis drove the deregulation and privatization of the economy in the decades that followed, the consequences of which have been clear since 2008.

The new culture of global cooperation opens up a whole new window of possibilities. The commons offer us today a coherent alternative to neoliberal ideology, which has proved to be a dead end. After all, how far can privatization go? Trump and Brexit show where it leads.

Image by Bart Grugeon Plana

The commons is a model for collective management, which is mainly associated with natural resources. How can this be applied to the extremely complex modern economy?

The commons are centuries old, but as an intellectual tradition they were mainly substantiated and deepened by Elinor Ostrom, winner of the Nobel Prize in Economics. Over the past decades the commons have gained a new dimension through the open-source software movement and the broader culture of the digital commons. Ostrom demonstrated, on the basis of hundreds of studies, that citizens can come together to manage their infrastructures and resources, often in agreement with the government, in a way that is both ecologically and economically sustainable. Commons are capable of integrating the diversity, knowledge, and wealth of the local community into decision-making processes. They take into account the complexity of human motivations and commitments, while market logic reduces everything to a price and is insensitive to values, or to motivations not inspired by profit. Ostrom showed that the commons management model is superior, in terms of efficiency and sustainability, both to models that fall back on a strong government (read: socialism or communism) and to models that fall back on markets and their price mechanism.

Besides the digital commons already mentioned, an example of a commons in the modern economy is the management model of the Wi-Fi spectrum. Unlike the FM and AM radio frequencies, which require user licenses, everyone is free to use the Wi-Fi spectrum, respecting certain rules, and to place a router anywhere. This openness and flexibility is unusual in the telecommunications sector. It has made Wi-Fi an indispensable technology in the most advanced sectors of the economy, such as hospitals, logistics centers, and smart electricity grids.

In the academic, cultural, musical, and information worlds, knowledge and information are increasingly treated as a commons and shared freely. Musicians no longer derive their income from the copyright on music, but from concerts. Academic and non-fiction authors increasingly publish their work under Creative Commons licenses because they earn their living through teaching, consultancy, or research funding. A similar shift is also taking place in journalism.

An essential feature of the commons management model is that all members of the commons have access to the use of its goods or services, and that how that access is organized is agreed jointly. Market logic has a completely different starting point. Does this mean that markets and commons are not compatible?

Commons are the basis of every economic system. Without open access to knowledge and information, to roads, to public spaces in cities, to public services and to communication, a society cannot be organized. Markets, too, depend on open access to the commons in order to exist, even though they try again and again to privatize the commons. There is a fundamental misconception about the commons: it is the essential building block of every open society. But commons and markets can coexist.

The mainstream idea that a company should maximize its financial returns in order to maximize shareholder value is not a fact of nature; it is the product of 40 years of neoliberal politics and law intended to serve a very narrow part of society. Wikipedia shows that people have very diverse motivations for voluntarily contributing to a global common good that creates value for the entire world community. The examples of the digital commons can inspire similar projects in the real economy, as is already happening with various digital platforms in the emerging collaborative economy.

A society that puts the commons at the center, recognizing the importance of protecting them and contributing to them, allows different economic forms of organization to coexist: commons and market logic, private and public, profit-oriented and non-profit. In this mixture, the economy as a whole can be socially embedded, oriented towards the people who generate the economic activity and who can have very different motivations and commitments. The belief that the economy is driven by an abstract ideal of profit-oriented markets is no more than a construction of neoliberal ideology.

You seem very optimistic about the future of the commons?

I was more optimistic ten years ago than I am today. The commons are so central to the organization of a diverse economy that they must be expanded and protected in as many sectors of the economy as possible. There are many inspiring examples of self-organization according to the commons model, but it is clear that their growth will not happen automatically. Political choices will have to be made to restructure the economy beyond market logic. Regulation is necessary, with a resolute attitude towards economic concentration, and with a supportive legislative framework for commons, cooperatives and various cooperation models.

At the same time, more people need to make money with business models that build on a commons logic. The movement around “platform cooperativism” is a very interesting evolution. It develops new models of cooperatives that operate through digital platforms and that work together in global networks. They offer a counterweight to the business models of digital platforms such as Uber and Airbnb, which apply the market logic to the digital economy.

This brings us to the complex debate about the future of work.

In the context of increasing automation, there is a need for a broader discussion that treats "money" and "work" as separate from each other, because the motivations to work can be very diverse. A universal basic income is one way to build a more flexible system that accommodates these various motivations; a shorter working week is another option.

We are facing an enormous task, and we do not have a detailed manual to show us the way. However, the current economic crisis and the declining acceptance of austerity mean that circumstances are favorable for experimenting with new forms of organization.

When Wikipedia began to grow, it was said that it "only works in practice, because in theory it's a total mess." I believe, however, that today we have a theoretical framework that allows us to build a better life together without subjecting ourselves to the same framework that gave us oligarchic capitalism. The commons is the only genuine alternative today that allows us to build a truly participatory system of economic production. The commons can bring about a global cultural revolution.

This piece has been edited for length and clarity.

Photo by Ratchanee @ Gatoon

The post Yochai Benkler on the Benefits of an Open Source Economic System appeared first on P2P Foundation.
