Open-source licensing war: Commons Clause

A new open-source license addendum, Commons Clause, has lawyers, developers, businesses, and open-source supporters fighting with each other.

Written by Steven J. Vaughan-Nichols for Linux and Open Source; originally posted on ZDNet on August 28, 2018

Most people wouldn’t know an open-source license from their driver’s license. For those who work with open-source software, it’s a different story. Open-source license fights can be vicious, cost serious coin, and determine the fate of multi-million dollar companies. So, when Redis Labs added a new license clause, Commons Clause, on top of Redis, an open-source, BSD licensed, in-memory data structure store, all hell broke loose.

Why? First, you need to understand that while you may never have heard of Redis, it’s a big deal. It enables real-time applications such as advertising, gaming, financial services, and IoT to work at speed. That’s because it can deliver sub-millisecond response times to millions of requests per second.

But Redis Labs has been unsuccessful in monetizing Redis, or at least not as successful as it would like. Its executives were discovering, like those at the far better-known Docker, that having a great open-source technology did not mean you’d be making millions. Redis’ solution was to embrace Commons Clause.

This license forbids you from selling the software. It also states you may not host or offer consulting or support services as “a product or service whose value derives, entirely or substantially, from the functionality of the software”.

If that doesn’t sound like open-source software to you, you have lots of company.

Simon Phipps, president of the Open Source Initiative (OSI), snapped on Twitter: “Redis just went proprietary, which sucks. No, this is not just ‘a limitation concerning fair use,’ it is an abrogation of software freedom.”

In an email, Phipps added, “Adding a significant clause to an existing license that has been approved by OSI instantly renders it non-approved, and the text of the so-called ‘Commons Clause,’ which actually fences off the commons, is clearly intended to violate clause 1 of the Open Source Definition and probably also violates clauses 3, 5 and 6. As such adding this clause to a license would be a major abrogation of software freedom removing essential rights from any affected open-source community.”

Software programmer Drew DeVault made his stance clear from his opening words: “Commons Clause will destroy open source.” Commons Clause, he continued, “presents one of the greatest existential threats to open source I’ve ever seen. It preys on a vulnerability open-source maintainers all suffer from, and one I can strongly relate to. It sucks to not be able to make money from your open-source work. It really sucks when companies are using your work to make money for themselves. If a solution presents itself, it’s tempting to jump at it. But the Commons Clause doesn’t present a solution for supporting open-source software. It presents a framework for turning open-source software into proprietary software.”

Bradley M Kuhn, president of the Software Freedom Conservancy and author of the Affero General Public License, blogged, “This proprietary software license, which is not open source and does not respect the four freedoms of free software, seeks to hide a power imbalance ironically behind the guise ‘open source sustainability.’ Their argument, once you look past their assertion that the only way to save open source is to not do open source, is quite plain: If we can’t make money as quickly and as easily as we’d like with this software, then we have to make sure no one else can as well.”

Andrew ‘Andy’ Updegrove, a founding partner of Gesmer Updegrove, a top technology law firm, and an open-source legal expert, found it no surprise that many open-source supporters hate Commons Clause. He rejects the conspiracy theory “that the Commons Clause will be some sort of virus that will deprive innocent developers of the ability to make a living, and will persuade business owners to avoid buying or using code that has any commons clause in it.”

Updegrove believes this is because Heather Meeker, a partner at the O’Melveny law firm who drafted it, “is a respected attorney and long-term participant in open-source legal circles, so IMHO the conspiracy theory can be ignored. Note also that Kevin Wang [founder of FOSSA] and Heather have both offered the clause as text to initiate a discussion, and not something to be wholesale adopted as it stands.”

That didn’t stop Redis Labs, which is applying Commons Clause on top of the Apache license to cover five new Redis modules. Redis Labs is doing this, said its co-founder and CTO Yiftach Shoolman in an email, “for two reasons — to limit the monetization of these advanced capabilities by cloud service providers like AWS and to help enterprise developers whose companies do not work with AGPL licenses.”

On the Redis Labs site, the company now explains in more detail that cloud providers are taking advantage of open-source companies by repackaging their programs into competitive, proprietary-service offerings. These providers contribute very little — if anything — back to those open-source projects. Instead, they use their monopolistic nature to derive hundreds of millions of dollars in revenues from them.

Redis Labs contends that “most cloud providers offer Redis as a managed service over their infrastructure and enjoy huge income from software that was not developed by them. Redis Labs is leading and financing the development of open source Redis and deserves to enjoy the fruits of these efforts.” Shoolman insisted that “Redis is open source and will remain under a BSD license.”

Salvatore Sanfilippo, Redis’ creator, added that the change just “means that basically certain enterprise add-ons, instead of being completely closed source as they could be, will be available with a more permissive license,” Commons Clause with Apache.

Software Freedom Conservancy executive director Karen Sandler isn’t so sure. Sandler emailed that Commons Clause “highlights the fundamental problems connected to the wide adoption of non-copyleft licenses, but I think it doesn’t really solve the problem that it seeks to solve. What we really need is strong copyleft licenses where the copyrights are held diversely by individuals and functional charities to make sure that software remains free and that societally we have the rights we need to have confidence in our software in the long run.”

In an email, Wang defended Commons Clause as “mostly used to temporarily transition enterprise offering counterparts of open-source software projects to source-available”. Wang continued: “Open-source software projects are mainly funded by a proprietary offering/service counterparts. Anything to help this layer monetize is good — the fate of the OSS is directly funded by it.

“The world has changed a lot and the open-source software/cloud ecosystem has a lot too,” Wang added. “The Open Source Definition is an immensely [valuable] set of ideals, but maybe it’s outdated to the modern state of the world. … Licensing follows intent, and I certainly don’t think the clause inspires people to close their source. But sometimes people need to change their license.”

Be that as it may, Updegrove wrote Commons Clause is “simple in concept: basically, it gives a developer the right to make sure no one can make money out of her code — whether by selling, hosting, or supporting it — unless the Commons Clause code is a minor part of a larger software product”.

“In one way, that’s in the spirit of a copyleft license (i.e., a prohibition on commercial interests taking advantage of a programmer’s willingness to make her code available for free), but it also violates the ‘Four Freedoms’ of free and open-source software as well as the Open Source Definition by placing restrictions on reuse, among other issues.”

But, “adding the Commons Clause to an open-source license makes it no longer an open-source license,” Updegrove added. And, were the Commons Clause to catch on, “it could give rise to an unwelcome trend”.

“The wide proliferation of licenses in the early days of open source was unhelpful and a cause of ongoing confusion and complexity, since not all licenses were compatible with other licenses. That means that before any piece of open-source code can be added to a code base, it’s necessary to determine whether its license is compatible with the licenses of all other software in the same product. That’s a big and ongoing headache.”

That’s a big reason, Updegrove wrote, why “Bruce Perens and Eric S. Raymond created the Open Source Definition and the Open Source Initiative so that there would be a central reference point and authority to determine what was and was not an ‘Open Source License’. That definition and process has held now for 20 years — an eternity, in open-source history.”

Therefore, Updegrove sees Commons Clause as a step backward from a process point of view. Worse, “it would be a very disturbing development if the release of the Commons Clause inspired more people to come up with their own license ‘extensions’, especially if they are also not compliant with the Open Software Definition and the Four Freedoms.”

The result? Companies and programmers veering away from using any Commons Clause licensed software. That was not its creators’ intent, but it’s a realistic concern.

Updegrove adds, “Speaking as a lawyer, the fact that someone can still charge for a product that includes Commons Clause software so long as the value does not ‘derive[s], entirely or substantially, from the functionality of the software’ is certain to invite disputes. The most obvious is what does ‘substantially’ [mean]? There is no bright-line for guidance.”

Georg Greve, co-founder and president of Vereign, a blockchain-secured communication company, and founder of Free Software Foundation Europe, also worried: “Overall it seems purposefully vague & misleading, probably overreaching and terribly one-sided to establish Fear, Uncertainty, and Doubt for any professional use of software licensed under it while making it terribly easy to ‘accidentally’ incorporate such components.”

Still, Updegrove thinks Commons Clause may be “a useful addition to the licensing menu, but not one that will be appropriate for use in all situations. … Developers should be clear in advance what their goals are when they put their fingers to their keys. Commons Clause-licensed software is not likely to get the same amount of reuse as might otherwise be the case.”

Open Hardware Platforms in Business and Education

By Paweł Buchwald, PhD | WSB University

The recent development of cheap single-board programmable systems has significantly simplified the prototyping of electronic circuits. One of the first open electronic platforms was the Arduino. This hardware platform is compatible with open programming tools, was based on a simple design, and was created mainly for educational applications. Because it uses popular interfaces to communicate with peripheral devices, many projects in the fields of control, automation and the Internet of Things have been built on top of Arduino. The platform was established in 2003 and is still used successfully in modern projects. Another popular device for prototyping control systems is the Raspberry Pi, a mini-computer running the Raspbian OS, an operating system based on Debian, the popular Linux distribution. In addition to the standard interfaces known from traditional PCs, this computer has a 40-pin GPIO connector, which lets designers attach devices over the I2C, SPI or 1-Wire bus. Thanks to this connector, the designed system can be integrated with many peripheral sensors and automatic control modules. There are many other platforms on the market that provide interfaces to popular sensors and controls and allow programmers to write code in high-level programming languages. Due to their low cost, these devices are a popular alternative to the better-known and more expensive solutions. The most popular platforms of this type are:

  • Orange Pi – An open single-board (SoC) computer with its operating system preinstalled in flash memory. Built around an Allwinner H3 processor, it can run Android 4.4, Ubuntu, or Raspbian. A version of the device equipped with a SIM card slot allows data transmission over mobile networks.
  • NodeMCU – A WiFi-enabled board based on the ESP8266 chip. It has 10 GPIO ports (supporting PWM, I2C and 1-Wire) and 4 MB of flash memory. The system includes a built-in single-channel 10-bit ADC and a USB-UART converter. The NodeMCU firmware installed in the device’s memory lets you write programs in the Lua scripting language; developers can also use the popular C language.
  • Onion Omega 2 – A single-board platform for hobbyist use and one of the smallest Linux-capable IoT modules on the market. The device has a MediaTek MT7688 processor clocked at 580 MHz, 64 MB of RAM, and 16 MB of built-in flash memory. It offers a built-in WiFi module and 15 GPIO pins along with PWM, UART, I2C, and SPI interfaces, and is controlled via a built-in web interface. The manufacturer includes an account on its cloud platform in the price of the device; this platform can be used for integration with other control systems and for sharing data on the Internet.

These platforms are presented in Figure 1.

Figure 1. Platforms NodeMCU, Onion Omega 2 and Orange Pi.
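
To give a flavour of how these boards are programmed, here is a minimal sketch of the GPIO control mentioned above, written in Python with the RPi.GPIO library that ships with Raspbian. The pin number and timing are arbitrary choices for illustration.

```python
# Minimal sketch: blink an LED attached to a Raspberry Pi GPIO pin.
# Assumes the RPi.GPIO library shipped with Raspbian; the pin number
# and delay are arbitrary illustrative choices.
import time

import RPi.GPIO as GPIO

LED_PIN = 18                   # BCM channel number; any free pin works

GPIO.setmode(GPIO.BCM)         # address pins by Broadcom channel number
GPIO.setup(LED_PIN, GPIO.OUT)  # configure the pin as an output

try:
    while True:
        GPIO.output(LED_PIN, GPIO.HIGH)  # LED on
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)   # LED off
        time.sleep(0.5)
finally:
    GPIO.cleanup()             # release the pin on exit
```

The same pattern extends to the I2C, SPI and 1-Wire sensors mentioned above: configure the bus, then read or write in a loop.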

These platforms can be used not only to create prototypes, but also to build commercial data acquisition and control systems. An example of this type of solution is an agro-hydro-meteorological station built on open hardware platforms. These platforms were used to integrate the IoT system with a professional Vantage Pro meteorological station, and made it possible to extend its functionality with measurement of insolation, evaporation and evapotranspiration, detection of snow-cover thickness, and measurement of the water level. The system is also used to generate alerts about dangerous weather events. It was installed in Krakow, where it currently transmits data and generates information about dangerous meteorological conditions, and it is fully autonomous with respect to power. The physical installation of the system is shown in Figure 2. Pictures of the installed stations were made available by InfoMet Katowice.

Figure 2. Meteo station based on open hardware platforms.
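
The article does not say how the station transmits its measurements. As one plausible sketch, a station like this could publish readings over MQTT, a lightweight protocol widely used for IoT telemetry; the broker address, topic, and field names below are hypothetical.

```python
# Illustrative sketch: publish a sensor reading over MQTT.
# The broker host, topic, and field names are hypothetical placeholders;
# the Krakow station's actual transport protocol is not described here.
import json
import time

import paho.mqtt.publish as publish  # pip install paho-mqtt

reading = {
    "timestamp": time.time(),
    "water_level_cm": 142.5,  # placeholder value
    "snow_depth_cm": 0.0,     # placeholder value
}

publish.single(
    topic="station/krakow/meteo",   # hypothetical topic
    payload=json.dumps(reading),
    hostname="broker.example.org",  # hypothetical broker
    port=1883,                      # default MQTT port
    qos=1,                          # at-least-once delivery
)
```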

The relatively low price of the presented platforms also makes them suitable for educational projects, including in educational centers and schools that do not have large budgets. One such project is a remote-controlled robot platform based on the ESP8266 chip and a dedicated Motor Shield module. With these devices it is possible to build an educational robot that lets students explore the basic problems of control, network communication and programming of embedded systems. The robot was also used in educational workshops as part of the ODM project. Even though the workshop participants had no background in computer science, they were able to get the robot platform running, and each of them could familiarize themselves with methods of robot control, the problems of data transmission over the Internet, and the programming of control systems in high-level languages. The use of the robot in the ODM workshops is shown in Figure 3.

Figure 3. Educational Robot Platform.

Source: https://www.youtube.com/watch?v=cg8MelZsI2A&feature=youtu.be
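
The robot’s actual control interface is not described in the article. A common pattern for ESP8266-based robots is to expose simple HTTP endpoints over their WiFi access point, so a student-facing control script might look like the sketch below; the address and endpoint names are invented for illustration.

```python
# Hypothetical control script for a WiFi robot like the ESP8266 platform
# described above. Many such robots expose simple HTTP endpoints; the
# address and paths here are invented and are not the workshop robot's
# actual interface.
import time

import requests  # pip install requests

ROBOT = "http://192.168.4.1"  # typical ESP8266 access-point address

def drive(direction: str, duration: float) -> None:
    """Send a movement command, wait, then stop the motors."""
    requests.get(f"{ROBOT}/move", params={"dir": direction}, timeout=2)
    time.sleep(duration)
    requests.get(f"{ROBOT}/stop", timeout=2)

drive("forward", 1.0)  # drive straight for one second
drive("left", 0.5)     # turn left for half a second
```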

These examples show applications of popular single-board platforms in education and in commercial activities. Applications of open hardware solutions will continue to grow thanks to IoT systems. The dynamic development of independent projects such as DWeb or Ethereum will make it possible to create innovative data-processing solutions based on open hardware and software platforms.


What to do once you admit that decentralizing everything never seems to work

Decentralization is the new disruption—the thing everything worth its salt (and a huge ICO) is supposed to be doing. Meanwhile, Internet progenitors like Vint Cerf, Brewster Kahle, and Tim Berners-Lee are trying to re-decentralize the Web. They respond to the rise of surveillance-based platform monopolies by simply redoubling their efforts to develop new and better decentralizing technologies. They seem not to notice the pattern: decentralized technology alone does not guarantee decentralized outcomes. When centralization arises elsewhere in an apparently decentralized system, it comes as a surprise or simply goes ignored.

Here are some traces of the persistent pattern that I’m talking about:

  • The early decentralized technologies of the Internet and Web relied on key points of centralization, such as the Domain Name System (which Berners-Lee called the Internet’s “centralized Achilles’ heel by which it can all be brought down or controlled”) and the World Wide Web Consortium (which Berners-Lee has led for its entire history)
  • The apparently free, participatory open-source software communities have frequently depended on the charismatic and arbitrary authority of a “benevolent dictator for life,” from Linus Torvalds of Linux (who is not always so benevolent) to Guido van Rossum of Python
  • Network effects and other economies of scale have meant that most Internet traffic flows through a tiny number of enormous platforms — a phenomenon aided and exploited by a venture-capital financing regime that must be fed by a steady supply of unicorns
  • The venture capital that fuels the online economy operates in highly concentrated regions of the non-virtual world, through networks that exhibit little gender or ethnic diversity, among both investors and recipients
  • While crypto-networks offer some novel disintermediation, they have produced some striking new intermediaries, from the mining cartels that dominate Bitcoin and other networks to Vitalik Buterin’s sweeping charismatic authority over Ethereum governance

This pattern shows no signs of going away. But the shortcomings of the decentralizing ideal need not serve as an indictment of it. The Internet and the Web made something so centralized as Facebook possible, but they also gave rise to millions of other publishing platforms, large and small, which might not have existed otherwise. And even while the wealth and power in many crypto-networks appears to be remarkably concentrated, blockchain technology offers distinct, potentially liberating opportunities for reinventing money systems, organizations, governance, supply chains, and more. Part of what makes the allure of decentralization so compelling to so many people is that its promise is real.

Yet it turns out that decentralizing one part of a system can and will have other kinds of effects. If one’s faith in decentralization is anywhere short of fundamentalism, this need not be a bad thing. Even among those who talk the talk of decentralization, many of the best practitioners are already seeking balance — between unleashing powerful, feral decentralization and ensuring that the inevitable centralization is accountable and functional. They just don’t brag about the latter. In what remains, I will review some strategies of thought and practice for responsible decentralization.

Hat from a 2013 event sponsored by Zambia’s central government celebrating a decentralization process. Source: courtesy of Elizabeth Sperber, a political scientist at the University of Denver

First, be more specific

Political scientists talk about decentralization, too—as a design feature of government institutions. They’ve noticed a pattern similar to the one we find in tech. Soon after something gets decentralized, it seems to cause new forms of centralization not far away. Privatize once-public infrastructure on open markets, and soon dominant companies will grow enough to lobby their way into regulatory capture; delegate authority from a national capital to subsidiary regions, and they could have more trouble than ever keeping warlords, or multinational corporations, from consolidating power. In the context of such political systems, one scholar recommends a decentralizing remedy for the discourse of decentralization — a step, as he puts it, “beyond the centralization-decentralization dichotomy.” Rather than embracing decentralization as a cure-all, policymakers can seek context-sensitive, appropriate institutional reforms according to the problem at hand. For instance, he makes a case for centralizing taxation alongside more distributed decisions about expenditures. Some forms of infrastructure lend themselves well to local or private control, while others require more centralized institutions.

Here’s a start: Try to be really, really clear about what particular features of a system a given design seeks to decentralize.

No system is simply decentralized, full-stop. We shouldn’t expect any to be. Rather than referring to TCP/IP or Bitcoin as self-evidently decentralized protocols, we might indicate more carefully what about them is decentralized, as opposed to what is not. Blockchains, for instance, enable permissionless entry, data storage, and computing, but with a propensity to concentration with respect to interfaces, governance, and wealth. Decentralizing interventions cannot expect to subdue every centralizing influence from the outside world. Proponents should be forthright about the limits of their enterprise (as Vitalik Buterin has sometimes been). They can resist overstating what their particular sort of decentralization might achieve, while pointing to how other interventions might complement their efforts.

Another approach might be to regard decentralization as a process, never a static state of being — to stick to active verbs like “decentralize” rather than the perfect-tense “decentralized,” which suggests the process is over and done, or that it ever could be.

Guidelines such as these may tempt us into a pedantic policing of language, which can lead to more harm than good, especially for those attempting not just to analyze but to build. Part of the appeal of decentralization-talk is the word’s role as a “floating signifier” capable of bearing various related meanings. Such capacious terminology isn’t just rhetoric; it can have analytical value as well. Yet people making strong claims about decentralization should be expected to make clear what distinct activities it encompasses. One way or another, decentralization must submit to specificity, or the resulting whack-a-mole centralization will forever surprise us.

A panel whose participants, at the time, represented the vast majority of the Bitcoin network’s mining power. Original source unknown

Second, find checks and balances

People enter into networks with diverse access to resources and skills. Recentralization often occurs because of imbalances of power that operate outside the given network. For instance, the rise of Facebook had to do with Mark Zuckerberg’s ingenuity and the technology of the Web, but it also had to do with Harvard University and Silicon Valley investors. Wealth in the Bitcoin network can correlate with such factors as propensity to early adoption of technology, wealth in the external economy, and proximity to low-cost electricity for mining. To counteract such concentration, the modes of decentralization can themselves be diverse. This is what political institutions have sought to do for centuries.

Those developing blockchain networks have tended to rely on rational-choice, game-theoretic models to inform their designs, such as in the discourse that has come to be known as “crypto-economics.” But relying on such models alone has been demonstrably inadequate. Already, protocol designers seem to be rediscovering notions like the separation of powers from old, institutional liberal political theory. As it works to “truly achieve decentralization,” the Civil journalism network ingeniously balances market-based governance and enforcement mechanisms with a central, mission-oriented foundation populated by elite journalists — a kind of supreme court. Colony, an Ethereum-based project “for open organizations,” balances stake-weighted and reputation-weighted power among users, so that neither factor alone dictates a user’s fate in the system. The jargon is fairly new, but the principle is old. Stake and reputation, in a sense, resemble the logic of the House of Lords and the House of Commons in British government — a balance between those who have a lot to lose and those who gain popular support.

As among those experimenting with “platform cooperativism,” protocols can also adapt lessons from the long and diverse legacy of cooperative economics. For instance, blockchain governance might balance market-based one-token-one-vote mechanisms with cooperative-like one-person-one-vote mechanisms to counteract concentrations of wealth. The developers of RChain, a computation protocol, have organized themselves in a series of cooperatives, so that the oversight of key resources is accountable to independent, member-elected boards. Even while crypto-economists adopt market-based lessons from Hayek, they can learn from the democratic economics of “common-pool resources” theorized by Elinor Ostrom and others.
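
To make this balancing idea concrete, here is a deliberately simplified toy sketch (not Colony’s, RChain’s, or any real protocol’s mechanism) of how voting power might blend a stake-proportional share with an equal per-person share so that wealth alone cannot dominate.

```python
# Toy illustration: blend one-token-one-vote with one-person-one-vote.
# This is a simplified sketch, not any real protocol's governance rule.

def vote_weight(stake: float, total_stake: float,
                n_members: int, alpha: float = 0.5) -> float:
    """Blend a stake-proportional share with an equal per-person share.

    alpha = 1.0 is pure token voting; alpha = 0.0 is pure
    one-person-one-vote. Weights across all members sum to 1.
    """
    stake_share = stake / total_stake
    person_share = 1.0 / n_members
    return alpha * stake_share + (1 - alpha) * person_share

# In a 10-member community, a "whale" holding 90% of the tokens keeps
# only half the voting power rather than nine tenths of it:
print(vote_weight(90, 100, 10))  # 0.50
print(vote_weight(1, 100, 10))   # 0.055 for a 1% holder
```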

Decentralizing systems should be as heterogeneous as their users. Incorporating multiple forms of decentralization, and multiple forms of participation, can enable each to check and counteract creeping centralization.

Headquarters of the Internet Archive, home of the Decentralized Web conferences. Source: Wikimedia Commons

Third, make centralization accountable

More empowering strategies for decentralization, finally, may depend on not just noticing or squashing the emergence of centralized hierarchy, but embracing it. We should care less about whether something is centralized or decentralized than whether it is accountable. An accountable system is responsive to both the common good for participants and the needs of minorities; it sets consistent rules and can change them when they don’t meet users’ needs.

Antitrust policy is an example of centralization (through government bureaucracy) on behalf of decentralization (in private sector competition). When the government carrying out such a policy holds a democratic mandate, it can claim to be accountable, and aggressive antitrust enforcement frequently enjoys broad popularity. Such centralized government power, too, may be the only force capable of counteracting the centralized power of corporations that are less accountable to the people whose lives they affect. In ways like this, most effective forms of decentralization actually imply some form of balance between centralized and decentralized power.

While Internet discourses tend to emphasize their networks’ structural decentralization, well-centralized authorities have played critical roles in shaping those networks for the better. Internet progenitors like Vint Cerf and Tim Berners-Lee not only designed key protocols but also established multi-stakeholder organizations to govern them. Berners-Lee’s World Wide Web Consortium (W3C), for instance, has been a critical governance body for the Web’s technical standards, enabling similar user experience across servers and browsers. The W3C includes both enormously wealthy corporations and relatively low-budget advocacy organizations. Although its decisions have sometimes seemed to choose narrow business interests over the common good, these cases are noteworthy because they are more the exception than the rule. Brewster Kahle has modeled mission-grounded centralization in the design of the nonprofit Internet Archive, a piece of essential infrastructure, and has even attempted to create a cooperative credit union for the Internet. His centralizing achievements are at least as significant as his calls for decentralizing.

Blockchain protocols, similarly, have tended to spawn centralized organizations or companies to oversee their development, although in the name of decentralization their creators may regard such institutionalization as a merely temporary necessity. Crypto-enthusiasts might admit that such institutions can be a feature, not a bug, and design them accordingly. If they want to avoid a dictator for life, as in Linux, they could plan ahead for democracy, as in Debian. If they want to avoid excessive miner-power, they could develop a centralized node with the power to challenge such accretions.

The challenge that entrepreneurs undertake should be less a matter of How can I decentralize everything? than How can I make everything more accountable? Already, many people are doing this more than their decentralization rhetoric lets on; a startup’s critical stakeholders, from investors to developers, demand it. But more emphasis on the challenge of accountability, as opposed to just decentralization, could make the inevitable emergence of centralization less of a shock.

What’s so scary about trust?

In a February 2009 forum post introducing Bitcoin, Satoshi Nakamoto posited, “The root problem with conventional currency is all the trust that’s required to make it work.” This analysis, and the software accompanying it, has spurred a crusade for building “trustless” systems, in which institutional knowledge and authority can be supplanted with cryptographic software, pseudonymous markets, and game-theoretic incentives. It’s a crusade analogous to how global NGOs and financial giants advocated mechanisms to decentralize power in developing countries, so as to facilitate international investment and responsive government. Yet both crusades have produced new kinds of centralization, in some cases centralization less accountable than what came before.

For now, even the minimal electoral accountability over the despised Federal Reserve strikes me as preferable to whoever happens to be running the top Bitcoin miners.

Decentralization is not a one-way process. Decentralizing one aspect of a complex system can realign it toward complex outcomes. Tools meant to decentralize can introduce novel possibilities — even liberating ones. But they run the risk of enabling astonishingly unaccountable concentrations of power. Pursuing decentralization at the expense of all else is probably futile, and of questionable usefulness as well. The measure of a technology should be its capacity to engender more accountable forms of trust.

Learn more: ntnsndr.in/e4e

If you want to read more about the limits of decentralization, here’s a paper I’m working on about that. If you want to read about an important tradition of accountable, trust-based, cooperative business, here’s a book I just published about that.

Photo by CIFOR

The post What to do once you admit that decentralizing everything never seems to work appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/what-to-do-once-you-admit-that-decentralizing-everything-never-seems-to-work/2018/10/24/feed 0 73242
Are the Digital Commons condemned to become “Capital Commons”?

By Calimaq; original article in French translated by Maïa Dereva (with DeepL) and edited by Ann Marie Utratel


Last week, Katherine Maher, the executive director of the Wikimedia Foundation, published a rather surprising article on the Wired site entitled: “Facebook and Google must do more to support Wikipedia”. The starting point of her reasoning was to point out that Wikipedia content is increasingly being used by digital giants, such as Facebook or Google:

You may not realise how ubiquitous Wikipedia is in your everyday life, but its open, collaboratively-curated data is used across semantic, search and structured data platforms on the web. Voice assistants such as Siri, Alexa and Google Home source Wikipedia articles for general knowledge questions; Google’s knowledge panel features Wikipedia content for snippets and essential facts; Quora contributes to and utilises the Wikidata open data project to connect topics and improve user recommendations.

More recently, YouTube and Facebook have turned to Wikipedia for a new reason: to address their issues around fake news and conspiracy theories. YouTube said that they would begin linking to Wikipedia articles from conspiracy videos, in order to give users additional – often corrective – information about the topic of the video. And Facebook rolled out a feature using Wikipedia’s content to give users more information about the publication source of articles appearing in their feeds.

With Wikipedia being solicited more and more by these big players, Katherine Maher believes that they should contribute in return to help the project to guarantee its sustainability:

But this work isn’t free. If Wikipedia is being asked to help hold back the ugliest parts of the internet, from conspiracy theories to propaganda, then the commons needs sustained, long-term support – and that support should come from those with the biggest monetary stake in the health of our shared digital networks.

The companies which rely on the standards we develop, the libraries we maintain, and the knowledge we curate should invest back. And they should do so with significant, long-term commitments that are commensurate with the value we create. After all, it’s good business: the long-term stability of the commons means we’ll be around for continued use for many years to come.

As the non-profits that make the internet possible, we already know how to advocate for our values. We shouldn’t be afraid to stand up for our value.

An image that makes fun of a famous quote by Bill Gates, who had described the Linux project as “communist”. But today, it is Capital that produces or recovers digital Commons – starting with Linux – and maybe that shouldn’t make us laugh.

Digital commons: the problem of sustainability

There is something strange about the director of the Wikimedia Foundation saying this kind of thing. Wikipedia is in fact a project anchored in the philosophy of Free Software and placed under a license (CC-BY-SA) that allows commercial reuse, without discriminating between small and large players. The “SA”, for Share Alike, implies that derivative works made from Wikipedia content are licensed under the same license, but does not prohibit commercial reuse. For Wikidata data, things go even further since this project is licensed under CC0 and does not impose any conditions on reuse, not even mentioning the source.

So, from a strictly legal standpoint, players like Facebook or Google are entitled to draw from the content and data of Wikimedia projects and reuse them for their own purposes, without having to contribute financially in return. If they do, it can only be on a purely voluntary basis, and that is the only thing Katherine Maher can hope for with her platform: that these companies become patrons by donating money to the Wikimedia Foundation. Google has already done so in the past, with a donation of $2 million in 2010 and another $1 million last year. Facebook, Apple, Microsoft and Google have also put in place a policy whereby they pledge to pay the Wikimedia Foundation the same amount as their individual employees donate.

Should digital giants do more and significantly address the long-term sustainability of the Digital Commons that Wikipedia represents? This question refers to reciprocity for the Commons, which is both absolutely essential and very ambivalent. If we broaden the perspective to free software, it is clear that these Commons have become an essential infrastructure without which the Internet could no longer function today (90% of the world’s servers run on Linux, 25% of websites use WordPress, etc.) But many of these projects suffer from maintenance and financing problems, because their development depends on communities whose means are unrelated to the size of the resources they make available to the whole world. This is shown very well in the book, “What are our digital infrastructures based on? The invisible work of web makers”, by Nadia Eghbal:

Today, almost all commonly used software depends on open source code, created and maintained by communities of developers and other talents. This code can be taken up, modified and used by anyone, company or individual, to create their own software. Shared, this code thus constitutes the digital infrastructure of today’s society…whose foundations threaten, however, to yield under demand!

Indeed, in a world governed by technology, whether Fortune 500 companies, governments, large software companies or startups, we are increasing the burden on those who produce and maintain this shared infrastructure. However, as these communities are quite discreet, it has taken a long time for users to become aware of this.

Like physical infrastructure, however, digital infrastructure requires regular maintenance and servicing. Faced with unprecedented demand, if we do not support this infrastructure, the consequences will be many.

This situation corresponds to a form of tragedy of the Commons, but of a different nature from that which can strike material resources. Indeed, intangible resources, such as software or data, cannot by definition be over-exploited and they even increase in value as they are used more and more. But tragedy can strike the communities that participate in the development and maintenance of these digital commons. When the core of individual contributors shrinks and their strengths are exhausted, information resources lose quality and can eventually wither away.

The progression of the “Capital Commons”

Market players are well aware of this problem, and when their activity depends on a Digital Commons, they usually end up contributing to its maintenance in return. The best known example of this is Linux software, often correctly cited as one of the most beautiful achievements of FOSS. As the cornerstone of the digital environment, the Linux operating system was eventually integrated into the strategies of large companies such as IBM, Samsung, Intel, RedHat, Oracle and many others (including today Microsoft, Google, Amazon and Facebook). Originally developed as a community project based on contributions from volunteer developers, Linux has profoundly changed in nature over time. Today, more than 90% of the contributions to the software are made by professional developers, paid by companies. The Tragedy of the Commons “by exhaustion” that threatens many Open Source projects has therefore been averted with regard to Linux, but only by “re-internalizing” contributors in the form of employees (a movement that is symmetrically opposite to that of uberization).

Main contributors to Linux in 2017. Individual volunteer contributors (the “none” category) now represent only 7.7% of project participants…

However, this situation is sometimes denounced as a degeneration of contributing projects that, over time, would become “Commons of capital” or “pseudo-Commons of capital”. For example, as Christian Laval explained in a forum:

Large companies create communities of users or consumers to obtain opinions, suggestions and technical improvements. This is what we call the “pseudo-commons of capital”. Capital is capable of organizing forms of cooperation and sharing for its benefit. In a way, this is indirect and paradoxical proof of the fertility of the common, of its creative and productive capacity. It is a bit the same thing that allowed industrial take-off in the 19th century, when capitalism organised workers’ cooperation in factories and exploited it to its advantage.

If this criticism can quite legitimately be addressed to actors like Uber or AirBnB who divert and capture collaborative dynamics for their own interests, it is more difficult to formulate against a project like Linux. Because large companies that contribute to software development via their employees have not changed the license (GNU-GPL) under which the resource is placed, they can never claim exclusivity. This would call into question the shared usage rights allowing any actor, commercial or not, to use Linux. Thus, there is literally no appropriation of the Common or return to enclosure, even if the use of the software by these companies participates in the accumulation of Capital.

On the other hand, it is obvious that a project which depends more than 90% on the contributions of salaried developers working for large companies is no longer “self-governed” as understood in Commons theory. Admittedly, project governance always formally belongs to the community of developers relying on the Linux Foundation, but you can imagine that the weight of the corporations’ interests must be felt, if only through the ties of subordination weighing on salaried developers. This structural state of economic dependence on these firms does make Linux a “common capital”, although not completely captured and retaining a certain relative autonomy.

How to guarantee the independence of digital Commons?

For a project like Wikipedia, things would probably be different if firms like Google or Facebook answered the call launched by Katherine Maher. The Wikipedia community has strict rules in place regarding paid contributions, which means that you would probably never see 90% of the content produced by employees. Company contributions would likely take the form of cash payments to the Wikimedia Foundation. The economic dependence would be no less strong for that; until now, Wikipedia has ensured its independence basically by relying on individual donations to cover the costs associated with maintaining the project’s infrastructure. This economic dependence would no doubt quickly become a political dependence – something the Wikimedia Foundation has already been criticised for, given the number of personalities with direct or indirect links to Google on its board, to the point of generating strong tensions with the community. The Mozilla Foundation, behind the Firefox browser, has received similar criticism: its dependence on Google funding has attracted rather virulent reproach and doubts about some of its strategic choices.

In the end, this question of the digital Commons’ state of economic dependence is relatively widespread. There are, in reality, very few free projects having reached a significant scale that have not become more or less “Capital Commons”. This progressive satellite-isation is likely to be further exacerbated by the fact that free software communities have placed themselves in a fragile situation by coordinating with infrastructures that can easily be captured by Capital. This is precisely what just happened with Microsoft’s $7.5 billion acquisition of GitHub. Some may have welcomed the fact that this acquisition reflected a real evolution of Microsoft’s strategy towards Open Source, even that it could be a sign that “free software has won”, as we sometimes hear.

Microsoft was already the firm that devotes the most salaried jobs to Open Source software development (ahead of Facebook…)

But, we can seriously doubt it. Although free software has acquired an infrastructural dimension today – to the point that even a landmark player in proprietary software like Microsoft can no longer ignore it – the developer communities still lack the means of their independence, whether individually (developers employed by large companies are in the majority) or collectively (a lot of free software depends on centralized platforms like GitHub for development). Paradoxically, Microsoft has taken seriously Platform Cooperativism’s watchwords, which emphasize the importance of becoming the owner of the means of production in the digital environment in order to be able to create real alternatives. Over time, Microsoft has become one of the main users of GitHub for developing its own code; logically, it bought the platform to become its master. Meanwhile – and this is something of a grating irony – Trebor Scholz – one of the initiators, along with Nathan Schneider, of the Platform Cooperativism movement – has accepted one million dollars in funding from Google to develop his projects. This amounts to immediately making oneself dependent on one of the main actors of surveillance capitalism, seriously compromising any hope of building real alternatives.

One may wonder whether Microsoft has not understood the principles of Platform Cooperativism better than Trebor Scholz himself, its creator!

For now, Wikipedia’s infrastructure is solidly resilient, because the Wikimedia Foundation only manages the servers that host the collaborative encyclopedia’s contents. They have no title to them, because of the free license under which they are placed. GitHub could be bought because it was a classic commercial enterprise, whereas the Wikimedia Foundation would not be able to resell itself, even if players like Google or Apple made an offer. The fact remains that Katherine Maher’s appeal for Google or Facebook funding risks weakening Wikipedia more than anything else, and I find it difficult to see something positive for the Commons. In a way, I would even say that this kind of discourse contributes to the gradual dilution of the notion of Commons that we sometimes see today. We saw it recently with the “Tech For Good” summit organized in Paris by Emmanuel Macron, where actors like Facebook and Uber were invited to discuss their contribution “to the common good”. In the end, this approach is not so different from Katherine Maher’s, who asks that Facebook or Google participate in financing the Wikipedia project, while in no way being able to impose it on them. In both cases, what is very disturbing is that we are regressing to the era of industrial paternalism, as it was at the end of the 19th century, when the big capitalists launched “good works” on a purely voluntary basis to compensate for the human and social damage caused by an unbridled market economy through philanthropy.

Making it possible to impose reciprocity for the Commons on Capital

The Commons are doomed to become nothing more than “Commons of Capital” if they do not give themselves the means to reproduce autonomously, without depending on the calculated generosity of large companies, who will always find a way to instrumentalize them and void them of their capacity to constitute a real alternative. An association like Framasoft has clearly understood this: after its programme “Dégooglisons Internet”, aimed at creating tools to enable Internet users to break their dependence on GAFAMs, it has continued with the Contributopia campaign. This aims to raise public awareness of the need to create a contribution ecosystem that guarantees long-term sustainability for both individual contributors and collective projects. This is visible now, for example, in the participatory fundraising campaign organized to boost the development of PeerTube, software implementing a distributed architecture for video distribution that could eventually constitute a credible alternative to YouTube.

But with all due respect to Framasoft, it seems to me that the classic “libriste” (free culture activist) approach remains mired in serious contradictions, of which Katherine Maher’s article is also a manifestation. How can one launch a programme such as “Dégooglisons Internet” that thrashes the model of Surveillance Capitalism, and at the same time continue to defend licences that do not discriminate according to the nature of the actors who reuse resources developed by communities as common goods? There is a schizophrenia here, due to a certain form of blindness that has always marked the philosophy of the Libre movement in its apprehension of economic issues. This in turn explains Katherine Maher’s – partly understandable – uneasiness at seeing Wikipedia’s content and data reused by players like Facebook or Google, who are at the origin of the centralization and commodification of the Internet.

To escape these increasingly problematic contradictions, we must give ourselves the means to defend the digital Commons sphere on a firmer basis than free licenses allow today. This is what actors promoting “enhanced reciprocity licensing” are trying to achieve: such licenses would prohibit lucrative commercial entities from reusing common resources, or impose funding on them in return. We see this type of proposal in a project like CoopCycle (an alternative to Deliveroo or Uber Eats), which refuses to allow its software to be reused by commercial entities that do not respect the social values it stands for. The aim of this new approach, defended in particular by Michel Bauwens, is to protect an “Economy of the Commons” by enabling it to defend its economic independence and prevent it from gradually being colonised and recovered into “Commons of Capital”.


With a project like CHATONS, an actor like Framasoft is no longer so far from embracing such an approach: to develop its network of alternative hosts, a charter has been drawn up that includes conditions relating to the social purpose of the companies participating in the operation. It is a first step in the reconciliation between the Free Software movement and the Social and Solidarity Economy (SSE), also taking shape through a project like “Plateformes en Communs”, which aims to create a coalition of actors who recognize themselves in both Platform Cooperativism and the Commons. There has to be a way to make these reconciliations stronger and lead to a clarification of the contradictions still affecting Free Software.

Make no mistake: I am not saying that players like Facebook or Google should not pay to participate in the development of free projects. But unlike Katherine Maher, I think that this should not be done on a voluntary basis, because such donations will only reinforce the power of the large centralized platforms by hastening the transformation of the digital Commons into “Capital Commons”. If Google and Facebook are to pay, they must be obliged to do so, just as industrial capitalists came to be obliged to contribute to the financing of the social state through compulsory contributions. This model must be reinvented today, and we could imagine states – or better still the European Union – subjecting the major platforms to taxation in order to finance a social right to contribution, open to individuals. It would be a step towards the “society of contribution” Framasoft calls for, by giving ourselves the means to create one beyond surveillance capitalism, which otherwise knows full well how to submit the Commons to its own logic and neutralize their emancipatory potential.

Photo by Elf-8

City of Barcelona Kicks Out Microsoft in Favor of Linux and Open Source


Brief: Barcelona city administration has prepared the roadmap to migrate its existing system from Microsoft and proprietary software to Linux and Open Source software.

Great news from Barcelona. This article was originally posted at ItsFoss.com:

A Spanish newspaper, El País, has reported that the City of Barcelona is in the process of migrating its computer system to Open Source technologies.

According to the news report, the city plans to first replace all its user applications with alternative open source applications. This will go on until the only remaining proprietary software is Windows, which will finally be replaced with a Linux distribution.

Barcelona will go open source by Spring 2019

The City plans to invest 70% of its software budget in open source software in the coming year. The transition, according to Francesca Bria (Commissioner of Technology and Digital Innovation at the City Council), will be completed before the mandate of the present administration comes to an end in Spring 2019.

Migration aims to help local IT talent

To accomplish this, the City of Barcelona will start outsourcing IT projects to local small and medium-sized enterprises. It will also hire 65 new developers to build software for its specific needs.

One of the major projects envisaged is the development of a digital marketplace, an online platform that small businesses will use to take part in public tenders.

Ubuntu is the likely choice of Linux distribution

The Linux distro to be used may well be Ubuntu, as the City is already running a pilot project of 1,000 Ubuntu-based desktops. The news report also reveals that the Outlook mail client and Exchange Server will be replaced with Open-Xchange, while Firefox and LibreOffice will take the place of Internet Explorer and Microsoft Office.

Barcelona becomes the first municipality to join “Public Money, Public Code” campaign

With this move, Barcelona becomes the first municipality to join the European campaign "Public Money, Public Code".

It is an initiative of the Free Software Foundation Europe and follows an open letter advocating that publicly funded software should be free. The call has been supported by about 15,000 individuals and more than 100 organizations. You can add your support as well: just sign the petition and voice your opinion for open source.

Money is always a factor

According to Bria, the move from Windows to open source software promotes reuse, in the sense that programs developed for Barcelona could be deployed in other municipalities in Spain or elsewhere around the world. Obviously, the migration also aims to avoid spending large amounts of money on proprietary software.

What do you think?

This is a battle already won and a win for the open source community. It was much needed, especially after the city of Munich decided to go back to Microsoft.

What is your take on the City of Barcelona going open source? Do you foresee other European cities following suit? Share your opinion with us in the comment section.

Source: Open Source Observatory

The post City of Barcelona Kicks Out Microsoft in Favor of Linux and Open Source appeared first on P2P Foundation.

Yochai Benkler on the Benefits of an Open Source Economic System
https://blog.p2pfoundation.net/yochai-benkler-on-the-benefits-of-an-open-source-economic-system/2017/12/01
Fri, 01 Dec 2017 09:00:00 +0000

Cross-posted from Shareable.

Bart Grugeon Plana: After the breakthrough of the internet, Yochai Benkler, a law professor at Harvard University, quickly understood that new online forms of collaboration such as Wikipedia or Linux responded to a completely new economic logic. Specializing in the digital culture of the networked society, Benkler worked on a coherent economic vision that guides us beyond the old opposition between state and markets.

According to Benkler, we may be at the beginning of a global cultural revolution that can bring about massive disruption. “Private property, patents and the free market are not the only ways to organize a society efficiently, as the neoliberal ideology wants us to believe,” Benkler says. “The commons offers us the most coherent alternative today to the dead end of the last 40 years of neoliberalism.”

Bart Grugeon Plana: In the political debate today, it seems that world leaders fall back on an old discussion about whether it is the free market, with its invisible hand, that organizes the economy best, or the state, with its cumbersome administration. You urge us to step beyond this old paradigm.

Yochai Benkler, law professor at Harvard University: Both sides in this discussion start from an assumption that is generally accepted but fundamentally wrong, namely that people are rational beings who pursue their own interests. Our entire economic model is based on this outdated view of humanity, which goes back to the ideas of Thomas Hobbes and Adam Smith, philosophers of the 17th and 18th centuries. My position is that we have to review our entire economic system from top to bottom and rewrite it according to new rules. Research over the past decades in social sciences, biology, anthropology, genetics, and psychology shows that people tend to collaborate much more than we have long assumed. So it comes down to designing systems that bring out these human values.

Many existing social and economic systems (hierarchical company structures, but also many educational and legal systems) start from this very negative image of man. To motivate people, they use mechanisms of control, incorporating incentives that punish or reward. However, people feel much more motivated when they live in a system based on compromise, with a clear communication culture, where people work towards shared objectives. In other words, organizations that know how to stimulate our feelings of generosity and cooperation are much more efficient than organizations that assume we are driven only by self-interest.

This can work within a company or an organization, but how can you apply that to the macro economy?

Over the past decade, the internet has seen new forms of creative production that are neither driven by the market nor organized by the state. Open source software such as Linux, the online encyclopedia Wikipedia, the Creative Commons licenses, various social media, and numerous online forms of cooperation have created a new culture of collaboration that ten years ago most would have considered impossible. They are not a marginal phenomenon; they are the avant-garde of new social and economic tendencies. It is a new form of production based not on private property and patents, but on loose and voluntary cooperation between individuals who are connected worldwide. It is a form of the commons adapted to the 21st century: the digital commons.

What is so revolutionary about it?

Just take the example of the Creative Commons license: it allows knowledge and information to be shared under certain conditions without the author having to be paid for it. It is a very flexible system that treats knowledge as a commons that others can use and build on. This is a fundamentally different approach from the philosophy behind private copyright. It proves that collective management of knowledge and information is not only possible, but also more efficient, and that it leads to much more creativity than when knowledge is "locked up" in private licenses.

In the discussion about whether the economy should be organized by the state or by markets, there was a widespread belief, certainly after the fall of communism, that models based on collective organization necessarily led to inefficiency and tragedy, because everyone would just save their own skin. This analysis has been responsible for the deregulation and privatization of the economy since then, the consequences of which have been plain since 2008.

The new culture of global cooperation opens up a whole new window of possibilities. The commons offer us today a coherent alternative to the neoliberal ideology, which is proving to be a dead end. After all, how far can privatization go? Trump and Brexit show where it leads.

Image by Bart Grugeon Plana

The commons is a model for collective management, which is mainly associated with natural resources. How can this be applied to the extremely complex modern economy?

The commons are centuries old, but as an intellectual tradition they were mainly substantiated and deepened by Elinor Ostrom, winner of the Nobel Memorial Prize in Economics. Over the past decades the commons have gained a new dimension through the open source software movement and the whole culture of the digital commons. Ostrom demonstrated, on the basis of hundreds of studies, that citizens can come together to manage their infrastructures and resources, often in agreement with the government, in a way that is sustainable both ecologically and economically. Commons are capable of integrating the diversity, knowledge, and wealth of the local community into decision-making processes. They take into account the complexity of human motivations and commitments, while market logic reduces everything to a price and is insensitive to values, or to motivations not inspired by profit. Ostrom showed that the commons management model is superior, in terms of efficiency and sustainability, to models that fall back on a strong government (read: socialism or communism) or on markets and their price mechanism.

In addition to the previously mentioned digital commons, examples of commons in the modern economy include the management model of the Wi-Fi spectrum. Unlike the FM-AM radio frequencies that require user licenses, everyone is free to use the Wi-Fi spectrum, respecting certain rules, and to place a router anywhere. This openness and flexibility is unusual in the telecommunications sector. It has made Wi-Fi an indispensable technology in the most advanced sectors of the economy, such as hospitals, logistics centers, and smart electricity grids.

In the academic, cultural, musical, and information worlds, knowledge and information are increasingly treated as a commons and freely shared. Musicians no longer derive their income from music copyrights, but from concerts. Academic and non-fiction authors publish their works more often under Creative Commons licenses because they earn their living through teaching, consultancy, or research funds. A similar shift is taking place in journalism.

An essential feature of the commons management model is that all members of the “common” have access to the “use” of goods or services, and that it is jointly agreed how access to those goods and services is organized. Market logic has a completely different starting point. Does this mean that markets and commons are not compatible?

Commons are the basis of every economic system. Without open access to knowledge and information, to roads, to public spaces in the cities, to public services, and to communication, a society cannot be organized. Markets also depend on open access to the commons in order to exist, even though they try again and again to privatize the commons. There is a fundamental misconception about the commons: it is the essential building block of every open society. But commons and markets can coexist.

If it is mainstream today to think that a company should maximize its financial returns in order to maximize its shareholder value, that isn't a fact of nature; it's the product of 40 years of neoliberal politics and law intended to serve a very narrow part of society. Wikipedia shows that people have very diverse motivations to voluntarily contribute to a global common good that creates value for the entire world community. The examples of the digital commons can inspire similar projects in the real economy, as is happening with various digital platforms in the emerging collaborative economy.

A society that puts the commons at the center, recognizing the importance of protecting them and contributing to them, allows different economic forms of organization to coexist: commons and market logic, private and public, profit-oriented and non-profit. In this mixture, the economy as a whole can be oriented towards being socially embedded, centered on the people who generate economic activity and who can have very different motivations and commitments. The belief that the economy must be driven by an abstract ideal of profit-oriented markets is no more than a construction of neoliberal ideology.

You seem very optimistic about the future of the commons?

I was more optimistic ten years ago than I am today. The commons are so central to the organization of a diverse economy that they must be expanded and protected in as many sectors of the economy as possible. There are many inspiring examples of self-organization according to the commons model, but it is clear that their growth will not happen automatically. Political choices will have to be made to restructure the economy beyond market logic. Regulation is necessary, with a resolute attitude towards economic concentration, and with a supportive legislative framework for commons, cooperatives and various cooperation models.

At the same time, more people need to make money with business models that build on a commons logic. The movement around “platform cooperativism” is a very interesting evolution. It develops new models of cooperatives that operate through digital platforms and that work together in global networks. They offer a counterweight to the business models of digital platforms such as Uber and Airbnb, which apply the market logic to the digital economy.

This brings us to the complex debate about the future of work.

In the context of increasing automation, there is a need for a broader discussion that can treat "money" and "work" as separate from each other, because the motivations to "work" can be very diverse. A universal basic income is an opportunity to build a more flexible system that makes these various motivations possible, but a shorter working week is also an option.

We are facing an enormous task, and we do not have a detailed manual to show us the way. However, the current economic crisis and the declining acceptance of austerity mean that circumstances are favorable for experimenting with new forms of organization.

When Wikipedia began to grow, it was said that it "only works in practice, because in theory it's a total mess." I believe, however, that today we have a theoretical framework that allows us to build a better life together without subjecting ourselves to the same framework that gave us oligarchic capitalism. The commons is the only genuine alternative today that allows us to build a truly participatory economic production system. The commons can spark a global cultural revolution.

This piece has been edited for length and clarity.

Photo by Ratchanee @ Gatoon

The post Yochai Benkler on the Benefits of an Open Source Economic System appeared first on P2P Foundation.

Michel Bauwens on the pitfalls of start-up culture
https://blog.p2pfoundation.net/michel-bauwens-on-the-pitfalls-of-start-up-culture-2/2017/09/20
Wed, 20 Sep 2017 07:00:00 +0000

Guerrilla Translation’s transcript of the 2013 C-Realm Podcast Bauwens/Kleiner/Trialogue prefigures many of the directions the P2P Foundation has taken in later years. To honor its relevance we’re curating special excerpts from each of the three authors. In this second extract, Michel Bauwens talks about the disconnect between young idealistic developers and the business models many of them default to, unaware that there are better options.

Michel Bauwens

Michel Bauwens: I’d like to start with outlining the issue, the problem around the emergence of peer production within the current neoliberal capitalist form of society and economy that we have. We now have a technology which allows us to globally scale small group dynamics, and to create huge productive communities, self-organized around the collaborative production of knowledge, code, and design. But the key issue is that we are not able to live from that, right?

The situation is that we have created communities consisting of people who are sometimes paid, sometimes volunteers, and by using open licenses, we are actually creating commons: think of Linux, Wikipedia, Arduino, those kinds of things. But what is the problem? The problem is that I can only make a living by still working for capital. So, there is an accumulation of the commons on the one side; we are effectively producing a commons, but we don't have what Marx used to call social reproduction. We cannot create our own livelihood within that sphere. The solution that I propose is related to the work of Dmytri Kleiner, who proposed some years ago to create a peer production license. I'll give you my interpretation of it: you can only use our commons if you reciprocate to some degree. So, instead of having a totally open commons, which allows multinationals to use our commons and reinforce the system of capital, the idea is to keep the accumulation within the sphere of the commons. Imagine that you have a community of producers, and around that an entrepreneurial coalition of cooperative, ethical, social, and solidarity enterprises.

The idea is that you would have an immaterial commons of code and knowledge, but the material work, working for clients and making a livelihood, would be done through co-ops. The result would be a type of open cooperativism, a kind of synthesis or convergence between peer production and cooperative modes of production. That's the basic idea. I think a number of things are happening around that, like solidarity co-ops and other new forms of cooperativism.

The young people, the developers in open source or free software, the people in co-working centers, hacker spaces, maker spaces: when they think of making a living, they think startups. They have been very influenced by the neoliberal atmosphere that has been dominant in their generation. They have a kind of generic reaction, "oh, let's do a startup", and then they look for venture funds. But this is a very dangerous path to take. Typically, the venture capital will ask for a controlling stake; they have the right to close down your startup whenever they feel like it, when they feel that they're not going to make enough money. They forbid you to continue to work in the same sector after your company has failed, and you have a gag order, so you don't even have free speech to talk about your negative experience. This is a very common experience. Don't forget that with venture capital, only 1 out of 10 companies will actually make it, and those may be very rich, but it's a winner-take-all system.

There is a real lack of knowledge among the young generation that other forms of enterprise are possible. I think the other direction is also true. A lot of co-ops have been neo-liberalizing, as it were, becoming competitive enterprises that compete not only against other companies but also against other co-ops, and they don't share their knowledge. They don't have a commons of design or code; they privatize and patent their knowledge, just like private competitive enterprise. They're also not aware that there's a new way of becoming more competitive through increased cooperation in open knowledge commons. This is the human side of it, and we need to work on the knowledge and mutual experience of these two sectors. Both are growing at the same time; after the crisis of 2008, we've had an explosion of the sharing economy and the peer production economy on one side, but also a revitalization of the cooperative sector.

The post Michel Bauwens on the pitfalls of start-up culture appeared first on P2P Foundation.

You CAN make a lot of money with Open Source
https://blog.p2pfoundation.net/can-make-lot-money-open-source/2016/12/08
Thu, 08 Dec 2016 13:23:56 +0000

Actually, you can make even more money with Open Source than you could in an average well-paid Silicon Valley job.

That's no exaggeration or hypothetical. It's as real as it gets, with numbers, actual people, and small businesses to prove it.

In the past two decades, Open Source has become an immense ecosystem which empowers its participants and liberates them from their constraints in all respects: not only from proprietary, controlled platforms, but also, and especially, financially.

Let's first examine how big Open Source has become, and then look into how people are making money in a sharing economy.

A wide, wide world

Today, Open Source has many business models for making money and developing in a sustainable fashion. You are no longer obliged to wait for donations. One of the many Open Source business models will surely fit your application and your future vision for your business.

On top of that, the Open Source ecosystem is now huge. It has become so huge that it is not far off to say that it is practically the entirety of the Internet. From server infrastructure to the application space, Open Source has become the new norm of IT.

The OS of choice for servers today is Linux. Not only do tech giants like Google run their applications on Linux server farms; Linux is also the de facto OS in practically all datacenters and hosting companies that provide Internet space to end users. You will be hard pressed to find an IIS host today.

Linux has even gotten into devices. Leaving aside handheld devices, in which Linux-based OSes have become ubiquitous, Linux is used in many more places, ranging from smart TVs to home routers.

What's even more stunning, roughly 80% of websites and web applications on the Internet run on open source PHP, roughly 20% of all websites are on WordPress, and roughly 20% of ecommerce websites are built with one single plugin for WordPress, WooCommerce. WordPress itself is licensed under the GPL (version 2 or later), a strongly copyleft license.

Real people making real money in a huge ecosystem

Even without counting the ecommerce and business conducted on the WordPress platform, the WordPress ecosystem of themes, plugins, and services represents a market of over $1 billion by itself.

540M Active Plugins Makes WordPress a Billion Dollar Market

Together with Themes, it becomes a massive ecosystem:

Just How Big is WordPress Exactly?

And this is actual cold hard cash, not valuations or estimates, with no investors and no financial schemes. And the majority of those making this money are single programmers, working alone, solely on WordPress:

2014 in review – Pippin's Plugins

Pippin Williamson, a lone programmer who only recently took on a few team members, made over $700,000 in revenue in 2014. The majority of this money is profit, and it is cold hard cash. A year earlier he broke the $300,000 revenue mark alone. And he did that with only 50,000 active installations of his plugin, Easy Digital Downloads.

Small software companies that produce WordPress themes are making millions of dollars every year.

Leading Premium WordPress Theme Providers Compared

WPMU Dev, a major plugin development company, is on par.

How Real Businesses Are Making Very Real Money Using WordPress? and the Numbers to Prove It

As demonstrated above, Open Source is generating whoop-ass amounts of cash for its developers, and users are quite, quite happy. No programmer in any institution can imagine doing what Pippin did: single-handedly reaching $700,000 in revenue from tens of thousands of direct-user customers and running a thriving software business. Not at Google, not anywhere in Silicon Valley, not in academia. Yes, if you are very lucky, you may come up with a ground-breaking piece of software and then get some investors to pay you some good hard cash, but as you can tell from the trends in the current venture capital business, it will take around 10 years to get there, if you ever get there at all. You won't get there working for Apple, for sure. The catch here is that there are many like Pippin, even though not everyone makes $700,000 a year.

But how does this work? Which business model?

Taking WordPress as an example, it's mainly SaaS, with variations:

You can give away your software free, and charge for a premium version and its updates.

Many small WordPress businesses use this format, and it works pretty well. The free version is posted on WordPress.org, which ends up providing advertising and distribution for free. You get thousands of users, and a decent percentage of them convert into premium users because they want the specialized or professional features required for their particular activity. The free users create an ecosystem of support and also market the product through word of mouth.

The updates are subscriptions, generally charged yearly, so it's recurring revenue, not a one-time sale. The software constantly funds itself.
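
To make the freemium mechanics concrete, here is a minimal sketch of how a free plugin can gate premium behaviour behind a WordPress filter, so that a separately sold premium plugin unlocks it simply by being activated. All plugin and hook names (myplugin_*) are hypothetical, invented for illustration; this is not code from any real plugin:

```php
<?php
/*
 * Plugin Name: MyPlugin (free version) - hypothetical example
 * The free plugin ships working core features and exposes a filter
 * that a paid companion plugin can flip to unlock premium behaviour.
 */

// Premium features are off by default in the free version.
function myplugin_premium_enabled() {
    return apply_filters( 'myplugin_is_premium', false );
}

// Everyone gets the basic output; premium users get an extra section.
function myplugin_render_report() {
    echo '<p>Basic report (free).</p>';
    if ( myplugin_premium_enabled() ) {
        // Only reached when the paid companion plugin is active.
        echo '<p>Advanced analytics (premium).</p>';
    }
}
add_action( 'admin_notices', 'myplugin_render_report' );

// The paid companion plugin then needs only one line to unlock everything:
// add_filter( 'myplugin_is_premium', '__return_true' );
```

The design keeps one codebase: the paid product is a small unlock layer plus the yearly update subscription, rather than a fork of the free plugin.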

You can give your software free, and sell addons

Recently this has become the most popular model: the software is given away free, and many addons exist for specialized purposes. Users customize their installation as they need, getting minimum cost and maximum specialized functionality for their purpose. The addon revenues become significant, because dozens of addons together can surpass the price at which you could sell a premium version. And it's less bloated as well. You can serve paid and free addons at the same time.

As with the premium-version method, the addons are also sold on a subscription basis, with users paying yearly for updates and new features. It's recurring revenue. This is the method Pippin's Plugins used with Easy Digital Downloads.
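
The addon model relies on the same extension mechanism WordPress itself is built on: the core plugin fires hooks at useful points, and each paid addon is just another plugin listening on those hooks. A minimal sketch under hypothetical names (store_*); it is not actual Easy Digital Downloads code:

```php
<?php
// Core (free) plugin: finish a purchase, then let addons react.
function store_complete_purchase( $payment_id ) {
    // ... core logic: record the payment, send the receipt, etc. ...

    // Extension point: any number of addons can attach behaviour here
    // without the core plugin knowing anything about them.
    do_action( 'store_purchase_complete', $payment_id );
}

// A separately sold addon plugin, for example one that pushes each
// sale to an external mailing-list service:
function store_addon_sync_mailing_list( $payment_id ) {
    // ... addon-specific logic lives entirely in the addon ...
}
add_action( 'store_purchase_complete', 'store_addon_sync_mailing_list' );
```

Each addon can then carry its own yearly update subscription, which is how dozens of small addons end up out-earning a single premium version.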

You can give a free plugin which provides a specific SaaS

This is how Automattic's own Akismet plugin works: the free plugin enables a SaaS service. Spam control, accounting, any kind of API, social login: whatever you can imagine.

Naturally it is charged as a subscription, making the revenue recurring.
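
In the SaaS-plugin pattern, the free plugin is only a thin client: the real work happens on the vendor's servers, and the subscription buys the API key. A hedged sketch; the endpoint URL and option name below are invented for illustration and are not Akismet's actual API:

```php
<?php
// Hypothetical SaaS-backed spam check. The heavy lifting happens on
// the vendor's servers; the API key is what the subscription pays for.
function myservice_check_comment( $comment_text ) {
    $response = wp_remote_post( 'https://api.example.com/v1/check', array(
        'body' => array(
            'key'     => get_option( 'myservice_api_key' ), // entered by the user
            'comment' => $comment_text,
        ),
    ) );

    if ( is_wp_error( $response ) ) {
        // Fail open: don't block comments just because the API is unreachable.
        return 'unknown';
    }
    return wp_remote_retrieve_body( $response ); // e.g. "spam" or "ham"
}
```

Because the value lives server-side, the plugin itself can stay fully GPL while the revenue stays recurring.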

You can provide support services in addition to the above

All of the items above will generate support needs. Developers generally provide both community support through forums and premium priority support. Support revenue can become considerable: Pippin Williamson's Easy Digital Downloads, for example, sells support subscriptions for $299/year. That is a business-oriented plugin, hence the complexity and the high price. In the WordPress ecosystem generally, support subscriptions range from one to three months up to a year, at an average price of around $45. You can offer a $45/month or a $45/year support subscription, depending on the complexity of your plugin.

Premium services, development

These are no joke either. Even with free plugins, a vast range of custom development requests materialize, and these can easily supply a small software house with projects for a long time. Of course, for this option to be viable, your plugin needs to be either widely used or very niche, and it needs a sufficiently complex application. But it happens.

Automattic, the company behind WordPress.com, provides managed enterprise-level WordPress hosting for major names like CNN, Reuters, Forbes, and the New York Times.

Notable WordPress Users

Moreover, WordPress hosting is a specialized hosting area, with average monthly fees of around $20 to $45, much higher than average web hosting industry rates. Many plugin developers offer a hosted version of their plugin as well.

Donations and Sponsors

This works if you are a big project like Linux. For medium-sized projects it can have some impact, but at small scale it is mostly nonexistent.

Anything else that you can imagine

There are many more Open Source business models than the ones examined here.

The spectrum and amount of activity in a large open source ecosystem are massive, so any legal way to make money from anything you can imagine is legitimate. What is not known or practiced today is discovered as a method tomorrow, just as the free-plus-paid-addons model was virtually unknown until a few pioneers applied it to great success. Tomorrow new models will be discovered and new methods applied. A lively, thriving ecosystem develops, enlarges, and maintains itself. While enlarging, it also creates its own sub-spaces and sub-specializations, as happened with WordPress security, hosting, theme development, plugin development, site administration, and the like.

Open Source is an ever-developing world which constantly creates new opportunities as people who participate create new things.

As demonstrated, there are multiple ways to make a lot of money with Open Source today, and they work pretty well. As the ecosystem grew over the past decade, the number of jobs and customers did not decline; they increased. More developers enlarge the ecosystem, which makes it easier for more users to enter it and do whatever they want in it. Many Open Source applications spawned an expertise area in themselves, and moreover spawned expertise areas inside themselves: WordPress theme development, plugin development, hosting, and administration are all specialties of their own, for example. It's the nature of software: as it grows and becomes more complex, it creates worlds inside itself.

Of course, not every Open Source ecosystem is as large as Linux or WordPress. However, innumerable projects exist with sizable ecosystems that create considerable revenue for their developers, ranging from shopping cart applications to CMSes. These developers make as much money as they could working for any corporate behemoth as a wage slave. And their jobs are not on the line at any given moment, unlike a corporate developer who could be laid off at the whim of any exec or any economic downturn. Throughout the economic crisis, the Open Source ecosystems stayed mostly untouched: millions of websites offering an immense array of services still needed their software, and needed it working in good condition.

So, you can make money with Open Source. And good money, at that.

The post You CAN make a lot of money with Open Source appeared first on P2P Foundation.

Three non-technological ways in which blockchains may still “fail”
https://blog.p2pfoundation.net/three-non-technological-ways-blockchains-may-still-fail/2016/11/10
Thu, 10 Nov 2016 10:00:51 +0000

‘Fail’, because failing is obviously relative. By now, there is no real doubt that blockchains deliver on their technological promise: tamper-proof distributed permissionless ledgers. But they may very well fail to deliver on their promise as a new shiny class of peer-to-peer technology disintermediating all those pesky central authorities into oblivion.

1. Poor usability for non-experts

Several generations of peer-to-peer technologies have promised a lot and delivered quite a lot, but still left a lingering taste of underachievement. While GNU/Linux, an operating system crucially dependent on a p2p development model, is clearly one of the resounding successes of open source, it still did not fulfill its promise in one crucial area (which in its early days was seen as one of the most important): desktop computers. Linux powers anything from toasters to supercomputers, but it hasn't liberated the masses from Windows or Mac OS. In most smartphones, Linux is in the shackles of Android.

There are many reasons why GNU/Linux hasn't taken over, vendor lock-in being one of the major ones. But there is another issue that may be relevant to blockchains. When hackers write software for themselves, scratching their own itch, it is ready when it delivers what is needed. And this point of being ready for use is very different for a hacker and for a regular user. For too long, the installation and use of a Linux distribution was too hard for ordinary users. Even if Ubuntu and similar systems have largely solved that bottleneck now, the lesson stands: superior technology, if polished only to the point where it is good enough for hackers and early adopters, will not escape that ghetto. Let's be honest: just the visual look of a Bitcoin address like "13ktXxaJTPvBPfSyS7XALTP1i7nAeR2oZ9" is going to keep a big chunk of potential users away. At the moment, the user experience of even the most advanced blockchain apps is abysmal.

2. Domestication

The second danger is domestication, or perhaps better, "commoditization". As Robert Herian writes in Critical Legal Thinking:

“Disruption, so-called and preached by many of the major global banks, to the extent that IBM are now claiming that more than half of those banks will be using the technology in the next three years, is anything but disruption because it leaves unchanged the conditions (norms and expectations) in which it occurs, namely those in which global financial capital has exclusive dominion over the social.”

It is clear that the way the banks use blockchains to make their databases and other back-office operations more efficient does very little for a peer-to-peer future. Furthermore, as Herian continues to argue:

Beyond the public and transparent blockchain, and thus any hope of preserving a common space if not exactly or politically-speaking a “commons”, we see a potent indication of the victories of normative liberal and, to a greater extent, global financial capitalism over the blockchain narrative. An ideological victory which is in no small part manifesting itself through the proliferation of permissioned enclosed ledgers which are altering the dynamic of blockchain development […]

Most of the resources, in terms of money, are certainly going into permissioned and private blockchain development, and that will surely lend its flavor to what blockchains mean in the public mind. Moreover, as Herian indicates, this trend is worryingly reminiscent of the way in which other technological developments have encroached on digital commons. However, is it so bad that banks and other institutions want to use permissioned blockchains? We are still allowed to use permissionless blockchains and build on them, right?

3. Marginalisation

Domestication becomes a real problem when combined with another non-technological threat: marginalisation. Again, let's look at recent history. Torrent technology is a superior way of distributing digital content. However, since its first and most prominent uses were related to illegal file-sharing, legislation and public PR campaigns have pushed the technology to the fringe (can you believe that The Pirate Bay is still the most popular torrent tracking site?). Torrents are, of course, used for legal purposes too, in many forms of content distribution, but again the full promise of the technology has been curtailed by pushing it into a socio-cultural margin.

All three threats (marginalisation, domestication, and a ghettoised user experience) loom large over blockchains. Moreover, the three collude in a vicious circle, reinforcing each other. There is no silver bullet against any of them. A lot of education, for both regulators and the general public, is needed to counteract marginalisation. Against ghettoisation, the most urgent need is real-world use cases that are not limited to currency speculation or to transactions with high counterparty risk. The more diverse the community involved, the greater the possibility of avoiding marginalisation and pushing for overall usability. The free software and open source movements, for instance, have a history of initiatives and procedures for increasing the diversity of their communities and lowering barriers to entry. These can be reused, while at the same time looking for new ways, such as ethical design, of broadening the horizons of p2p technology development.


Originally posted at Medium.

The post Three non-technological ways in which blockchains may still “fail” appeared first on P2P Foundation.

Ian Murdock In His Own Words: What Made Debian Such A Community Project
https://blog.p2pfoundation.net/ian-murdock-words-made-debian-community-project/2016/06/23
Thu, 23 Jun 2016 08:00:03 +0000

Source: Article by Gabriella Coleman for Techdirt

As you may have heard, there was some tragic news a few weeks back, when the founder of Debian Linux, Ian Murdock, passed away under somewhat suspicious circumstances. Without more details, we didn’t have much to report on concerning his passing, but Gabriella Coleman put together this wonderful look at how Murdock shaped the Debian community, and why it became such a strong and lasting group and product.

Ian Murdock in his Own Words: “The package system was not designed to manage software. It was designed to facilitate collaboration” Ian Murdock (1973-2015).

Peering in from the outside, the Debian operating system — founded in 1993 by Ian Murdock, then a twenty-year-old college student — might appear to have been created with hardcore, technologically-capable power users in mind. After all, it is one of the most respected distributions of Linux: as of this writing, the current Debian stable distribution, Jessie, has 56,865 individual open source projects packaged (in native Debian parlance software is referred to as packages), and Debian itself has functioned as the basis for over 350 derivative distributions. Debian developers are so dedicated to the pursuit of technical excellence that the project is simultaneously revered and criticized for its infrequent release cycle — the project only releases a new version roughly every two years or so, when its Release Team deems it fit for public use. As its developers are fond of saying, “it will be released when it’s ready.”

But if you take a closer look, what is even more striking about Debian is that its vibrant community of developers are as committed to an array of ethical and legal principles as they are to technical excellence. These principles are enshrined in a bevy of documents — a manifesto, a constitution, a social contract, and a set of legal principles — which guide what can (and cannot) be done in the project. Its Social Contract, for instance, stipulates a set of crystal clear promises to the broader free software public, including a commitment to their users and transparency.

In 2001, I began anthropological fieldwork on free software in pursuit of my Ph.D. Debian’s institutional model of software development and rich ethical density attracted me to it immediately. The ethical life of Debian is not only inscribed in its discursive charters, but manifests also in the lively spirit of deliberation and debate found in its mailing lists. Ian Murdock, who passed away tragically last week, had already left the endeavor when my research began, but his influence was clear. He had carefully nursed the project from inception to maturity during its first three years. As my research wrapped up in 2004, I was fortunate enough to meet Ian at that year’s Debconf. Held annually, that year’s conference was hosted in Porto Alegre, Brazil, and it was the first year he had ever attended. Given his fortuitous presence, I took the opportunity to organize a roundtable. Alongside a couple of long-time Debian developers, Ian reflected on the project’s early history and significance.

By this time, many developers had already spoken to me in great (and fond) detail about Ian’s early contributions to Debian: they were essential, many insisted, in creating the fertile soil that allowed the project to grow its deepest roots and sprout into the stalwart community that it is today. In the fast-paced world of the Internet, where a corporate giant like AOL can spectacularly rise and fall in a decade, Debian is strikingly unique for its staying power: it has thrived for a remarkable twenty-three years (and though I am not fond of predictions, I expect it will be around throughout the next twenty as well).

It was well-known that Ian established the project’s moral compass, and also provided an early vision and guidance that underwrote many of the processes responsible for Debian’s longevity. But witnessing Ian, and other early contributors, such as Bdale Garbee, articulate and reflect on that early period was a lot more potent and powerful than hearing it second hand. In honor of his life and legacy, I am publishing the interview here (it has been slightly edited for readability). Below, I want to make two points about Ian’s contributions and do so by highlighting a selection of his most insightful remarks drawn from the roundtable discussion and his blog — comments that demonstrate how he helped sculpt Debian into the dynamic project it is today. …

Continue reading the full article on Techdirt

Photo by Ilya Schurov

The post Ian Murdock In His Own Words: What Made Debian Such A Community Project appeared first on P2P Foundation.
