knowledge commons – P2P Foundation (blog.p2pfoundation.net)

Pooling Knowledge: Private Medicine vs. Public Health?
Wed, 15 Apr 2020

The Coronavirus and the Need for Systems Change, Pt.1

The Coronavirus pandemic raises many questions about public health, global diseases and the way we produce and distribute cures and treatments. Who pays for the corona vaccine and how? How is that innovation organised? Who profits?

Commons Network has been an advocate in this domain (‘access to medicines’) for years. In the next few months, we will publish a series of articles about the problems with the current system and the ideas and visions that exist to change this. Today, we discuss the proposal for a Covid-19 Knowledge Pool.

COVID-19 is a global health crisis that demands an immediate global response. But this crisis also lays bare many other crises in our societies. In many Western countries, the response to the virus has shown the vulnerabilities in our public health systems and other essential sectors of society. One major issue that the coronavirus exposes is the dire state of our biomedical system and the role that pharmaceutical companies play in that system.

  • In The Netherlands, for instance, hospitals didn’t have enough test kits because Roche, the world’s largest biotech company, initially refused to hand over the recipe that is needed to perform these tests.
  • In the United States, Trump’s ‘corona-minister’ Alex Azar released a statement saying that the government could not guarantee that a potential cure for Covid-19 would be affordable, because the innovation that is needed for that cure would only be spurred by high profits.
  • Vaccine development was set back by an estimated two to three years, because in most countries pharmaceutical companies had sold off their vaccine research facilities. And the companies that still had the capability to do the research had effectively scaled down their coronavirus research because there was no money to be made.
  • Scientists were close to a coronavirus vaccine years ago, and then the money dried up.
  • The vaccine market was even called ‘an oligopoly’ by Wall Street analysts at AB Bernstein. In fact, after countries abandoned infectious disease research, most companies also moved away from investing in this field, according to DNDi director Bernard Pecoul.
  • In France, it was debated why a coronavirus testing kit should cost 135 euros even though it costs only 10 euros to produce. The limited availability of tests was cited as a major reason why many European countries did not test widely in the fight against the pandemic, leading many people to ask whether this had economic reasons as well.

More and more people have now come to realise that the global race to find a cure for Covid-19 and a vaccine is slowed down considerably by the fact that the system we have now runs on market incentives and patent monopolies. Instead of shielding essential knowledge, companies could work together, share research results and new insights.

Moving away from a deficient system

The pharmaceutical industry is driven by profit and guided by shareholders. The research and innovation that is needed to come up with cures and treatments is monopolised. A system of patents and licenses is fine-tuned to produce the maximum wealth for a few multi-billion euro corporations. This is how we have organised the world of medicines today. Our system is not driven by public health needs but by profit and the only logic that counts is that of capitalism.

Our system is not driven by public health needs but by profit and the only logic that counts is that of capitalism

This model is based on the belief that the flow of biomedical knowledge should be privatized and protected through intellectual property rights in order to stimulate innovation. This monopoly model gives pharmaceutical companies the freedom to charge as much as they can get away with. It also stifles innovation where we most need it, as in the area of infectious diseases, because there is no money to be made. And finally, this system makes us, the people, pay three times: once to fund the universities and research facilities that create much of the knowledge needed for pharmaceutical innovation, once to pay these companies to produce and distribute medicines, and once through our governments to fund our health care systems.

It’s hard to estimate how many medicines are never invented, how much talent is wasted and how many people have to suffer because of what is not being researched and developed. This system limits the ability to collaborate, share knowledge and build on each other’s work. The public good of scientific medical knowledge and health-related technologies has been transformed into a highly protected, privatized commodity.

The COVID-19 crisis marks a critical moment for generating the change we need. But how do we go from this neoliberal capitalist logic to something else, towards a system that is driven by the needs of the public and the health of the people?

Knowledge commons

The proposal to build a global knowledge pool for rights on data, knowledge and technologies, presented by Costa Rica, is a great example of a step in the right direction, towards transformational change. On March 23rd, the government of Costa Rica sent a letter to the World Health Organization, calling for a Global Covid-19 Knowledge Pool [1]. In his letter to the WHO, the president of Costa Rica called for a global program to “pool rights to technologies that are useful for the detection, prevention, control and treatment of the COVID-19 pandemic.” The idea now enjoys the support of the WHO, as well as of the UK parliament, the Dutch government and civil society, which have announced their support for a COVID-19 pool as well.

Why do we need a knowledge pool and why is it transformational?

As mentioned above, under our current system the privatization of knowledge limits the ability to collaborate, share knowledge and build on each other’s work. This really is artificial because knowledge is by nature abundant and shareable. Hence the current handling of medical technologies not only limits access to the ensuing treatments, it also limits innovation.

The Covid-19 Pool would pool relevant knowledge and data to combat Covid-19, creating a global knowledge commons [2]. It is a proposal to create a pool of rights to tests, medicines and vaccines, with free access or licensing on reasonable and affordable terms for all countries. This would allow for a collaborative endeavor and could accelerate innovation. It would be global and open, and would offer non-discriminatory licenses to all relevant technologies and rights. As such, the pool would offer both innovation and access.

Inputs could come from governments, as well as from universities, private companies and charities. This could be done on a voluntary basis, but not only: public institutions around the world are investing massively in Covid-19 technologies, and all results could be automatically shared with the pool by making this a condition attached to public financing.

So, placing knowledge in a commons does not just mean sharing data and knowledge without regard for their social use, access and preservation. It means introducing a set of democratic rules and limits to assure equitable and sustainable sharing for health-related resources. As such it allows for equitable access, collaborative innovation and democratic governance of knowledge. At the same time knowledge commons could facilitate open global research and local production adapted to local context.

Placing knowledge in a commons does not just mean sharing data and knowledge without regard for their social use, access and preservation. It means introducing a set of democratic rules and limits to assure equitable and sustainable sharing

If we consider the COVID-19 pool a holistic initiative that treats knowledge as a commons, not only to accelerate innovation but also to recognize this knowledge as a public good for humanity that should be managed to ensure affordable access for all, it could be transformational. In contrast to the existing Medicines Patent Pool, this pool would be global and would focus not only on providing access to existing technologies, but also on innovation: developing diagnostics, medicines and vaccines.

Transformational change

Instead of proposing tweaks, it is now time to challenge the idea of handling medicines principally as a commodity or product, and to propose structural changes that approach health as a common good. This means grounding the reframing of biomedical knowledge production in our collective responsibility for, and governance of, health, instead of leaving it entirely to markets and monopoly-based business models.

For this we should move to an approach based on knowledge sharing, cooperation, stewardship, participation and social equity – in practice, this means shifting to a public interest biomedical system based on knowledge commons and open source research, open access, alternative incentives and a greater role for the public sector. Knowledge pools are a crucial piece of the puzzle.

The current COVID-19 pandemic demonstrates that it is possible to make transformational changes overnight when acting in an emergency. Let us use this crisis to acknowledge the failures of today’s biomedical research model and usher in the systemic change needed. The world after Corona will require the consideration of alternative paradigms. It is, as Costa Rica, Tedros and now the Netherlands have rightfully affirmed, time for the knowledge commons to flourish.

For more background on commons thinking in the field of biomedical R&D and on possible alternatives to ensure access to medicines for all, read our policy paper ‘From Lab to Commons’. See also last year’s work on ‘The People’s Prescription’ by our allies in the UK, in cooperation with professor of economics Mariana Mazzucato.

  1. The idea of a knowledge pool is to organise the governance of knowledge by pooling intellectual property, data and other knowledge. This can accelerate the development of health technologies and thus stimulate affordable access for the public. In 2010 the Medicines Patent Pool was set up as a response to the unequal access to HIV/AIDS treatments in developing countries. It has proven to be a great success and now functions as a United Nations-backed public health organisation working to increase access to medicines for HIV, hepatitis C and tuberculosis.
  2. Knowledge commons refer to the institutionalized community governance of the sharing and, in some cases, creation, of information, science, knowledge, data, and other types of intellectual and cultural resources.

Distributed Curation: the commons handling complexity
Mon, 02 Jul 2018

A story about a wiki

Let me open by saying this is only a sketch – Michel Bauwens would probably want to elaborate, but I would like to mention only the very barest details here. Back around 2006, Michel started putting his notes about Peer-to-Peer and related ideas on the P2P Foundation wiki, and opened it up to trusted others to contribute as well. Naturally, after more than 12 years of committed input, there are thousands of pages, which have received millions of page views. Like many wikis, this can be seen as an information commons.

Can one person maintain, as well as continue contributing to, such a growing resource? At some point, any such venture can become a full time occupation, and at a later point, simply unfeasible for one person alone. Thus, from time to time, Michel has invited others to help organise and contribute to the pages, and the wiki as a whole. Leaving out personal details, this has not all been sweetness and light. It is all too easy to fall into the trap of wishing to impose one’s own personal structure, one’s own worldview, on any resource of which one shares control.

Beyond the pages themselves, the wiki (which runs on software similar to Wikipedia’s) allows pages to be given categories, and over the years Michel has written guide pages for many of these categories.

A story about community resources

Again, I will sketch out only the barest details, taken directly from life. The houses in the cohousing community that I live in are marvellously well-insulated, but small, and with little storage space: no lofts, garages or garden sheds. Coming from larger homes in an individualistic society, many of us bring literal baggage along with the habit of keeping collections of things that might be useful some time. The community does share guest rooms, a large dining and living space, a garden tool store, etc., so there are several areas where we don’t need to keep our own stuff.

But what about stuff like: books; envelopes; bags; fabrics and materials; glass jars; plastic containers; DIY tools and materials; boxes; camping equipment or any of the many things other people keep in their lofts, garages or garden sheds? We are committed to a low-energy future, where reuse and re-purposing are valued. But there is not enough space for us to keep more than a fraction of what we could potentially reuse. Can we make more of a material commons around these day-to-day resources, even if they look unimportant politically?

How are wikis like stuff we keep? Where are the commons here?

The truth is, in any highly complex system, each of us has at best only a partial and personal understanding of that complexity. We may be experts in our own field (however small) but know little of other people’s fields, and have only a vague overview. Or we may be the people with an overview of everything, but the more we devote ourselves to holding the overview in mind, the less mental space we have for all the details. So, are commons simple or complex? While each part of a commons may be simple enough to grasp, my guess is that, when taken together, the sum total of our potential commons is indeed highly complex, and far beyond the scope of what any one person can fully comprehend.

The lack of space in our homes simply serves to highlight the fact that in any case, most of us don’t have the time or energy to keep a well organised collection of jars, bottles, tools, equipment, and potentially reusable resources of all kinds. When we delve into the richness of a wiki like the P2P Foundation’s, the links in the chain rapidly lead us to areas where we know very little. That’s why it is useful! We gather and store information, as we do physical materials, not knowing when something might be useful. But can we find it (the material resource, the information) when we want to?

My proposition is that, first, we grasp that essential truth that this same pattern is increasingly common in our complex world. And, second, we recognise that we can do something very constructive about it. But it needs coordination, trust, and, maybe, something like a ‘commons’ mindset.

The sad version of the ending

Returning to our stories, what might happen next? It’s easy to imagine awkward, frustrating futures. The information we stored is no longer up to date. The links lead to 404 pages. The summaries, useful in their time, omit last year’s game-changing developments. Visitors don’t find them useful, and so they are not motivated to join in the curation. Our information commons initiative, once so promising and useful, gradually loses its value, and sooner or later it is effectively abandoned. We turn back to the monetised sources of information that are controlled by global capital.

We overfill our small homes with stuff that might come in handy one day. But because we don’t really have the proper space to organise the stuff, when we want something we can’t find it anyway. And we have less room in our heads, as well as our houses, trying to keep track of all the stuff. No one else can help us quickly, because they all suffer from the same difficulties. And no one has thought to keep those rare whatever-they-are-called things.

Alternatively, the space we use collectively to store our stuff gets fuller and fuller, and everything is harder to find. No one knows where everything is. People start moving other people’s stuff just to help them organise some other stuff. Either way, we don’t find what we’re looking for. So we go and order a new one. More consumption of energy, more resource depletion, worse environment, more climate change …

Articulating the commons of information and physical materials

So, let’s try for more positive narratives.

Anyone who turns up to use our information commons resource is invited to get to know someone here already. Soon we have an idea of what particular knowledge our newcomer has, in which areas. Through personal contact and discussion, and seeing some reliable behaviour, trust develops. We give them the task of revising the most out-of-date resource that is within their area of competence, interest, energy or enthusiasm. They make a good job of it. They get appreciative feedback, which motivates them to take on more, looking after a whole category. The resource, the commons, grows in real value, and more people come. ‘They’ become one of us. Repeat.

My neighbours and I get together to talk over our resources, and soon every kind of stuff has one or two people who volunteer to look after that kind of stuff. Now that I can trustingly pass on my unused books, my DIY materials, my plastic bottles and containers, and all the other ‘junk’ I have accumulated, I have enough space for a really well-organised collection of glass jars. Anyone with spare glass jars gives them to me. I know which ones there is demand for, and I pass the others on for recycling. When anyone has a sudden urge to make jam, I have plenty of jars ready for the occasion. I even keep a few unusual ones just in case, because I have the space. Every now and then, someone is really astonished that just what they need is there!

Let me, finally, try to describe the common pattern here, and contrast it with other possible patterns.

It’s different from having one big heap of resources which is everyone’s responsibility equally. No one knows which resources or areas they should take responsibility for, and there is anxiety about entrusting other people to look after other areas, because no one is clear how much attention is being given to what, and how much energy is being wasted looking over other people’s shoulders.

It’s different from a hierarchical control structure, because the people at the ‘top’ are less likely to have the on-the-ground feedback to know what a manageable, coherent collection is. Yes, perhaps it is possible to emulate a good commons with an enlightened hierarchical structure, but how do you know that some agent of global capital isn’t going to come right in and completely change the way things are done, imposing a confusing, alien world view, and promptly syphoning off the surplus value?

The common pattern – the pattern I am suggesting for complex commons – could be called “distributed curation”, and the vision is of a commons governed by consensus, and maintained through a culture that promotes the development of trust, along with the development of people to be worthy of that trust. It relies on personal knowledge and trust between people curating neighbouring areas, so that they can gracefully shift their mutual boundaries when times change, or allow a new area to grow between them. It relies on the natural, spontaneous differences in people’s interests, as well as the motivation for people to take on responsibility for deepening their own areas of knowledge within a community context, when trusted, encouraged, and given positive feedback and support by the community; and when they see the natural feedback of their actions benefiting other people.

I’m left with the question, how do we get there? My answers are few, and need much elaboration. Yes, we need to get to know each other, but how can we arrange to introduce people who will enjoy getting to know each other? Yes, we need to build up trust, but what kinds of activities can we do so that trust is built most reliably? Yes, we need to identify and negotiate people’s different patches of service and responsibility, but just how can we do that? Yes, we need to inspire people with a vision of distributed curation, but what language, and which media, are going to communicate that vision effectively?

Some discussion of this post is taking place on the Commons Transition Loomio Group

Photo by Simón73 melancólico

Bringing Back The Lucas Plan
Thu, 31 May 2018

Continuing our coverage of the Lucas Plan as a precursor to Design Global, Manufacture Local, this article explores “what the Lucas Plan could teach tech today”. By Felix Holtwell, republished from Notes from Below.

“We got to do something now, the company are not going to do anything and we got to protect ourselves”, proclaimed a shop steward at Lucas Aerospace when filmed for a 1978 Open University documentary.

He was explaining the rationale behind the so-called Alternative Corporate Plan, better known as the Lucas Plan. It was proposed by shop stewards in seventies England at the factories of Lucas Aerospace. To stave off pending layoffs, a shop steward committee established a plan that outlined a range of new, socially useful technologies for Lucas to build. With it, they fundamentally challenged the capitalist conception of technology design.

Essentially, they proposed that workers establish control over the design of technology. This bottom-up attempt at design, where not management and capitalists but workers themselves decided what to build, eventually failed. It was stopped by management, sidelined by struggling trade unions and the Labour Party, and eventually washed over by neoliberalism.

The seventies were a heady time: the preceding social-democratic, Fordist consensus ran into its own contradictions and died in the face of a triumphant neoliberalism. With it, experiments such as the Lucas Plan died as well. Today, however, neoliberalism is itself in crisis, and to bury it we should look back to precisely those experiments that failed decades ago.

Technology’s neoliberal crisis

One part of the crisis of neoliberalism is the crisis of its technology. The software and information technology sector, often denoted as “tech”, is facing widespread criticism and attacks, with demands for reform stretching wide across society.

Even an establishment publication such as The New York Times now runs a huge feature headlined “The Case Against Google”, about Google’s use of its near-monopoly on search to bury competitors’ sites.

Other controversies revolve around companies such as Facebook, Snapchat and Twitter making use of insights into human psychology to make people interact with their products more often and more intensely. This involves everything from gamifying social interaction through likes and making the notification button on Facebook red, to the ubiquity of unlimited vertical scrolling in mobile phone apps.

This has a number of consequences. Studies show that the presence of smartphones damages cognitive capacity, that Facebook use is negatively associated with well-being and that preteens with no access to screens for some time show better social skills than those with screen time.

In public discourse, this combines with fears that social media might harmfully impact political processes (basically Russia buying Facebook ads).

Or, as ex-Facebook executive Chamath Palihapitiya stated:

The short-term, dopamine-driven feedback loops we’ve created are destroying how society works, hearts, likes, thumbs-up. No civil discourse, no cooperation; misinformation, mistruth. And it’s not an American problem — this is not about Russian ads. This is a global problem.

Early employees and executives at Facebook and Google even created the Center for Humane Technology, which proposes more humane tech design choices. Their website states:

Our world-class team of deeply concerned former tech insiders and CEOs intimately understands the culture, business incentives, design techniques, and organizational structures driving how technology hijacks our minds.

Part of this is the usual worry about intergenerational change and technology, and part is a centrist consensus starting to fall apart, but there is a core truth in the worries about social media: the design of technology is political.

Technologies are designed by capitalist firms, and they do it for capitalist purposes, not for maximising human well-being. In the case of social media, it is designed to pull as much attention as possible into the platform and the ads shown on it.

As Chris Marcellino, a former Apple engineer who worked on the iPhone, has said:

It is not inherently evil to bring people back to your product, it’s capitalism.

The Lucas Plan

This brings us back to the Lucas Plan. At a time where the design of technology is under unprecedented scrutiny, a plan that pushes for workers’ control over it might be an answer.

The Plan was a truly remarkable experiment at the time. The University of Sussex’s Adrian Smith explains:

Over the course of a year they built up their Plan on the basis of the knowledge, skills, experience, and needs of workers and the communities in which they lived. The results included designs for over 150 alternative products. The Plan included market analyses and economic argument; proposed employee training that enhanced and broadened skills; and suggested re-organising work into less hierarchical teams that bridged divisions between tacit knowledge on the shop floor and theoretical engineering knowledge in design shops.

The Financial Times described the Lucas Plan as “one of the most radical alternative plans ever drawn up by workers for their company” (Financial Times, 23 January 1976). It was nominated for the Nobel Peace Prize in 1979. The New Statesman claimed (1 July 1977) that “the philosophical and technical implications of the plan are now being discussed on average twenty-five times a week in international media”.

The Lucas Plan eventually failed because of opposition from management, the trade union hierarchy and the government. Lucas Aerospace subsequently had to restructure and shed much of its workforce. Nevertheless, the plan provides great lessons for our current predicament.

Technology is political, yet its design is ultimately in the hands of capitalist firms. The Lucas Plan shows that workers, particularly in the more technically-oriented layers, have the skills and resources to design alternative technologies to those proposed by shareholders and management.

Workers’ control over the design of technology is thus a way to make it more ethical. Many of the problems we encounter with modern-day information technology are caused by unrestricted capitalist control over it, and workers’ control can be a necessary counterweight to push through human-centered design choices.

Composition

So how do we build a modern-day Lucas Plan? A plan reminiscent of the Lucas Plan for modern times needs, first and foremost, to be based on the present-day class composition of workers in tech.

Tech, and more precisely sectors focused on information technology and software, have a notoriously dual composition. On the one hand there are the (generally) highly paid top-end workers, mostly composed of programmers and people employed in fields such as marketing and management. On the other hand there are large armies of underpaid workers employed in functions such as moderation, electronics assembly, warehouse logistics or catering.

The first group has very peculiar characteristics. They are often taken in by the classic Silicon Valley ideology consisting of “lean startup” thinking, social liberalism, and the idea that they are improving the world. Materially, they are also different from large sections of the working class. They earn extremely high wages, are often highly educated, possess specific technical skills, are given significant stock options in their employers’ companies and are highly mobile, notorious for changing jobs very easily.

Besides that, many also have an aspiration to start their own startup one day, in line with Silicon Valley ideology. This adds a certain petty-bourgeois flavour to their composition.

Yet these workers also have their grievances. They are often employed in soul-crushing jobs at large multinationals, some of which (for example Amazon or Tesla) have the reputation of making them work as much as they can and then spitting them out, often in a state of burn-out.

On the other hand, there are subaltern sections of tech workers. These people moderate offensive content on Facebook, stack Amazon boxes in their “fulfillment centres”, drive people around on Uber and Lyft, assemble electronics such as iPhones or serve lunches at Silicon Valley corporate “campuses.”

These workers are generally underpaid, but conduct the drudging work that makes tech multinationals run. Without Facebook moderators watching horrible content all day, the platform would be flooded by it (and Facebook would have no one to train their AI on); without the fleet of elderly workers manning Amazon warehouses, packages would not get delivered; without the staff on Google and Facebook campuses, they would look a lot less utopian.

This section of workers can also be highly mobile between jobs, though less out of opportunity than out of precarity. They also have fewer ties to the tech sector specifically: whether they work at the warehouses of a self-styled tech company like Blue Apron or at the warehouses of any other company matters less to them than it does to programmers.

This bifurcation holds real problems for a modern-day Lucas Plan. If we simply move control over the design of technology from management and shareholders to a tech worker aristocracy, it might not solve much.

Yet there are some hopeful tendencies we can build on. Tech workers in Silicon Valley have started to bridge the divide that separates them, with organisations like the Tech Workers Coalition starting to help cafeteria workers organise.

A Guardian piece on their organising even observes some budding solidarity arising between these two groups:

Khaleed is proud of the work he does, and deeply grateful for the union. At first, he found it difficult to talk about his anxieties with coworkers at the roundtable. But he came to find it comforting: “We have solidarity, now.” A cost-of-living raise would mean more security, and a better chance of staying in the apartment where he lives. Khaleed deeply wants to be able to live near his son, and for his son to continue going to the good public school he now attends.

When I asked Khaleed how he felt about the two TWC Facebook employees he had met with, his voice faltered. “I just hope that someday I can help them like they helped me.” When I told one of the engineers, he smiled, and quoted the IWW slogan. “That’s the goal, right – one big union?”

This is precisely the basis on which a modern-day Lucas Plan should rest: solidarity between both groups of tech workers, and inclusion of both. The Lucas Plan of the 1970s understood this. The main authors of the Plan were predominantly highly educated engineers, but the people making the products were not. Hence they tried to bridge this gap with proposals that would humanise working conditions as well as technology, and by including shop-floor workers.

One shop steward, an engineer, declared at a public meeting, after showing how company plans dictated how long bathroom breaks could be:

We say that that form of technology is unacceptable, and if that is the only way to make that technology we should be questioning whether we want to make those kinds of products in that way at all.

Furthermore, the humanisation of work inside tech companies, and not just the end product of it, would also positively impact the work of the core tech workers. In essence, it would serve as the glue to connect both groups.

A Lucas Plan today would thus analyse the composition of tech workers at both sides of the divide, include both of them and mobilise them behind a program of humanisation of labour for themselves and humanised technology for the rest of society.

How to do it?

The practical implementation of workers’ control over design decisions can build on existing policies and experiences: mainly reformist co-determination schemes (where trade union officials are given seats on corporate boards) or direct-action-oriented tactics (where management power is challenged through workplace protest and workers establish a degree of workplace autonomy).

The choice between these tactics would need to be based on local working-class experience. In some contexts co-determination would make more sense; in others direct action would take precedence. In most cases a combination of the two will most likely be required.

The first option is a moderate one. Workers’ representation on company boards has long been common in industrialised economies, particularly in continental Europe. Even Conservative PM Theresa May proposed implementing it in 2017, before making a U-turn after business lobbying.

As TUC general secretary Frances O’Grady has stated:

Workers on company boards is hardly a radical idea. They’re the norm across most of Europe – including countries with similar single-tier board structures to the UK, such as Sweden. European countries with better worker participation tend to have higher investment in research and development, higher employment rates and lower levels of inequality and poverty.

Expanding the remit of these boards, in technologically oriented companies (both software firms and more traditional industrial ones), to also cover deciding what products to produce and how to design them would radicalise the otherwise non-radical idea of workers’ representation on company boards.

A second, more radical option is the establishment of workplace control through organising. A good example is the US longshoremen, who at certain points in their history have controlled their own work.

As Peter Cole writes in Jacobin:

West Coast longshoremen were “lords” because they earned high wages by blue-collar standards, were paid overtime starting with the seventh hour of a shift, and had protections against laboring under dangerous conditions. They even had the right to stop working at any time if “health and safety” were imperiled. Essentially, to the great consternation of employers, the union controlled much of the workplace.

The hiring hall was the day-to-day locus of union power. Controlled by each local’s elected leadership, the hall decided who would and wouldn’t work. Crucially, under the radically egalitarian policy of “low man out,” the first workers to be dispatched were those who had worked the least in that quarter of the year.

Imagine a programmer at Facebook refusing to make a button red because research shows it would not increase the well-being of users, and being backed up in this decision by a system of workplace solidarity that stretches throughout the company.

From bees to architects

Mike Cooley, one of the key authors behind the Lucas Plan, was fired from his job in 1981 as retaliation for union organising. Afterwards, he became a key author on humanising technology. He also worked with the Greater London Council when—during the height of Thatcherism—it was controlled by the Labour left, and where current Shadow Chancellor John McDonnell earned his spurs.

Just as McDonnell bridges the earlier, failed resistance to neoliberalism with our current attempts to replace it, Cooley is an inspiration for post-neoliberal technology. In a 1980 article he concluded:

The alternatives are stark. Either we will have a future in which human beings are reduced to a sort of bee-like behaviour, reacting to the systems and equipment specified for them; or we will have a future in which masses of people, conscious of their skills and abilities in both a political and a technical sense, decide that they are going to be the architects of a new form of technological development which will enhance human creativity and mean more freedom of choice and expression rather than less. The truth is, we shall have to make the profound decision as to whether we intend to act as architects or behave like bees.

These words ring true today more than ever.


About the author: Felix Holtwell is a tech journalist in real life. After dark, however, he edits Fully Automated Luxury Communism, a newsletter about the interactions between technology and the left. You can follow him on Twitter at @AutomatedFully.

Photo by OuiShare

The post Bringing Back The Lucas Plan appeared first on P2P Foundation.

The Lucas Plan: What can it tell us about democratising technology today?
https://blog.p2pfoundation.net/the-lucas-plan-what-can-it-tell-us-about-democratising-technology-today/2018/05/24
Thu, 24 May 2018

The post The Lucas Plan: What can it tell us about democratising technology today? appeared first on P2P Foundation.

Thirty-eight years ago, a movement for ‘socially useful production’ pioneered practical approaches for more democratic technology development.

It was in January 1976 that workers at Lucas Aerospace published an Alternative Plan for the future of their corporation. It was a novel response to management announcements that thousands of manufacturing jobs were to be cut in the face of industrial restructuring, international competition, and technological change. Instead of redundancy, workers argued their right to socially useful production.

Around half of Lucas’ output supplied military contracts. Since this depended upon public funds, as did many of the firm’s civilian products, workers argued that state support would be better put to developing more socially useful products.

Rejected by management and government, the Plan nevertheless catalysed ideas for the democratisation of technological development in society. In promoting their arguments, shop stewards at Lucas attracted workers from other sectors, community activists, radical scientists, environmentalists, and the Left. The Plan became symbolic for a movement of activists committed to innovation for purposes of social use over private profit.

Of course, the world is different now. The spaces and opportunities for democratising technology have altered, and so too have the forms it might take. Nevertheless, remembering older initiatives casts enduring questions about the direction of technological development in society in a different and informative light: questions relevant today in debates as varied as industrial policy, green and solidarity economies, commons-based peer production, and grassroots fabrication in Hackerspaces and FabLabs. The movement for socially useful production prompts questions about connecting tacit knowledge and participatory prototyping to the political economy of technology development.

In drawing up their Plan, shop stewards at Lucas turned initially to researchers at institutes throughout the UK. They received three replies. Undeterred, they consulted their own members. Over the course of a year they built up their Plan on the basis of the knowledge, skills, experience, and needs of workers and the communities in which they lived. The results included designs for over 150 alternative products. The Plan included market analyses and economic argument; proposed employee training that enhanced and broadened skills; and suggested re-organising work into less hierarchical teams that bridged divisions between tacit knowledge on the shop floor and theoretical engineering knowledge in design shops.

The Financial Times described the Lucas Plan as ‘one of the most radical alternative plans ever drawn up by workers for their company’ (Financial Times, 23 January 1976). It was nominated for the Nobel Peace Prize in 1979. The New Statesman claimed (1st July 1977): ‘The philosophical and technical implications of the plan are now being discussed on average of twenty five times a week in international media’. Despite this attention, shop stewards suspected (correctly) that the Plan in isolation would convince neither management nor government. Even leaders in the trade union establishment were reluctant to back this grassroots initiative, wary that its precedent would challenge privileged demarcations and hierarchies.

In the meantime, and as a lever to exert pressure, shop stewards embarked upon a broader political campaign for the right of all people to socially useful production. Mike Cooley, one of the leaders, said they wanted to ‘inflame the imaginations of others’ and ‘demonstrate in a very practical and direct way the creative power of “ordinary people”’. Lucas workers organised road-shows, teach-ins, and created a Centre for Alternative Industrial and Technological Systems (CAITS) at North-East London Polytechnic. Design prototypes were displayed at public events around the country. TV programmes were made. CAITS helped workers in other sectors develop their own Plans. Activists connected with sympathetic movements in Scandinavia and Germany.

The movement that emerged challenged establishment claims that technology progressed autonomously of society, and that people inevitably had to adapt to the tools offered up by science. Activists argued that knowledge and technology were shaped by social choices over their development, and that those choices needed to become more democratic. Activism cultivated spaces for participatory design; promoted human-centred technology; argued for arms conversion to environmental and social technologies; and sought more control for workers, communities and users in production processes.

Material possibilities were helped when Londoners voted the Left into power at the Greater London Council (GLC) in 1981. They introduced an Industrial Strategy committed to socially useful production. Mike Cooley, sacked from Lucas for his activism, was appointed Technology Director of the GLC’s new Greater London Enterprise Board (GLEB). A series of Technology Networks were created. Anticipating FabLabs today, these community-based workshops shared machine tools, access to technical advice, and prototyping services, and were open for anyone to develop socially useful prototypes. Other Left councils opened similar spaces in the UK.

Technology Networks aimed to combine the ‘untapped skill, creativity and sheer enthusiasm’ in local communities with the ‘reservoir of scientific and innovation knowledge’ in London’s polytechnics. Hundreds of designs and prototypes were developed, including electric bicycles, small-scale wind turbines, energy conservation services, disability devices, re-manufactured products, children’s play equipment, community computer networks, and a women’s IT co-operative. Designs were registered in an open access product bank. GLEB helped co-operatives and social enterprises develop these prototypes into businesses.

Recalling the movement now, what is striking is the importance activists attached to practical engagements in technology development as part of their politics. The movement emphasised tacit knowledge, craft skill, and learning by doing through face-to-face collaboration in material projects. Practical activity was cast as ‘technological agit prop’ for mobilising alliances and debate. Some participants found such politicisation unwelcome. But in opening prototyping in this way, activists tried to bring more varied participation into debates, and enable wider, more practical forms of expression meaningful to different audiences, compared to speeches and texts evoking, say, a revolutionary agent, socially entrepreneurial state, or deliberative governance framework.

Similarly today, Hackerspaces and FabLabs involve people working materially on shared technology projects. Social media opens these engagements in distributed and interconnected forms. Web platforms and versatile digital fabrication technologies allow people to share open-hardware designs and contribute to an emerging knowledge commons. The sheer fun participants find in making things is imbued by others with excited claims for the democratisation of manufacturing and commons-based peer production. Grassroots digital fabrication (pdf) rekindles ideas about direct participation in technology development and use.

Wherever and whenever people are given the encouragement and opportunity to develop their ideas into material activity, then creativity can and does flourish. However, remembering the Lucas Plan should make us pause and consider two issues. First, the importance placed on tacit knowledge and skills. Skilful design in social media can assist but not completely substitute face-to-face, hand-by-hand activity. Second, for the earlier generation of activists, collaborative workshops and projects were also about crafting solidarities. Project-centred discussion and activity was linked to debate and mobilisation around wider issues.

Workers at Lucas Industries, Shaftmoor Lane branch, Birmingham, 1970. Photograph: Lucas Memories website, lucasmemories.co.uk.

With hindsight, the movement was swimming against the political and economic tide, but at the time things looked less clear-cut. The Thatcher government eventually abolished the GLC in 1986. Unionised industries declined, and union power was curtailed through legislation. In overseeing this, Thatcherism knowingly cut material and political resources for alternatives. In doing so, the diversity so important to innovation diminished. The alliances struck, the spaces created and the initiatives generated were swept aside as concern for social purpose became overwhelmed by neoliberal ideology. The social shaping of technology was left to market decision.

However, even though activism dissipated, its ideas did not disappear. Some practices had wider influence, such as participatory design, albeit in forms appropriated to the needs of capital rather than the intended interests of labour. Historical reflection thus prompts a third issue: how power relations matter and need to be addressed in democratic technology development. When making prototypes becomes accessible and fun, people can exercise a power to do innovation. But this can still struggle to exercise power over the agendas of elite technology institutions, such as which innovations attract investment for production and marketing, and under what social criteria. Alternative, more democratic spaces for technology development and debate nevertheless remain valuable.

Like others before and since, the Lucas workers insisted upon a democratic development of technology. Their practical, material initiatives momentarily widened the range of ideas, debates and possibilities – some of which persist. Perhaps their argument was the most socially useful product left to us?


Adrian Smith researches the politics of technology, society and sustainability at SPRU and the STEPS Centre at the University of Sussex. He is on Twitter @smithadrianpaul. A longer paper on the Lucas Plan is available at the STEPS site.

Originally published in The Guardian.

Photo by Daniel Kulinski

Summer of Commoning 3: The Assembly of the Commons of Grenoble
https://blog.p2pfoundation.net/summer-of-commoning-3-the-assembly-of-the-commons-of-grenoble/2017/11/29
Wed, 29 Nov 2017

The post Summer of Commoning 3: The Assembly of the Commons of Grenoble appeared first on P2P Foundation.

During the summer of 2017, I travelled throughout France. Now I am sharing the stories of the commons I met along the way, never knowing what I would find in advance. These articles were originally published in French here: Commons Tour 2017. The English translations are also compiled in this Commons Transition article.

The Assembly of the Commons of Grenoble: building the city together

It was with great pleasure that I met Anne-Sophie and Antoine during my journey, while taking a break in the beautiful city of Grenoble. We happily shared the practices of the Lille and Grenoble assemblies of the commons over a coffee at a sidewalk cafe.

Anne-Sophie and Antoine were both elected to positions in city hall. They shared stories with me of citizens engaged in a dynamic of counterpower and, after being elected in 2014, of their difficulty in taking on an institutional posture. Changing culture is not always easy! But this is what also makes the Grenoble Assembly of the Commons so special, born of the meeting of two dynamics.

The first of these two comes from Nuit Debout, within which a “Commission of the Commons” was created in 2016. The idea was to discuss the management of commons as a common responsibility: not only the responsibility of public authorities, but also of the area’s inhabitants.

The second dynamic, on the part of city hall, was the philosophically interesting idea of investing in a space between the private and the public, to make room for citizens in the public debate. The key here is that this idea has not been abandoned at all, in fact it unites activists and elected representatives in the same assembly today.

Last March, during the Biennale of Cities in Transition, partners and associations were invited to the assembly. About fifty people from various backgrounds participated in this first assembly, including Sylvia Fredricksson and Michel Briand, both well-known French commoners who came to share their experiences.

What the elected representatives underline is that even if they have the will to move towards greater citizen involvement in public life, it is not so simple. Legislation is not adapted at all, particularly with regard to risk management (the insurance framework does not exist). On top of that, officials are not very aware of these issues, and not trained to work directly with citizens. Faced with this, the elected representatives asked the city’s services to work on these points and to advance the relevant texts and practices.

Nevertheless, among the completed projects at the town hall level, there have been agreements created for occupying public spaces such as shared gardens, for example. The assembly also discussed the idea of writing a charter on housing, a bit like in Bologna (Italy), where a charter of urban commons was drafted and signed by some forty Italian cities.

The city also participates in a “migrants’ platform” to support reception initiatives. There are also participatory budgets: every year, €800K in investment is opened up to citizens’ projects; 106 projects from the Grenoble region were selected in 2017. On the cultural side, there is the desire to take art out of museums with the Street Art Festival, whose traces can be found all over the city’s walls.

To date, the Assembly of the Commons has set up four separate working groups which meet asynchronously at regular intervals:

  • Natural Commons
  • Knowledge Commons
  • Urban Commons
  • Commons of Health and Well-being

The spirit of commons in Grenoble has a long history. After the Second World War, unlike many other places, the city had, for quite a while, retained its own operators to manage electricity and water, which made it a very special case.

After being privatized in the 1980s, water came back into the public domain after a lengthy legal battle fought by citizens together with certain elected environmental officials and some employees of the water authority. This was the first battle won in France for water municipalization, along with the first French users’ committee, created to make citizens’ involvement in water management last. People come from all over the world to visit Grenoble for its water management model. On the electricity and gas side, the operator is a mixed-economy company, but the public (the city of Grenoble) is still the majority shareholder.

This civic expertise and spirit of solidarity continue today, and are embodied in the city’s desire to be part of a concrete, lasting relationship between two communities that “do with others”, all the others…

Designing a fair and sustainable system of academic publishing
https://blog.p2pfoundation.net/designing-a-fair-and-sustainable-system-of-academic-publishing/2016/07/28
Thu, 28 Jul 2016

The post Designing a fair and sustainable system of academic publishing appeared first on P2P Foundation.


TL;DR: Almost everyone thinks academic publishing needs to change. What would a better system look like? Economist Elinor Ostrom gave us design principles for an alternative – a knowledge commons, a sustainable approach to sharing research more freely. This approach exemplifies using economic principles to design a digital platform. 

A special contribution by Jimmy Tidey, cross-posted from his blog:

Why is this relevant right now? 

The phrase ‘Napster Moment’ has been used to describe the current situation. Napster made MP3 sharing so easy that the music industry was forced to change its business model. The same may be about to happen to academic publishing.

In a recent Science Magazine reader poll (admittedly unrepresentative), 85% of respondents thought pirating papers from illicit sources was morally acceptable, and about 25% said they did so weekly.

Elsevier – the largest for-profit academic publisher – is fighting back. They are pursuing the SciHub website through the courts. SciHub is the most popular website offering illegal downloads, and has virtually every paper ever published.

In another defensive move, Elsevier has recently upset everyone by buying Social Science Research Network – a highly successful not-for-profit website that allowed anyone to read papers for free.

Institutions that fund research are pushing for change, fed up with a system where universities pay for research, but companies like Elsevier make a profit from it. Academic publishers charge universities about $10Bn a year, and make unusually large profits.

In the longer term, the fragmentation of research publishing may be unsustainable. Over a million papers are published every year, and research increasingly requires academics to understand multiple fields. New search tools are desperately needed, but they are impossible to build when papers are locked away behind barriers.

How should papers be published? Who should pay the costs, and who should get the access? Economist and Nobel laureate Elinor Ostrom pioneered the idea of a knowledge commons to think about these questions.

What is a knowledge commons? 

A commons is a system where social conventions and institutions govern how people contribute to and take from some shared resource. In a knowledge commons that resource is knowledge.

You can think of knowledge, embodied in academic papers, as an economic resource just like bread, shoes or land. Clearly knowledge has some unique properties, but this assumption is a useful starting point.

When we are thinking about how to share a resource, Elinor Ostrom, in common with other economists, asks us to think about whether the underlying resource is ‘excludable’ or ‘rivalrous’.

If I bake a loaf of bread, I can easily keep it behind a shop counter until someone agrees to pay money in exchange for it – it is excludable. Conversely, if I build a road it will be time consuming and expensive for me to stop other people from using it without paying – it is non-excludable.

If I sell the bread to one person, I cannot sell the same loaf to another person – it is rivalrous. However, the number of cars using a road makes only a very small difference to the cost of providing it. Roads are non-rivalrous (at least until traffic jams take effect).

                 Excludable                            Non-excludable
Rivalrous        Market goods:                         Common-pool resources:
                 bread, shoes, cars                    fish stocks, water
Non-rivalrous    Club goods:                           Public goods:
                 gyms, toll roads                      national defence, street lighting
                 (academic papers)                     (academic papers)

Most economists think markets (where money is used to buy and sell; top left in the grid) are a good system for providing rivalrous, excludable private goods – bread, clothes, furniture etc. – perhaps with social security in the background to provide for those who cannot afford necessities.

But if a good is non-rivalrous, non-excludable, or both, things get more complicated, and markets become less effective. This is why roads are usually provided by a government rather than a market – though for-profit toll roads do exist.

The well-known ‘tragedy of the commons’ is an example of this logic playing out. The thought experiment concerns a rivalrous, non-excludable natural resource – often the example given is a village with a common pasture shared by everyone. Each villager has an incentive to graze as many sheep as they can on the shared pasture, because then they will have nice fat sheep and plenty of milk. But if everyone behaves this way, unsustainably large flocks of sheep will collectively eat all the grass and destroy the common pasture.

The benefit accrues to the individual villager, but the cost falls on the community as a whole. The classic economic solution is to put fences up and turn the resource into an excludable, market-based system: each villager gets a section of the common to own privately, which they can buy and sell as they choose.
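The incentive structure of the thought experiment can be sketched in a few lines of code. This is a minimal illustration with purely hypothetical numbers (not from the text): the pasture's per-sheep yield falls as total grazing approaches its capacity, so defection pays individually but ruins everyone collectively.

```python
# Illustrative sketch of the tragedy of the commons (hypothetical numbers).

def per_sheep_yield(total_sheep, capacity=100):
    """Grass available per sheep; collapses to zero at the pasture's capacity."""
    return max(0.0, 1.0 - total_sheep / capacity)

def villager_payoff(own_sheep, others_sheep):
    """One villager's return from grazing own_sheep alongside everyone else's."""
    return own_sheep * per_sheep_yield(own_sheep + others_sheep)

# Ten villagers. If everyone restrains themselves to 5 sheep, grazing is sustainable:
print(villager_payoff(5, 45))    # 2.5 each

# One defector doubles their flock while the other nine hold back -- defection pays:
print(villager_payoff(10, 45))   # ~4.5 for the defector

# But if all ten defect, the pasture is grazed to capacity and yields nothing:
print(villager_payoff(10, 90))   # 0.0 for everyone
```

Ostrom's point, discussed below, is that fencing the pasture is not the only way out of this trap: the villagers can also agree grazing rules among themselves and enforce them.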

Building and maintaining fences can be very expensive – if the resource is something like a fishing ground, it might even be impossible. The view that building a market is the only good solution has been distilled into an ideology, and, as is discussed later, that ideology led to the existence of the commercial academic publishing industry. As the rest of this post will explain, building fences around knowledge has turned out to be very expensive.

Ostrom positioned herself directly against the ‘have to build a market’ point of view. She noticed that in the real world, many communities do successfully manage commons.

Ostrom’s Law: A resource arrangement that works in practice can work in theory.

She developed a framework for thinking about the social norms that allow effective resource management across a wide range of non-market systems, a much more nuanced approach than the stylised tragedy of the commons thought experiment. Her analysis calls for a more realistic model of the villagers, who might realise that the common is being overgrazed, call a meeting, and agree a rule about how many sheep each person is allowed to graze. They are designing a social institution.

If this approach can be made to work, it saves the cost of maintaining the fences, but avoids the overgrazing that damages the common land.

The two by two grid above has the ‘commons’ as only one among four strategies. In reality, rivalry and excludability are questions of degree, and can be changed by making different design choices.

For this analysis, it’s useful to use the word ‘commons’ as a catchall for non-market solutions.

Ostrom and Hess published a book of essays, Understanding Knowledge as a Commons, arguing that we should use exactly this approach to understand and improve academic publishing. They argue for a ‘knowledge commons’.

The resulting infrastructure would likely be one or more web platforms. The design of these platforms will have to take into account the questions of incentives, rivalry and exclusion discussed above.

What would a knowledge commons look like? 

Through extensive real world research, Ostrom and her Bloomington School derived a set of design principles for effectively sharing common resources:

  1. Define clear group boundaries.
  2. Match rules governing use of common goods to local needs and conditions.
  3. Ensure that those affected by the rules can participate in modifying the rules.
  4. Make sure the rule-making rights of community members are respected by outside authorities.
  5. Develop a system, carried out by community members, for monitoring members’ behavior.
  6. Use graduated sanctions for rule violators.
  7. Provide accessible, low-cost means for dispute resolution.
  8. Build responsibility for governing the common resource in nested tiers from the lowest level up to the entire interconnected system.

These principles can help design a system where there is free access while preventing collapse from abusive treatment.

Principle 1 is already well addressed by the existence of universities, which give us a clear set of internationally comparable rules about who is officially a researcher in which area – doctorates, professorships etc. These hierarchies could also indicate who should participate in discussions about designing improvements to the knowledge commons, in accordance with principles 2 and 3. This is not to say that non-academics would be excluded, but that there is an existing structure which could help with decisions such as who is qualified to carry out peer review.

In a knowledge commons utopia, all the academic research ever conducted would be freely available on the web, along with all the related metadata – authors, dates, who references whom, citation counts etc. A slightly more realistic scenario might have all the metadata open, plus papers published from now forward.

This dataset would allow innovations that could address many of these design principles. In particular, in accordance with 5, it would allow for the design of systems measuring ‘demand’ and ‘supply’.  Linguistic analysis of papers might start to shine a light on who really supplies ideas to the knowledge commons, by following the spread of ideas through the discourse. The linked paper describes how to discover who introduces a new concept into a discourse, and track when that concept is widely adopted.
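As a minimal sketch of that ‘who introduced this concept’ idea – the corpus, author names and single-word matching below are all invented for illustration; real analyses use phrase detection over millions of dated papers – one could record the earliest dated paper in which each term appears:

```python
def first_introductions(papers):
    """Map each term to the earliest (year, author) that used it.

    papers: iterable of (year, author, text) tuples.
    """
    first_use = {}
    for year, author, text in sorted(papers):  # earliest year first
        for term in set(text.lower().split()):
            first_use.setdefault(term, (year, author))
    return first_use

# Invented toy corpus of dated papers
corpus = [
    (2003, "bob", "building on graduated sanctions we extend the model"),
    (2001, "alice", "we introduce graduated sanctions for rule violators"),
    (2004, "carol", "graduated sanctions are now a standard tool"),
]

print(first_introductions(corpus)["sanctions"])  # → (2001, 'alice')
```

Terms whose first use traces back to a few authors, and which later spread widely, would mark those authors as net suppliers of ideas to the commons.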

This could augment crude citation counts, helping identify those who provide a supply of new ideas to the commons. What if we could find out what papers people are searching for, but not finding? Such data might proxy for ‘demand’ – telling researchers where to focus their creative efforts.

Addressing principle 6, there is much room for automatically detecting low-quality ‘me-too’ papers, or outright plagiarism. Or perhaps it would be appropriate to establish a system where new authors have to be sponsored by existing authors with a good track record – a system which the preprint site arXiv currently implements. (Over-publication is interestingly similar to overgrazing of a common pasture: abusing the system for personal benefit at the cost of the group.)
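A crude sketch of that kind of duplicate detection is bag-of-words cosine similarity – the texts below are invented, and production systems would use shingling, citation overlap and stylometry rather than raw word counts:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between the word-count vectors of two texts."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

original = "effective governance of common pool resources requires graduated sanctions"
near_copy = "governance of common pool resources requires graduated sanctions to be effective"
unrelated = "the little prince waters his flower and cleans his volcanoes"

print(round(cosine_similarity(original, near_copy), 2))  # high – flag for review
print(round(cosine_similarity(original, unrelated), 2))  # → 0.0
```

Papers scoring suspiciously close to existing ones could then be queued for human review rather than rejected automatically.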

Multidisciplinary researchers could benefit from new ways of aggregating papers that do not rely on traditional journal-based categories; visualisations of networks of papers might help us orient ourselves in new territory more quickly.

All of these innovations, and many others that we cannot foresee, require a clean, easily accessible data set to work with.

These are not new ideas. IBM’s Watson is already ingesting huge amounts of medical research to deliver cancer diagnoses and generate new research questions. But the very fact that only companies with the resources of IBM can get at this data confirms the point about the importance of the commons. Even then, they are only able to look at a fraction of the total corpus of research.

But is the knowledge commons feasible?

How, in practical terms, could a knowledge commons be built?

Since 1665, the year the Royal Society began publishing Philosophical Transactions, about 50 million research papers have been published. As a back-of-an-envelope calculation, that’s about 150 terabytes of data, which would cost around $4,500 a month to store on Amazon’s cloud servers. Obviously just storing the data is not enough – so is there a real-world example of running this kind of operation?
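The back-of-envelope can be written out explicitly – every figure here is an assumption (roughly 50 million papers, about 3 MB per paper, and about $0.03 per GB-month for standard cloud object storage at the time of writing):

```python
# Back-of-envelope storage cost for every research paper ever published.
papers = 50_000_000
avg_size_mb = 3            # assumed average size of a PDF plus metadata
price_per_gb_month = 0.03  # assumed standard cloud storage rate, $/GB-month

total_gb = papers * avg_size_mb / 1_000   # 150,000 GB = 150 TB
monthly_cost = total_gb * price_per_gb_month

print(total_gb / 1_000)     # → 150.0 (terabytes)
print(round(monthly_cost))  # → 4500 (dollars per month)
```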

Wikipedia stores a similar total amount of data (about 40 million pages). It also has functionality that supports about 10 edits to those pages every second, and is one of the 10 most popular sites on the web. Including all the staffing and servers, it costs about $50 million per year.

That is less than 5% of what the academic publishing industry charges every year. If the money that universities spend on access to journals was saved for a single year, it would be enough to fund an endowment that would make academic publishing free in perpetuity – a shocking thought.
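A one-line check of that shocking thought – the 4% sustainable draw rate is an assumption, not a figure from the post:

```python
# Could one year of journal fees endow free academic publishing in perpetuity?
industry_annual = 10_000_000_000  # ~$10bn/year industry revenue (figure above)
platform_annual = 50_000_000      # ~$50M/year for a Wikipedia-scale platform
draw_rate = 0.04                  # assumed sustainable endowment payout rate

endowment_needed = platform_annual / draw_rate        # $1.25bn
years_of_fees_needed = endowment_needed / industry_annual

print(round(endowment_needed / 1e9, 2))  # → 1.25 (billions of dollars)
print(round(years_of_fees_needed, 3))    # → 0.125 (fraction of one year's fees)
```

Even at these rough numbers, about one eighth of a single year’s fees would suffice, so the claim holds with room to spare.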

What’s the situation at the moment? 

Universities pay for the research that results in academic papers. Where papers are peer-reviewed, the reviewing is mostly done by salaried university staff who don’t charge publishers for their time. The cost of producing a paper, to an academic publisher, is therefore more or less typesetting plus admin.

Yet publishers charge what are generally seen as astronomical fees. An annual licence to access a single journal often costs many thousands of pounds. University libraries, which may have access to thousands of journals, pay millions each year in these fees. As a member of the public, you can download a paper for about $30 – and a single paper is often valueless without the network of papers it references. The result is an industry worth about $10bn a year, with profit margins often estimated at 40%. (Excellent detailed description here.)

I’ve heard stories of academics having articles published in journals their university does not have access to. They can write the paper, but their colleagues cannot subsequently read it – which is surely the opposite of publishing.  There are many papers that I cannot access from my desk at the Royal College of Art, because the university has not purchased access. But RCA has an arrangement with UCL allowing me to use their system. So I have to go across town just to log onto the Internet via UCL’s wifi. This cannot make sense for anyone.

I’m not aware of any similar system. It’s a hybrid of public funding plus a market mechanism. Taxpayers’ money is spent producing what looks like a classic public or commons good (knowledge embodied in papers): free to everyone, non-rival and non-excludable. That product is then handed over to a private company, for free, and the private company makes a profit by selling it back to the organisations that produced it. Almost no one (except the publishers) believes this represents value for money.

Overall, in addition to being a drain on the public purse, the current system fragments papers and associated metadata behind meaningless artificial barriers.

How did it get like that?

Nancy Kranich, in her essay for the book Understanding Knowledge as a Commons, gives a useful history. She highlights the Reagan-era ideological belief (mentioned earlier) that the private sector is always more efficient, plus the short-term incentive of the one-time profit a university gets by selling its in-house journal. That seems to be about the end of the story, although in another essay in the same book Peter Suber points out that many high-level policy makers do not know how the system works – which might also be a factor.

If we look to Ostrom’s design principles, we cannot be surprised at what has happened. Virtually all the principles (especially 4, 7 and 8) are violated when a commons is dominated by a small number of politically powerful, for-profit institutions that rely on appropriating resources from it. It’s analogous to the way industrial fishing operations continually frustrate legislation designed to prevent ecological disaster in overstrained fishing grounds by lobbying governments.

What are the current efforts to change the situation?

In 2003 the Bethesda Statement on Open Access indicated that the Howard Hughes Medical Institute and the Wellcome Trust, which between them manage endowments of about $40bn, wanted research they fund to be published Open Access – and that they would cover the costs. This seems to have set the ball rolling, although the international situation is too complex to unravel easily.

Possibly charities led the way because they are free of the ideological commitments of governments described by Kranich, and less vulnerable to lobbying by publishers.

Focusing on the UK: since 2013, Research Councils UK (which disburses about £3bn to universities each year) has insisted that work it funds should be published Open Access. The details, however, make this rule considerably weaker than you might expect. RCUK recognises two kinds of Open Access publishing.

With Gold Route publishing, a commercial publisher will make the paper free to access online, and publish it under a Creative Commons licence that allows others to do whatever they like with it – as long as the original authors are credited. The publisher will only do this if it is paid – rates vary, but can be up to £5,000 per paper. RCUK has made a £16 million fund available to cover these costs.

Green Route publishing is a much weaker form of Open Access. The publisher grants the academics who produced the paper the right to “self-archive” – i.e. make their paper available through their university’s website – under a Creative Commons licence that allows other people to use it for any non-commercial purpose, as long as they credit the author. However, there can be an embargo of up to three years before the academics are allowed to self-archive their paper. There are also restrictions on which sites they can publish the paper on – for example, they cannot publish it to a site that mimics a conventional journal. Whether sites such as Academia.edu are acceptable is currently the subject of debate.

Is it working?

In 1995, Forbes predicted that commercial academic publishers had a business model about to be destroyed by the web. That makes sense: after all, the web was literally invented to share academic papers. Yet here we are, 21 years later, and academic publishers still exist and still command enormous valuations. Their shareholders clearly don’t think they are going anywhere.

Elsevier is running an effective operation to prevent innovation by purchasing competitors (mendeley.com) or threatening them with copyright actions (academia.edu and SciHub). Even if newly authored papers are published open access, the historical archive will remain locked away. However, there is change.

Research Councils UK carried out an independent review in 2014, in which nearly all universities reported publishing at least 45% of their papers as open access (via green or gold routes) – though the report is at pains to point out that most universities don’t keep good records of how their papers are published, so this figure could be inaccurate.

In fact the UK is doing a reasonable job of pursuing open access, and globally things are slowly moving in the right direction. Research is increasingly reliant on preprints hosted on sites like arXiv, rather than official journals, which move too slowly.

Once a database of the 50 million academic papers is gathered in one place (which SciHub may soon achieve) it’s hard to see how the genie can be put back in the bottle.

If this is a ‘Napster moment’, the question is what happens next. Many people thought that MP3 sharing was going to be the end of the commercial music industry. Instead, Apple moved in and made a service so cheap and convenient that it displaced illicit file sharing. Possibly commercial publishers could try the same trick, though they show no signs of making access cheaper or more convenient.

Elinor Ostrom’s knowledge commons shows us that there is a sustainable, and much preferable, alternative: one that opens the world’s knowledge to everyone with an Internet connection, and provides an open platform for innovations that can help us deal with the avalanche of academic papers published every year.

The post Designing a fair and sustainable system of academic publishing appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/designing-a-fair-and-sustainable-system-of-academic-publishing/2016/07/28/feed 0 58424
A New Frontier: Book Publishing as a Commons https://blog.p2pfoundation.net/a-new-frontier-book-publishing-as-a-commons/2015/12/29 https://blog.p2pfoundation.net/a-new-frontier-book-publishing-as-a-commons/2015/12/29#comments Tue, 29 Dec 2015 10:07:12 +0000 http://blog.p2pfoundation.net/?p=53224 For authors and their reader-communities, has conventional book publishing become obsolete or at least grossly inefficient and overpriced?  I say yes — at least for those of us who are not writing mass-audience books. The good news is that authors, their reader-communities and small presses are now developing their own, more satisfying alternative models for... Continue reading

The post A New Frontier: Book Publishing as a Commons appeared first on P2P Foundation.

]]>
For authors and their reader-communities, has conventional book publishing become obsolete or at least grossly inefficient and overpriced?  I say yes — at least for those of us who are not writing mass-audience books. The good news is that authors, their reader-communities and small presses are now developing their own, more satisfying alternative models for publishing books.

Let me tell my own story about two experiments in commons-based book publishing.  The first involves Patterns of Commoning, the new anthology that Silke Helfrich and I co-edited and published two months ago, with the crucial support of the Heinrich Böll Foundation. The second experiment involves the Spanish translation for my 2014 book Think Like a Commoner. 

Whereas the German version of Patterns of Commoning was published with transcript-Verlag, a publisher we consider a strong partner in spreading the word on the commons, for the English version, we decided to bypass commercial publishers.  We realized that none of them would be interested – or that they would want to assert too much control at too high of a price.

We learned these lessons when we tried to find a publisher for our 2013 anthology, The Wealth of the Commons.  About a dozen publishers rejected our pitches.  They said things like:  “It’s an anthology, and anthologies don’t sell.”  “It doesn’t have any name-brand authors.”  “It’s too international in focus.”  “What’s the commons?  No one knows about that.” 

It became clear that the business models of publishers – even the niche political presses that share our values – were not prepared to support a well-edited, path-breaking volume on the commons.

In general, conventional book publishing has trouble taking risks with new ideas, authors and subject matter because it has very small economic margins to play with.  One reason is that commercial book distributors in the US – the companies that warehouse books and send them to various retailers – take 60% of the cover price, with little of the risk. They are the expensive middlemen who control the distribution infrastructure. Their cut leaves about 40% of the cover price or less for the publisher, author and retailer to split.

This arrangement means that book prices have to be artificially higher, relative to actual production costs, to cover all the costs of so many players:  editors, marketers, publicists, distributors, retailers.
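Made concrete for a $15 cover price – the 60% distributor cut is the figure above, while the ~8% author royalty is an assumed mid-point of the “scant 7-10%” typical of publishing deals:

```python
# Illustrative split of a $15 cover price under the conventional US model.
cover_price = 15.00
distributor = cover_price * 0.60   # warehousing + distribution (figure above)
author = cover_price * 0.08        # assumed typical royalty (~7-10% of cover)
publisher_and_retailer = cover_price - distributor - author

print(round(distributor, 2), round(author, 2), round(publisher_and_retailer, 2))
# → 9.0 1.2 4.8
```

On these rough numbers the distributor takes more than the publisher, retailer and author combined, which is why cover prices sit so far above production costs.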

So how did we bypass this costly apparatus and assert control of the publishing process?  How did we produce an affordable, highly shareable 400-page book? By looking to our international community of commoners.

We did a private crowdfunding outreach to solicit advance bulk orders — $10/copy in increments of ten.  This raised enough money to finance about half of the cost of the print run of 2,000 copies. Silke and I personally paid for the rest of the print run, which we expect to recoup after selling a few hundred copies. We were able to reclaim control over what happens with our book and avoid the strict limitations imposed by conventional publishing business models.

We built on the logic of commoning:  First, build community (which took years of work), then support each other.  This is both more efficient and socially satisfying than relying on highly consolidated corporate markets that require ever-escalating prices, control and sales.

We had had a great experience in publishing The Wealth of the Commons in 2012 via Levellers Press, a regional press that took a chance on our book. Levellers was started a few years ago by its parent company, Collective Copies, a worker-owned, movement-friendly photocopy business in Amherst, Massachusetts, US.

So when it came time to publish Patterns of Commoning, we could have published with Levellers, but decided this time to go with Levellers’ self-publishing arm, Off the Common Books.  The big difference was that we, as authors/editors, put up the money ourselves to print and distribute the books.  Off the Common Books then sells and ships the books for a modest per-book fee.

Publishing Patterns of Commoning ourselves has been a wonderful liberation from the costly, unresponsive machinery of traditional publishing.  Even though our book is not available through most bricks-and-mortar bookstores, that’s okay; very few books are.  Patterns of Commoning can be bought directly through the Off the Common Books website  – our preferred source – or through Amazon (not preferred, but it’s hard to reach general book buyers otherwise).

Because our overhead costs are so low, we were able to keep the price of our book at $15 – much lower than a conventional publisher would charge – while pocketing higher revenues than typical publishing deals (a scant 7-10% of the cover price).  We can break even sooner, and enjoy fewer risks and costs because we have a smaller press run.

The Power of Commoning Over Marketing

Then there is marketing.  The authors of books usually end up doing most of the marketing because they know their reader-community better than most publishers.  Authors are motivated to reach out to readers, but US publishers entering a publishing season often have “more important” titles to promote than one’s own book.  “Lesser titles” are often left to fend for themselves.

When I published a book (with coauthor Burns Weston) with the esteemed Cambridge University Press, several people cycled through the job of marketing my book in the course of only two years.  The Press initially charged $85 for the hardback copy (now $55) because it apparently sees university libraries as its primary market for hardcover sales. When it was time to publish a cheaper ($35) paperback, the Press refused to correct the typos (“too expensive”) or even include an errata sheet.

Self-publishing in collaboration with our commons network let us avoid all of these problems.  We have been able to rely on our own network of commoners, Web visibility and word-of-mouth recommendations – avoiding the expensive and mediocre outreach and promotion that many publishers do.  We have also been able to use a Creative Commons license (in our case, a CC BY-SA 4.0 license), which authorizes foreign translations for free and lets us post the entire book online. (Chapters will be posted in the next month or two.)   We value impact and connection over profit.

Of course, as non-academics, Silke and I don’t need to worry about the perceived prestige of a publisher.  Our careers are not dependent upon getting published with the most respected academic presses, which may also be expensive, averse to Creative Commons licenses, and focused on traditional marketing approaches.

I’ve published more than ten books with ten publishers in my career, and I’ve never had a happier publishing experience than with Levellers/Off the Common Books.  Steve Strimer, the publisher, is a wonderful guy who understands the commons and is creative and flexible in trying out new ideas.  The press can do print-on-demand, small-scale print runs for books, which means that its overhead costs are small, allowing it to turn a profit after selling only 200 to 300 copies of a book.  Steve takes pride in saying that he is one of the only for-profit US publishers of poetry, one of the most notoriously unprofitable genres of writing there is.  (Levellers’ poetry imprint is Hedgerow Books.) http://hedgerowbooks.net

I do think this commons-based model represents a superior commercial model for movement-oriented books, so long as you have sufficient in-house editorial and production know-how.  You can make your own choices about editorial content, control your own marketing, reap more of the revenues from sales, and use a Creative Commons license.  You don’t have to forfeit so much to a publisher and the commercial distribution apparatus.  In our case, it was crucial to have a partner like Off the Common Books, an author-friendly, movement-oriented cooperative.

I think the next step for commons-oriented publishing is to invent new sorts of cooperative book distribution systems so that small presses can avoid the crippling fees charged by the conventional book distributors.  A modest editorial and production infrastructure for a press could accomplish a great deal for very little money. (For those who read German: my colleague Silke Helfrich elaborated a bit on that idea and calls for a Commons Publishing or a Publishing Commons.)

Another Cooperative Publishing Experiment, in Spain

Let me quickly mention a second commons-based book publishing experiment now underway.  This project revolves around the Spanish translation for my 2014 book, Think Like a Commoner.  The book is licensed under a CC BY-SA license, which means that translators can do a translation for free.  So far, there are translations in French, Italian, Polish and Korean – with a Chinese one in progress.

Last year, a consortium of commons-oriented groups in Madrid organized by Guerrilla Translation – Stacco Troncoso and Ann Marie Utratel – decided that they wanted to translate Think Like a Commoner as a collaborative project.  The consortium includes the Medialab-Prado, free software publisher Traficantes de Sueños, the commons crowdfunding website Goteo and the translation team of Georgina Reparado, Susa Oñate and Lara San Mamés.

This group, coordinated by Guerrilla Translation’s Xana Libânio, is mounting a crowdfunding campaign to pay for the translation, campaign management, and book editing and design.  Contributors can choose from among numerous rewards, including copies of the book.

What’s especially imaginative is how the Spanish translation project is engaging with publishers in Latin America to print, distribute, promote and sell the book in their various countries – Tinta Limón in Argentina, La Libre in Perú, SurSiendo in México.  Publishers will print a set number of copies for the initial crowdfund distribution, but will then be free to print and sell additional copies for their respective countries.

I am grateful to the Madrid team that has undertaken this project, and impressed by the creative structures and cooperation that they have devised.  It makes me wonder if the time is right to start a standing press on commons and movement concerns.  That would surely require more resources and reliable revenue streams, but it is certainly worth exploring.  The economics of conventional publishing is delivering less and less value to authors and readers even as book prices go higher.  Meanwhile, important new books never get published in the first place because they are deemed unmarketable.

We have a chapter in Patterns of Commoning that describes some of the more notable commons-based publishing innovations out there.  Besides open access scholarly publishing, there is Oya magazine in Germany, Shareable in the US, Pillku in Latin America, among others.  Maybe it’s time for a commons-based publishing summit.


Originally published in bollier.org

Lead image: “Peoples Library Occupy Wall Street 2011 Shankbone” by David Shankbone, David Shankbone – Flickr: Peoples Library Occupy Wall Street 2011 Shankbone. Licensed under CC BY 3.0 via Commons –

The post A New Frontier: Book Publishing as a Commons appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/a-new-frontier-book-publishing-as-a-commons/2015/12/29/feed 2 53224
Who owns knowledge? In Solidarity with Library Genesis and Sci-hub https://blog.p2pfoundation.net/in-solidarity-with-library-genesis-and-sci-hub/2015/12/03 https://blog.p2pfoundation.net/in-solidarity-with-library-genesis-and-sci-hub/2015/12/03#comments Thu, 03 Dec 2015 11:29:54 +0000 http://blog.p2pfoundation.net/?p=52958 Marguerite Mendell forwarded us this open letter defending Library Genesis and Sci-hub. Please read and share! “We demonstrate daily, and on a massive scale, that the system is broken. We share our writing secretly behind the backs of our publishers, circumvent paywalls to access articles and publications, digitize and upload books to libraries. This is... Continue reading

The post Who owns knowledge? In Solidarity with Library Genesis and Sci-hub appeared first on P2P Foundation.

]]>
The_Little_Prince_37914

Marguerite Mendell forwarded us this open letter defending Library Genesis and Sci-hub. Please read and share!


“We demonstrate daily, and on a massive scale, that the system is broken. We share our writing secretly behind the backs of our publishers, circumvent paywalls to access articles and publications, digitize and upload books to libraries. This is the other side of 37% profit margins: our knowledge commons grows in the fault lines of a broken system. We are all custodians of knowledge, custodians of the same infrastructures that we depend on for producing knowledge, custodians of our fertile but fragile commons.”

In Antoine de Saint Exupéry’s tale the Little Prince meets a businessman who accumulates stars with the sole purpose of being able to buy more stars. The Little Prince is perplexed. He owns only a flower, which he waters every day. Three volcanoes, which he cleans every week. “It is of some use to my volcanoes, and it is of some use to my flower, that I own them,” he says, “but you are of no use to the stars that you own”.

There are many businessmen who own knowledge today. Consider Elsevier, the largest scholarly publisher, whose 37% profit margin [1] stands in sharp contrast to the rising fees, expanding student loan debt and poverty-level wages for adjunct faculty. Elsevier owns some of the largest databases of academic material, which are licensed at prices so scandalously high that even Harvard, the richest university of the global north, has complained that it cannot afford them any longer. Robert Darnton, the past director of Harvard Library, says “We faculty do the research, write the papers, referee papers by other researchers, serve on editorial boards, all of it for free … and then we buy back the results of our labour at outrageous prices.” [2] For all the work supported by public money benefiting scholarly publishers, particularly the peer review that grounds their legitimacy, journal articles are priced such that they prohibit access to science to many academics – and all non-academics – across the world, and render it a token of privilege[3].

Elsevier has recently filed a copyright infringement suit in New York against Science Hub and Library Genesis claiming millions of dollars in damages.[4] This has come as a big blow, not just to the administrators of the websites but also to thousands of researchers around the world for whom these sites are the only viable source of academic materials. The social media, mailing lists and IRC channels have been filled with their distress messages, desperately seeking articles and publications.

Even as the New York District Court was delivering its injunction, news came of the entire editorial board of highly-esteemed journal Lingua handing in their collective resignation, citing as their reason the refusal by Elsevier to go open access and give up on the high fees it charges to authors and their academic institutions. As we write these lines, a petition is doing the rounds demanding that Taylor & Francis doesn’t shut down Ashgate [5], a formerly independent humanities publisher that it acquired earlier in 2015. It is threatened to go the way of other small publishers that are being rolled over by the growing monopoly and concentration in the publishing market. These are just some of the signs that the system is broken. It devalues us, authors, editors and readers alike. It parasites on our labor, it thwarts our service to the public, it denies us access [6].

We have the means and methods to make knowledge accessible to everyone, with no economic barrier to access and at a much lower cost to society. But closed access’s monopoly over academic publishing, its spectacular profits and its central role in the allocation of academic prestige trumps the public interest. Commercial publishers effectively impede open access, criminalize us, prosecute our heroes and heroines, and destroy our libraries, again and again. Before Science Hub and Library Genesis there was Library.nu or Gigapedia; before Gigapedia there was textz.org; before textz.org there was little; and before there was little there was nothing. That’s what they want: to reduce most of us back to nothing. And they have the full support of the courts and law to do exactly that. [7]

In Elsevier’s case against Sci-Hub and Library Genesis, the judge said: “simply making copyrighted content available for free via a foreign website, disserves the public interest”[8]. Alexandra Elbakyan’s original plea put the stakes much higher: “If Elsevier manages to shut down our projects or force them into the darknet, that will demonstrate an important idea: that the public does not have the right to knowledge.”

We demonstrate daily, and on a massive scale, that the system is broken. We share our writing secretly behind the backs of our publishers, circumvent paywalls to access articles and publications, digitize and upload books to libraries. This is the other side of 37% profit margins: our knowledge commons grows in the fault lines of a broken system. We are all custodians of knowledge, custodians of the same infrastructures that we depend on for producing knowledge, custodians of our fertile but fragile commons. To be a custodian is, de facto, to download, to share, to read, to write, to review, to edit, to digitize, to archive, to maintain libraries, to make them accessible. It is to be of use to, not to make property of, our knowledge commons.

More than seven years ago Aaron Swartz, who spared no risk in standing up for what we here urge you to stand up for too, wrote: “We need to take information, wherever it is stored, make our copies and share them with the world. We need to take stuff that’s out of copyright and add it to the archive. We need to buy secret databases and put them on the Web. We need to download scientific journals and upload them to file sharing networks. We need to fight for Guerilla Open Access. With enough of us, around the world, we’ll not just send a strong message opposing the privatization of knowledge — we’ll make it a thing of the past. Will you join us?” [9]

We find ourselves at a decisive moment. This is the time to recognize that the very existence of our massive knowledge commons is an act of collective civil disobedience. It is the time to emerge from hiding and put our names behind this act of resistance. You may feel isolated, but there are many of us. The anger, desperation and fear of losing our library infrastructures, voiced across the internet, tell us that. This is the time for us custodians, being dogs, humans or cyborgs, with our names, nicknames and pseudonyms, to raise our voices.

Share this letter – read it in public – leave it in the printer. Share your writing – digitize a book – upload your files. Don’t let our knowledge be crushed. Care for the libraries – care for the metadata – care for the backup. Water the flowers – clean the volcanoes.

[^1]: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127502, http://svpow.com/2012/01/13/the-obscene-profits-of-commercial-scholarly-publishers/

[^2]: http://www.theguardian.com/science/2012/apr/24/harvard-university-journal-publishers-prices

[^3]: http://www.aljazeera.com/indepth/opinion/2012/10/20121017558785551.html

[^4]: https://torrentfreak.com/sci-hub-tears-down-academias-illegal-copyright-paywalls-150627/

[^5]: https://www.change.org/p/save-ashgate-publishing

[^6]: http://thecostofknowledge.com/

[^7]: In fact, with the TPP and TTIP being rushed through the legislative process, no domain registrar, ISP provider, host or human rights organization will be able to prevent copyright industries and courts from criminalizing and shutting down websites “expeditiously”.

[^8]: https://torrentfreak.com/court-orders-shutdown-of-libgen-bookfi-and-sci-hub-151102/

[^9]: https://archive.org/stream/GuerillaOpenAccessManifesto/Goamjuly2008_djvu.txt

The post Who owns knowledge? In Solidarity with Library Genesis and Sci-hub appeared first on P2P Foundation.

The Commons and EU Knowledge Policies https://blog.p2pfoundation.net/the-commons-and-eu-knowledge-policies/2015/09/02 Wed, 02 Sep 2015 08:47:35 +0000


One of the great advantages of a commons analysis is its ability to deconstruct the prevailing myths of “intellectual property” as a wholly private “product” – and then to reconstruct it as knowledge and culture that lives and breathes only in a social context, among real people.  This opens up a new conversation about if and how property rights in knowledge should be granted in the first place.  It also renders any ownership claims about knowledge under copyrights and patents far more complicated — and requires a fair consideration of how commons might actually be more productive substitutes or complements to traditional intellectual property rights.

After all, it is taxpayers who subsidize much of the R&D that goes into most new drugs, which are then claimed as proprietary and sold at exorbitant prices.  Musicians don’t create their songs out of thin air, but in a cultural context that first allows them to freely use inherited music and words from the public domain — which future musicians must also have access to. Science can only advance by being able to build on the findings of earlier generations.  And so on.

The great virtue of a new report recently released by the Berlin-based Commons Network is its application of a commons lens to a wide range of European policies dealing with health, the environment, science, culture, and the Internet.  “The EU and the Commons:  A Commons Approach to European Knowledge Policy,” by Sophie Bloemen and David Hammerstein, takes on the EU’s rigid and highly traditional policy defense of intellectual property rights.  Bloemen and Hammerstein are Coordinators of the Berlin-based Commons Network, which published the report along with the Heinrich Böll Foundation.  (I played a role in its editing.)  The 39-page report can be downloaded here — and an Executive Summary can be read here.

“The EU and the Commons” describes how treating many types of knowledge as commons could not only promote greater access to knowledge and social justice, it could help European economies become more competitive. If EU policymakers could begin to recognize the generative capacities of knowledge commons, drug prices could be reduced and climate-friendly “green technologies” could be shared with other countries. “Net neutrality” could assure that startups with new ideas would not be stifled by giant companies, but could emerge. And scientific journals, instead of being locked behind paywalls and high subscription fees, could be made accessible to anyone.

Bloemen and Hammerstein write that:

many of the economic and legal structures that govern knowledge and its modes of production – not to mention cultural mindsets – are exclusionary. They presume certain modes of corporate organization, market structures, government investment policies, intellectual property rights and social welfare metrics that are increasingly obsolete and socially undesirable. The European Union therefore faces an urgent challenge: How to manage knowledge in a way that is socially and ecologically sustainable? How can it candidly acknowledge epochal shifts in technology, commerce and social practice by devising policies appropriate to the current age?

EU policies generally focus on the narrow benefits of IPR-based innovation for individual companies and rely on archaic social wellbeing models and outdated models of human motivation. The EU has failed to explore the considerable public benefits that could be had through robust, open ecosystems of network-based collaboration. For example, the EU has paid little serious attention to the enormous innovative capacities of free, libre and open source software (FLOSS), digital peer production resulting in, for example, Wikipedia, open design and manufacturing, social networking platforms, and countless other network-based modes of knowledge creation, design and production.

Here’s a useful chart that summarizes key principles of the commons, policy designs, and outcomes that could be pursued through a knowledge commons agenda.

The report concludes with an agenda that the EU (or any government) could adopt to promote knowledge commons.  It includes such ideas as non-exclusive licensing of research so that biomedical innovations could have greater impact and more benefit for taxpayers, and new support for knowledge commons through such things as patent pools, data sharing, the sharing of green technologies, and biomedical prizes that would make discoveries more widely available.  Multilateral trade treaties could be designed to promote investment in R&D and knowledge sharing among countries, producing enormous social benefits for people through expanding the global knowledge commons.  Net neutrality policies for the Internet could have similar catalytic benefits.

Will the EU stand in the way of the “collaborative economy” that is emerging, giving protectionist privileges to the big, politically connected digital corporations – or will it stand up for the great benefits that can be generated through open platforms, collaborative projects and knowledge sharing?  It’s great that this new report is stimulating this long-overdue debate.

For a broader overview of how the commons is going mainstream in Europe – most notably, via the new commons Intergroup in the European Parliament — here’s an insightful article by Dan Hancox that recently appeared in Al Jazeera English.

The post The Commons and EU Knowledge Policies appeared first on P2P Foundation.
