Artificial Intelligence – P2P Foundation
Researching, documenting and promoting peer to peer practices

A Q&A Session with Douglas Rushkoff (14 February 2019)

Douglas Rushkoff, author and host of Team Human, recently held a Q&A session on Quora. Here are his answers:

Will there be limits to what artificial intelligence will be able to “know” in the future?

I like Michael’s answer. It reminds me of Gödel’s theorem. I have a much more pedestrian view of these things. I still think of artificial intelligence more as map than territory. So it can’t know everything without being as big as everything.

But more casually, I think you’re asking what practically won’t AI be able to know. And I think AIs won’t know what it is like to be thinking. They’ll think or calculate, but they won’t know what it feels like to be thinking or calculating. I don’t think they’ll ever be aware. (Actually, that’s another way of saying what Michael just said.)

But that’s probably an almost religious conviction – the sort of view held by physicist Lee Smolin. I think consciousness preceded life. And I don’t think it inhabits stuff like chips in the same way it inhabits us.

I think humans are uniquely capable of embracing paradox. Of sustaining ambiguity. We even like it. We don’t need to resolve things. We can watch a David Lynch movie. AI won’t be able to know what that feels like.

How should a potential job seeker adapt oneself to deal with the rise of artificial intelligence?

Something feels a little sad to me about the idea that we should adapt ourselves to deal with AI. As if we’re optimizing humans for the digital future, instead of optimizing digital technology for the future of humanity. Screw that.

But, to your strategic point, I guess I’d suggest that we start doing what only human beings can do: empathy, compassion, nature, rapport, parenting, serving as an example. We can embody values.

We can also do what humans do, which is make nature and the world less cruel. We humans can instinctively tell right from wrong, cruel from kind. We know what pain is. We can see ourselves in someone else’s situation. We can identify.

What jobs do that?

Ultimately, I think we have to remember that jobs are not part of the human condition. Jobs are an invention of the late Middle Ages, when small businesses were declared illegal and replaced by chartered monopolies (proto-corporations). People were no longer allowed to be in an industry. They had to work for the king’s officially chartered friend. So instead of creating and selling value, we had to become employees of someone’s company, and sell our time. That’s when people started traveling to the cities for work, it’s when the plagues started, and it’s when the wonderful rise of the middle class was quashed by the aristocracy.

So I don’t know that it’s jobs we want, anyway. We want a meaningful way to create value for one another. If AIs can do everything, fine. But they are really nowhere close. Look at how much slavery and pollution are externalized by today’s industrial processes. If we were a little bit *more* labor intensive in our soil management, we might not run out of topsoil in the next 60 years. Permaculture takes human labor. So does education – unless you’re just training people for jobs, which is never what education was supposed to be about.

How can the digital economy reward people instead of extracting their value?

The fast answer: platform cooperatives. Give workers an ownership stake.

The digital economy can distribute wealth if people own the means of production. If the drivers owned Uber, they’d be in a position to profit off their labor. Right now, they are not only driving for peanuts, but training their autonomous replacements. Their every move is recorded by machine learning programs. So they’re doing R&D for a company that will fire them.

If they owned the platform, then at least they’d continue to profit off their investment of labor.

Now, I hear a lot of folks asking, what about the investors? The people who put up the billions of dollars of investment for Uber to happen? Honestly? It didn’t cost that much. The reason why Uber has to bilk its employees (sorry, “contractors”) and hurt cities is because they need to pay back investors who are expecting much greater returns than could be delivered were Uber doing normal, appropriate business. The app is not that expensive. The lobbying of cities for legal accommodations is, but they wouldn’t need to be paying for all that if they were good corporate citizens.

Cooperatives (read Nathan Schneider’s new book) distribute value to worker-owners, rather than extracting it from the top. And just as digital tech currently enables extraction on a scale unimaginable twenty years ago, digital tech could enable distributive enterprises on a scale unimaginable twenty years ago.

In Throwing Rocks at the Google Bus, I offer “digital distributism” as the answer to our economic woes. Basically, a retrieval of medieval, p2p business practices. An economy optimized not for growth but for flow: the velocity of money through the system, that is, how often each unit of currency changes hands.

How can we best engage with people who hold opposing viewpoints?

This is a tough one, but I think we have to see them as human beings, and try to understand the fear or other emotion that is informing their position. I have a section in the book where I try to explain the emotional logic of racism – particularly of white supremacists. Or the emotional logic of former coal miners wanting to open the mines again – no matter what it means to the climate.

They’re not crazy. Or, maybe they’re a bit crazy, but they are human and coming from a recognizably human place if we try on their world view for a moment. It’s scary to do. It’s scary to realize that the American white supremacist thinks his culture – white European culture – won America. He doesn’t understand why people should be taught about ‘loser’ cultures in school – that it will only make America weaker.

Or that people really don’t see other people as human. They look at pictures of Mexican immigrants and see something less than human. But then something unexpected happens – like when those tapes of refugee children crying came out – and their impassivity is broken.

So I think the way to break through is by going deeper. It’s not logic that will arrest their intransigence. It’s human rapport. Spend time with the person. Look in their eyes. I noticed at a family holiday that when my aunt was talking about how it’s okay to let the Syrians die, she’d always break off eye contact with me. It was as if she couldn’t be both human and inhuman at the same time. There’s a clue in there for how to reach the “other” side. Don’t let them be the other side. Our ability to engage the other is all we really have if we want to get through.

For others, sometimes the best thing is to ignore their viewpoints. Who cares what they think? It’s what they do that matters. If we want to make “red state” people more progressive, we shouldn’t try to get them to think more progressively. Just go there, and start up some initiatives for mutual aid. Get them working again, using favor banks, credit unions, and other projects that make their own lives better by working positively for the community. Someday, they may come to realize that they are engaged in progressive, almost socialist activities.

How can we organize resistance to capitalism, technology, and fascism?

I’ve got a whole section on that in the book, called Organize. I really should post highlights from it as an excerpt somewhere.

In short, find the others. I think it happens best locally. While resistance at scale matters, it’s really easy to fall into the same dehumanizing traps that the corporations fall into. It becomes mailing lists and ideological website discussions very quickly. Local activities – from land use and school policy to community currencies and business cooperatives – end up changing the way people think and act. Being involved directly with mutual aid or child literacy changes one’s perception of social programs and immigration.

Plus, when our activism is connected to the real world, we humans have the home field advantage. Corporations will always have the advantage in the “brand” space. Once we resort to branding for connecting to human beings, we surrender to their more propagandistic communications style and the values that go along with it.

So my main advice for organizers is to organize locally, and then network globally with other local organizers. Confront real, immediate issues. They trickle up, because the problems we’re dealing with locally are largely the results of top-down domination, laws written to protect corporations, or regulations that owe their legacies to segregation.

I’m also keen on organizing around activities, rather than ideologies. I don’t care what someone “believes in.” I care what they do. Maybe that’s some leftover Jewishness – a religion about behavior, not beliefs. But there’s some sense in this. People on the right and the left want the river clean. So let’s clean the river. I had a great talk with an in-law of mine, who is a Trump supporter but was really ticked off that the forests around his home had been clearcut. They were taken down by the landowner, because of a Virginia subsidy for renewable energy. The wood was considered renewable. Cutting down the forest was dumb and bad for the environment. My relative blamed it on the climate change enthusiasts. I’d probably blame it on corrupt or short-sighted regulation. But we agreed on the outcome, and if I lived there, we could have worked together on solutions.

Will rote jobs such as accountants and librarians be affected first by artificial intelligence?

It’s interesting, but I don’t think of accountants and librarians as having rote jobs. They may have to act like they have rote jobs. But what are accountants really for? To figure out ways to make your ledger look like it is normal and proper – but to actually find ways to hide your money from the tax man. What are librarians for? To protect the books and keep them on the shelves? No. They are there to help you figure out what you really need to know in order to accomplish a task. They are the way writers and researchers get a leg up. Make friends with a librarian, and you are almost cheating as a researcher.

So an AI accountant may not be there to create the wiggle room you need in your tax return. Especially if it’s not really yours. And an AI librarian, it may get you what you say you want, but it won’t get you what it suspects you need. It’s not on the journey with you. It’s not excited to see your eyes go wide when you see that new book for the first time.

We’ve got figure and ground reversed. None of these activities are about the pure utility value. Certainly the librarian. The librarian is there to celebrate knowledge, the dignity of learning, the passion of research. The librarian can use AI, but the AI can’t be the librarian.

So far, it’s not the rote jobs but the wealthy who are most affected by AI. They’re the ones using AIs to trade on the stock market. It’s the hedge fund guys who were first unseated by AIs, if you really want to think about it. People who are just looking for a way of gaming human systems, rather than contributing to or participating in them.

What do you think it will take for people to respect their personal privacy and human worth in the face of such seductive technology?

I often wonder that. Part of the problem is that while people are afraid to let someone else see them masturbate or eat or sleep, they don’t realize how much more dangerous it is to share seemingly meaningless meta-data. It’s not the embarrassing details of your kinks that you should be concerned about sharing. It’s the meaningless points of data that can be used by algorithms to put you in a particular statistical bucket, and then manipulate you.

Or, worse, deny your rights. In China, your social media contacts can determine your eligibility for a visa or a job. Actually, that’s becoming increasingly true in the US – whether you’re looking to be a babysitter, get a loan, or get out of prison.

As long as you’re in the majority, and don’t care about how these technologies are used against people, it may feel like none of this matters. But the minute you run afoul of the law, or lose your money to an illness, all of a sudden this data oppression starts to matter.
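To make the bucketing mechanism concrete: a handful of individually innocuous signals is enough for an off-the-shelf clustering algorithm to sort people into segments that can then be treated differently. The sketch below is purely illustrative, with invented feature names and synthetic data; it is not a description of any real platform’s pipeline.

```python
# Illustrative only: synthetic "metadata" for a few users is enough for a
# generic clustering algorithm to sort them into buckets. The features are
# invented; no real platform data or real pipeline is shown here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Columns: share of late-night activity, average scroll speed, reply latency (minutes)
metadata = np.vstack([
    rng.normal([0.7, 0.9, 2.0], 0.1, size=(50, 3)),   # one behavioral pattern
    rng.normal([0.2, 0.4, 45.0], 0.1, size=(50, 3)),  # a different one
])

buckets = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(metadata)

# Each bucket can now be shown different content, prices, or ads, even though
# no single data point looks sensitive on its own.
for b in range(2):
    print(f"bucket {b}: {np.sum(buckets == b)} users")
```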

Right now, people are acting as if they’re in a prison camp. They accept the trinkets of their virtual keepers, in return for their souls. It’s not that they don’t value their humanity, but they’re atomized victims experiencing something like Stockholm syndrome. It may take some real tragedies for people to recognize it’s gone too far.

As for human dignity, well, capitalism did a number on that before this digital technology came around to finish the job. Marx wrote a lot about this. Technology under capitalism led us to think of ourselves in terms of our utility value. Our productive output, rather than our essential dignity.

How can we rebuild the intermediate layers of collective intelligence and avoid a hollow top and bottom-heavy social collective?

That’s interesting. I guess you mean a “momma bear” sort of right-sized collective intelligence?

My guess would be that there are various Dunbar numbers for social organization. (Dunbar’s number is 150 – the number of stable relationships a person can maintain.) There are likely various levels of social organization that function if they’re formulated properly. So maybe we individuals organize into groups of 150, and then 150 of those groups (about 22,500 people) can network into something else.

In Team Human, I also make a strong case for cities. There’s also a piece I did about cities vs. nation states on Medium. I understand cities as the largest ‘organic’ organization of people. They form from the bottom up, but they can serve as a pretty robust and populous layer of collective intelligence. Or at least collective interest and organization.

Additionally, collective intelligence doesn’t just move through space, but through time. Works like the Torah, mathematics, and cathedrals are multi-generational projects. The collective intelligences may be small at each moment, but scale up over the centuries.

But yeah, I guess my answer is that collective intelligences are somewhat fractal, with little parts coordinating into a whole. More of a federated model, with distributed autonomy. I don’t know that individuals ever really experience themselves as part of a collective intelligence. At least not at this stage of human evolution. But I do know what it’s like to see oneself as part of a collective project.

The protocols for interaction really mean a lot. I look to organizations like Loomio, which coordinate group activity, for learnings about how to think with different size groups.

Will “platform economies” benefit cooperatist movements or be appropriated to corporatism?

Well, so far most of the big platforms are monopolies, not coops. Amazon, Facebook, Google…. And in those cases, only Google’s employees seem willing to push back on the company’s policies. By their very nature, digital platforms seem biased toward non-human entities like corporations over local, flesh-and-blood entities.

It’s hard to tell, though. In the early internet days, it felt just the opposite. These platforms were so intrinsically unfriendly to business. Most businesses eschewed the net: everyone wanted everything for free. People were sharing. It seemed to herald the end of corporatism. But of course, Barlow and other well-meaning libertarians pushed government and regulation off the net, and the large corporate players walked into the vacuum.

So the net we have now is populated and dominated by these super capital-intensive projects, and they’ve gathered enough users to become entrenched monopolies.

That said, anybody could build a Facebook or Uber today, with an almost trivial effort. None of this is as hard as it was back then. They couldn’t build an Amazon, though, because Amazon has a whole lot of real-world infrastructure at this point. It’s like WalMart and UPS (logistics) in one company. Plus the cloud services.

Platform economies tend to favor those who scale, and scale almost infinitely. But so do growth-based capitalism, an interest-bearing currency, and an investor-rewarding tax structure. So I do see ways that platform economics can favor cooperatives, but those choosing to use digital platforms should really make sure they need them to organize their collectives. Or they at least need to organize along the principles of distributism – more like an anarcho-syndicalist network than a corporation. More like Ace hardware or Associated Press.

Are we in the midst of a cultural renaissance? How is that different from a digital revolution?

Oh, a revolution is just a turn of the cycle. One set of rulers is replaced by another. Bankers get replaced by crypto hackers or something. Rockefeller replaced by Gates. Gates by Zuckerberg. The US government by Trump.

Re-naissance means rebirth. A renaissance is the rebirth of old values in a new context. So the original Renaissance brought forward the ideals of ancient Greece and Rome – citizenship, individuality, centrality of the government, Empire. Our renaissance, the digital renaissance, has retrieved what got repressed the last time out: peer-to-peer economics, women, holism. It’s part of why we’re seeing all this medievalism. That’s the moment before the renaissance.

I think a renaissance is more hopeful, because it offers us the opportunity to retrieve essential human values, and then embed them in the digital future. It’s not simply about one kind of company ‘disrupting’ another. The game doesn’t change in that case. A renaissance changes the whole playing field.

Is it necessary to have a content creation platform in 2019?

I’m not exactly sure what you’re asking. You mean, necessary for humanity or necessary for individuals?

I’m using Medium these days as my content creation platform, and I’m glad to have a “place” where people gather to read and write and share and comment and cross-reference. I like it better than I liked having my own website and then being on various people’s blog rolls. As long as Evan Williams stays on the writers’ side of the equation, it seems like a good thing. He’s even experimenting with a paywall where writers share what they’ve earned based on views and “claps.” A bit like a commons. And because it’s ad-free (and will hopefully stay that way!) it’s not subject to the same problems as Google Ads or Facebook.

When I first read the question, though, I thought maybe you were referring to WordPress and other content management systems through which to create writing and posts. I am still a fan of the “open web” and just serving html pages to people. But that’s probably a sign of age. The dynamic rendering you can do effortlessly on a CMS is pretty useful – so your content will work on a phone or web browser. You don’t have to test your content on every device and browser. Who really wants to do that if they’re just making videos or writing articles? We don’t each have to know how to do everything in the process.

But it does require we *trust* the people creating the layer on which we’re publishing. Sometimes I do, and sometimes I don’t.

Finally, if you mean, does *everybody* need to be on a content creation platform of some kind? No. Not everyone needs to be a content creator. It used to be very few of us that wrote books – partly because it involved typing and a lot of work. It’s a whole lot easier to “publish” right now. But that doesn’t mean everyone should. There are many many other ways to participate meaningfully in society.

That’s part of why I’ve started thinking of Team Human as my last book. I am interested to see how else I can play – and I want to get off the stage and let others get their work out there. Even the Team Human podcast is really about me using the platform I’ve developed to support the work of others.

What are the main absences in the “sociological imagination” of contemporary society?

I know people have a lot of definitions for “sociological imagination.” For me, it’s simply the way in which a person conceives of the relationship between individuals, each other, and society as a whole.

Right now, the issue I’m seeing – even from some intellectuals – is the inability to distinguish between human dignity and personal freedom. We’ve reached the zenith of individuality, and have come to imagine universal liberty as some expression of this individual liberty.

Even Constitutionally, I think it leaves out the right of assembly. That’s the First Amendment right that all these individuals have: to gather! Our current sociological imagination seems to miss that individuality and collectivism are not mutually exclusive. They’re more like yin and yang than either/or. The true expression of an individual occurs in a group. The true collective enhances the power or reach of all the individuals in it.

So I think the sociological imagination of today is confused about society itself. They’re still stuck in Maggie Thatcher’s rhetorical flourish, “there is no such thing as society,” which she didn’t even really mean that way. She was trying to adapt Hayek, and say that society is the reconciliation of a zillion bottom-up desires. More of a perfect, invisible-hand, emergent market phenomenon. I don’t quite buy that, but at least she’s admitting that there’s a coordinated whole.

At my town’s meetings, I’ve heard people get up and ask, “why do we have to pay school tax if we no longer have kids in school?” Stuff like that. That’s a lack of sociological imagination. That’s part of why I wrote Team Human: to help people remember that being human is a group activity.

What is right and what is wrong for the world today?

Right and wrong? Them’s fightin’ words!

I’m reluctant to moralize, but whatever brings us together is right, whatever separates us is wrong. Yes, we’re entitled to private email and bathroom stalls, so I don’t mean to get all extreme. There’s room for both.

But when I look at right and wrong for the world today, I see people retreating from true connection with others. They do this either through strident individualism, or total conformity. It’s easy to see how individualism is a problem. And how algorithms further atomize us. We are easier targets when we’re picked off from the herd. When we have no social relationships, and look to products or ‘non-player characters’ for a feeling of satisfaction.

Totally conformist mobs – like the crowds at a fascist rally or the silent workers in a giant factory – they’re not really together, either. They’re under one banner, but each one has their own relationship to the leader or to the company. They’re not a labor union or a guild. They’re not a team working together. They’re just as atomized, afraid to speak to one another or share their doubts. They may march in lockstep, but they’re not together.

So right now, as a result of consumerism, social media, political divisiveness, and mass social programming designed to alienate us, the main “good” we can do is to help people see other people as humans. Even if they’re on the other side of some fence. Getting us to see others as “other” is an old trick. It’s what the lords and kings used to do to get peasants to fight against one another. It’s what earned them fealty.

Whenever you hear a leader telling you that the other side are rapists or cannibals, remember that it’s not really true. That’s a wrong thing for the world, today. The right thing is saying, “look at those people over there. They are just like us. These national boundaries we protect are not recognized by nature or humanity. We’re all one family. Their suffering is our suffering.”

Will Elon Musk be comfortable staying on the moon?

He will die up there if he goes. You know how hard it is to maintain a biosphere in a dome? We couldn’t even do it on earth, with two Biosphere attempts. I think a sustainable closed-terrarium for humans on inhospitable planets is a long way off – even longer than Musk’s extended life span.


7 Lessons & 3 Big Questions for the Next 10 Years of Governance (14 January 2019)

Reposted from Medium

Milica Begovic, Joost Beunderman, Indy Johar: The intent of putting the Next Generation Governance (#NextGenGov) agenda at the centre of the Istanbul Innovation Days 2018 was to start to explore the future of the world’s governance challenges, and to debate how a new set of models are needed to address a growing ‘relevance gap’ in governance and peacebuilding.

Any exploration into the Next Generation of Governance requires us to recognize that in an increasingly multi-polar world, a world where power is used ever more directly and the singular rule of law we had ‘hoped’ for is being challenged, the future of governance is not only about the technocratic capacity to make rules (even if they are machine-readable rules), but also about our ability to construct new social legitimacy for all and by all.

In a world where we are facing urgent calamities and deep-running risks like never before, organizations like UNDP are witnessing a growing gap between the incremental progress in practice and a rapidly accelerating set of challenges — whether rampant inequality and its impact on social cohesion, growing ranks of forcibly displaced people, the fragmentation of state agency, rapid depletion of the commons, or the seemingly intractable rise of new forms of violence. This gap — between the emerging reality (strategic risks) and existing practice — is set to grow exponentially unless there is a major rethink of development practice and how we remake governance fit for the 21st century.

Earlier, we hypothesized that across the world, our governance models are broken: we are holding on to 19th century models that deny the complexity of the ‘systemocracy’ we live in: a world of massive interdependencies. #NextGenGov therefore is an exploration — the first of many — of the type of experiments that chart a way towards a future in sync with the Sustainable Development Goals. It aims to explore the lessons, challenges and gaps that emerge when governance is treated not as a single goal (SDG 16) but as the prerequisite for achieving the SDG agenda as a whole.


A very 21st century kind of failure

It could be argued governance is the central failure of the 21st century — sidelined as an inconvenient overhead, governance innovation has seen consistent underinvestment and a lack of attention. Our means of governance and regulation have become relics in an age of growing complexity. New capabilities and trends like rapid real-time data feedback loops, algorithmic decision making, new knowledge of the pathways of injustice and inequality, and the rise of new tools and domains of power are challenging established ways of decision making. Frequently hampered by simplistic notions about the levers of change, awed by networked power dynamics in the private sector and undermined by public sector austerity, many of us seem scared and disoriented in responding to the scale of failure and new needs we are witnessing. Worse still, we seem unable to make the case that ultimately, good governance should not be a means of state control but a means to unleash sustainably the full and fair capacity of all human beings.

In this context, the growing strategic risks of our age are making past governance protocols and processes increasingly incoherent and misaligned to the need of both member states and our broader global ecosystems; both real-world precedents and statistically derived probability are collapsing as viable decision-making tools. This incongruity is revealed at different scales and conditions:

1. The existing structures, governance and business models, skills, and institutional cultures are producing solutions that do not fit the new nature of the problems they are supposed to be addressing (the IPCC’s 1.5°C report and the genetically edited babies in China being the most recent proxies of the misalignment between current practice and emerging existential threats).

2. Business-as-usual as a method to address the entirely new scale and modality of problems is a recipe for decline and irrelevance (consider ongoing efforts to apply current regulatory paradigms to distributed technologies like blockchain).

3. Governments and investors too are experiencing the lack of coherence between existing solutions and emerging problems, and are therefore eager to restructure their relationship with UNDP and similar organisations.


Towards new Zones of Experiment

Searching for fresh perspectives, our approach was to hone in on a series of Zones of Experiment — a range of domains that could unlock some of the great transitions the world is facing. We looked at new ways to protect and restore the commons, to actualize the human rights of landless nations, and to prevent conflict and empower civic actors in revealing abuses. We also explored science fiction, arts and culture as seedbeds for imagining alternative economic systems, the role of new technologies in urban governance, and new practices in the way power is organized, manifested and influences decision making.

Across these zones of experiment, we are seeing how a new generation of edge practitioners is challenging the status quo, and how their experiments enable us to learn both context-specific and transferable lessons. Together, they point the way to the #NextGenGov agenda as a new approach to strategic innovation (and feedback coming in after IID2018 indicates the need to explore additional Zones of Experiment around emerging practices such as the governance of digital financial markets, and around systemic structural issues such as the decline of trust in particular sectors, including attitudes towards vaccination).

Underlying all these is the double-edged sword of rapid technological progress in a multi-polar world that is challenging established ethical certainties. Unexplainable AI is one such manifestation, where the advanced identification of correlation is argued to be sufficient to guide decision-making, be it judicial or even law enforcement, challenging — perhaps even regressing — us to a pre-scientific age and undermining the basic principles of governance — accountability and equality of treatment (as argued by Jacob Mchangama).

As Primavera de Filippi outlined in her keynote speech, new technological capabilities always carry the potential either to disrupt the status quo or, conversely, to reify existing structures of power and inequality. If we want to put the new tools of power in the hands of the many, not the few, we need to focus on the governance of the new infrastructures rather than rely on governance by those infrastructures. However, whether in blockchain applications (Primavera’s domain) or elsewhere, it is evident that often we simply don’t know yet what kind of detailed issues, unintended consequences or unexpected feedback loops we might face when applying new technological capabilities. This means experimentation can’t be seen as an add-on but should be at the core of exploring the future and rapidly learning about the implications of emerging trends.

This is not the place to summarize each zone of experiment discussed during the Istanbul Innovation Days. But we can outline a series of shared lessons and implications for the future of governance and peacebuilding.

1. Micro-massive Futures — A series of new micro-massive data, sensing, processing and influencing capabilities (as revealed in the work of Metasub, PulseLab Kampala and Decibel) is enabling state and non-state actors to transcend the tyranny of the statistically aggregated average, and instead focus on the micro, the unique and the predictive — early warnings on looming epidemics or weather-related crop failure, emerging signs of microbial antibiotic resistance, or the compound impacts of pollution on individuals, particularly in disadvantaged populations. The much more fine-grained understanding they enable (whether through big data, social media mining, or specific sampling and real-time blockchain-anchored measurement) creates radical new pathways to harboring and enhancing the public interest. Achieving decent average outcomes (of health, pollution, human development…) has more than ever become obsolete as a goal: the geographically, individually and temporally hyper specific data we can obtain, and the wicked nature of the issues at hand, require new ways of understanding ‘risk’ — and acting on it. This same micro-massive future on the other hand is also weaponizing the capacity to mine data in order to influence outcomes at the societal scale — opening up huge new questions about the meta governance of these new capacities in the first place. This implies a double set of responsibilities: if we can now govern and influence outcomes at the micro-level of the individual and molecular detail, and at the massive scale of societal bias, with at both scales growing capabilities to understand risk and predict possibilities — how do we govern in this new reality in order to use these powers for good? Or conversely how to ensure that the emerging capabilities of new governing realities are not resulting in human rights abuses, discrimination and violence?

2. From control to ennoblement — Where such distributed data generating and analysis capacity comes into its own is through new contracting agreements that change our capacity to manage shared assets. The multi-party contributory contracts, e.g. the blockchain-based agreements at the heart of the Regen Network, show us how the collective inertia around agricultural restoration could be overcome. Crucially, rather than disincentivizing ‘bad behaviours’ through control, such ennobling regulatory systems can now be imagined to incentivize, communicate and verify contributory systems (a toy sketch of such a contributory mechanism follows this list). Equally, this capability is paving the way for an entirely new class of governance mechanisms for the commons — including bestowing legal rights on rivers and the Amazon. How do we reimagine governance if such ennoblement and restoration were structurally our objective?

3. Making the invisible visible — New ways of building the politics of change are continuously emerging. Using mapping, animation, arts and other visualization tools, practitioners like Forensic Architecture and Invisible Dust — and in different ways, Open Knowledge Germany — are empowering citizens and civic groups to reveal issues which for a wide range of reasons tend to remain hidden — whether in the case of state agents committing human rights abuses or pernicious, slow-moving killers like air pollution (which in many cases of course equally implicates states in failing to uphold human rights). By involving distributed civic networks and creative professionals from right across traditional disciplines, and by connecting to the aspirations of populations in different ways, such emerging tactics act as a powerful complement to established tools to build the demand for change.

4. Hybrid participatory futures — Getting to a next level of citizen engagement in the transitions we face requires a next generation of platforms that enable engagement with complexity, new technology and alternative imaginings of the future. This is about new settings for deliberation, new ways of extending invitations to take part, and tapping into the creative resources of science fiction and the arts to reimagine the social contract and alternative economic systems. Medialab Prado in Madrid, the Edgeryders community, the deliberative citizens’ assemblies in Ireland or, at local scale, RanLab’s deliberative polls across Africa, show in different ways that new settings for participation can engender new cultures of participation cutting across ‘online’ and ‘offline’. Their deep investment in the tactics of convening people enables the creation of highly constructive new communities of concern around difficult topics, as well as building legitimacy for bold experimental approaches. This in turn enables the prevention of potential policy failure or the addressing of topics hitherto thought untouchable by established political players. In an era that frequently bemoans the decline of trust in the abstract, such cultural infrastructure rebuilds avenues towards greater trustworthiness across different parties, and an ability to imagine futures unconstrained by current divisions and biases, as the mitigation of risk. As differing platforms have differing biases in terms of who they attract and what behaviors they foster, such participation will always need to be hybrid — well-curated online platforms, temporal gatherings and permanent physical spaces all play a role in building the shared legitimacy for civic innovation. Enabling the participatory co-creation of the future is a fundamental component of the governance architecture in a complex world: we must complement the nudging of people’s behavior (a crucial tactic which has been applied with considerable success) with nurturing human imagination and facilitating deliberation and engagement with evidence.

5. Public goods & rights beyond the state — In an era where about 68 million people are currently forcibly displaced or stateless and this number is expected to rise significantly in the coming years, we are seeing state players that are unable, unwilling or simply absent when it comes to anchoring fundamental human rights like people’s individual and family identity, and people who are unable to access public goods provided outside the national boundary. The Rohingya project and IRYO show powerful alternatives and lessons for the remaking of public services like healthcare for both refugee populations and other contexts where access to such services is patchy. These positive alternatives are equally matched by more challenging examples of the quasi-privatization of justice — where large technology multinationals are already acting something like a judicial system — “one that is secretive, volatile, and often terrifying.” They also reveal the need and possibility to reimagine not just service provision but also new architectures of governance beyond the nation state — consider the incoherence of applying national laws to growing numbers of stateless people, or the para-state futures emerging around the world. The fundamental question arises whether the seemingly limitless rise in populations on the move and para-state governance could compel us to imagine and construct at a more structural level new domains of service provision potentially disintermediated from the state — and whether that might be more than just dire necessity but also an opportunity to achieve the Sustainable Development Goals.

6. From Evidence-based to Experimentation-driven Policy — In an age of increasing complexity, the danger of traditional evidence-based policy steering us by the rear-view mirror is evident. Instead, the zones of experiment — whether EcoLogic’s futuristic urban landscapes or the service design innovation shared by Pia Andrews from both New Zealand and New South Wales — show the possibility of a new arc of policy formation: experimentation is used to create new forms of situated intelligence and learning, consisting of both new evidence and new insights to underpin the ongoing and iterative development of policies and programmes. These pathways enable institutions to make sense of changes, (re)formulate intent and execution pathways, and thus co-evolve in an open and collaborative process. Fundamentally this is about recognizing that 21st century governance will be structurally different: the new institutional capacity can clearly not be designed in vitro but has to grow in-situ, informed by strategic portfolios of experimental options in order to grow the evidence necessary for policy intervention.

7. Sovereignty 2.0 — In an age where vital commons governance can now be advanced either by imbuing ecological entities — such as rivers in Colombia and elsewhere — with legal rights or by emerging new capabilities like smart contracts and machine learning — as indicated by Regen Network — could this mean the massive scaling of strategies that imbue new types of bodies with ‘sovereign’ powers and capabilities for, e.g., machine-based contracting and fining? If so, and heeding Primavera de Filippi’s warnings, the governance of these infrastructures will be a crucial field of innovation.
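As flagged in lesson 2 above, here is a toy sketch of a multi-party contributory mechanism: participants log contributions, an independent attestation is required, and rewards flow to verified contributors rather than penalties to bad actors. It is a purely hypothetical illustration in plain Python, not the Regen Network’s actual protocol; all names, rules and numbers are invented.

```python
# A toy, purely hypothetical "contributory contract": parties log verified
# contributions (say, hectares of restored soil) and rewards are computed only
# for attested entries. This is not any real protocol; everything is invented.
from dataclasses import dataclass, field

@dataclass
class ContributoryContract:
    reward_per_unit: float
    ledger: list = field(default_factory=list)

    def log_contribution(self, party: str, units: float, verifier: str = "") -> None:
        # In a real system the attestation would be a signed, blockchain-anchored
        # measurement rather than a plain string naming the verifier.
        self.ledger.append({"party": party, "units": units, "verifier": verifier})

    def payouts(self) -> dict:
        # Reward verified contributions instead of policing bad behaviour.
        totals: dict = {}
        for entry in self.ledger:
            if entry["verifier"]:  # only attested entries count
                totals[entry["party"]] = totals.get(entry["party"], 0.0) + entry["units"] * self.reward_per_unit
        return totals

contract = ContributoryContract(reward_per_unit=10.0)
contract.log_contribution("farm_a", 3.0, verifier="soil_lab_1")
contract.log_contribution("farm_b", 1.5, verifier="soil_lab_2")
contract.log_contribution("farm_c", 2.0)  # unverified, so it earns nothing yet
print(contract.payouts())  # {'farm_a': 30.0, 'farm_b': 15.0}
```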


Whilst even individually these are important new trajectories, when taken together these emerging lessons show how we need to challenge our existing practices at a deeper level. Given the degrees of uncertainty and emergence we face, this implies a call for strategic investment in a broad portfolio of experiments that can guide us to the future; fundamentally these are learning options that enable UNDP and its partners to seed and test new ways of governing across different domains. In parallel, #NextGenGov also pointed towards a further set of questions and challenges we face when staring into the future of Governance in a multi-polar tomorrow.

1. BEYOND THE SOCIAL CONTRACT.

In a world of sped-up complexity and change, the social contracts and legitimacy underlying our governance systems are constantly in question, not least because the relevance gaps affecting nearly all players (between needs and capabilities; between promise and delivery; between aspiration and capacity) mean that not just trust, but actual trustworthiness is in decline. Across the world, we are seeing broadly two cultural-societal paradigms that underlie potential future social contracts, both of which could be argued to be failing. Where individualism is the main tenet, we all too often fail to mainstream and anchor societal innovations that would reduce collective risk, whether vaccination rates or distributed flood prevention strategies. Where the collective is seen to take priority over the individual, the possible inability to accommodate divergence and diversity risks undermining the distributed creativity, energy and drive needed (and available!) to address wicked issues. The challenge we face is to move towards social contracts based on an explicit recognition of interdependence — reaffirming the need for the hybrid participation structures suggested above to provide the distributed fertile ground for this, as well as opening the space for discussions on system governance beyond human governance. In future Innovation Days and Next Gen Gov experiments we need to transcend natural rights and embrace a new sovereignty 2.0: sovereignty for rivers, trees and forests, opening the scope for dynamic interactions of such rights frameworks for a new social-ecological contract.

2. MORE THAN ONE DEMOCRACY?

Irrespective of scale or context, it is clear that no sole actor — whether state, civic sector, corporate or start-up — has the ability to tackle the wicked issues of our time alone. This means that discourses on good governance and democracy fundamentally have to be about the distributed power to co-create society. Clearly this is conditioned both by the openness of institutional infrastructures and by the socio-economic fundamentals that enable or hinder people’s agency. Recognizing democracy as creating the positive freedom of ‘being able to care’ (whether about individual life choices, the craftsmanship of work, and about wider social and planetary interdependencies) implies not just a concern about the trends that reduce such capabilities (such as declining economic growth, growing job insecurity or the disasters that uproot people’s lives) but also a focus on the multitude of avenues that enable such care to be expressed and acted upon. The challenge we face is that in this reality, seeing multiparty parliamentary systems as the sole mechanism for delivering democracy seems increasingly hollow: citizen assemblies and participative, high-frequency accountability & feedback systems are examples of vital complementary mechanisms for the enhancement and preservation of public and shared goods. The examples we have seen are evidence of how they can unlock positive, inclusive new avenues to the future at any scale from the local to the global — in ways that ‘politics’ as usual cannot.

3. BUREAUCRATIC REVOLUTION.

In the non-pejorative sense of the word, bureaucracy is at the core of governance. Innovation and experimentation in the realm of our everyday bureaucracy can change the nature and people’s experience of governance and everyday life itself — look no further than Mariana Mazzucato’s work on the role of bureaucracy to create new markets. Just like the 19th century centralized bureaucracies shaped the notion of the modern state, the present ‘boring revolution’ in our capabilities (e.g. around data insight, zero overhead cost of micro transactions and transparent multi-actor contributory contracts) can drive a radical reinvention of the notion of governance and power. This is what is at stake. The challenge we face is evident in the many salutary lessons that IID2018 provided, on how positive outcomes of this process should not be taken for granted. Instead they can only result from clear intent, human-centered design and an approach to strategic innovation that is up to the magnitude of the issues at hand.

Beyond IID2018…to be continued

The IID2018 was an effort to manifest the strategic relevance gap between our rapidly growing needs and risks, and our all-too-slowly developing practice — in this case that of increasingly inadequate global governance models and implications across a range of interdisciplinary policy spaces. If revealing strategic risks and their interrelated nature is about building the demand side for ambitious change — Invisible Dust’s credo of “making the invisible visible” clearly struck a chord — then what comes next has to be a strategic innovation response that goes beyond organizational tweaks or individual responses. After all, in a show-of-hands poll on the first day of IID2018, only 5 people thought the world is on track to achieve the Sustainable Development Goals — hardly surprising, given recent news on climate change or the accumulating impact of air pollution on health and learning. Addressing governance failures is at the heart of delivering the SDGs and it will require concerted belief, effort and strategic-scale investment.

By virtue of its cross-sectoral strategic development role, UNDP has a natural and unique responsibility to focus on addressing the strategic, entangled and systemic governance risks facing us at a national, transnational and global level — and in doing so it needs to act as an integrator at country and transnational levels, whilst recognizing and respecting the necessity of a multipolar yet machine-advanced, interoperable future — where the notion, means and conceptions of governance are fully reimagined and socially co-created for the 21st century. Practically, this means NextGenGov was just the beginning of investing in and building a strategic portfolio of experiments that enable partners to learn, manage risk, and effect system change, in order to rebuild the (technical, political, informational, financial) capability of states and civic actors for agile, iterative governance that is premised less on building solutions and more on dealing with our new certainty — uncertainty.

*Special thanks in developing a part of this blog (strategic relevance gap) go to Luca Gatti of Axilo.

Let’s train humans first…before we train machines (6 December 2018)

Reposted from Hazel Henderson’s blog

Hazel Henderson: Billions are spent by governments, corporations and investors in training computer-based algorithms (i.e. computer programs) in today’s mindless rush to create so-called “artificial” intelligence, widely advertised as AI. Meanwhile, training our children and their brains (already superior to computer algorithms) is under-funded; schools are dilapidated and sited in run-down, often polluted areas; and our teachers are poorly paid and in need of greater respect. How did our national priorities get so skewed?

In reality, there is nothing artificial about these algorithms or their intelligence, and the term “AI” is a mystification! The term that describes the reality is “Human-Trained Machine Learning”, in today’s mad scramble to train these algorithms to mimic human intelligence and brain functioning. In the techie magazine WIRED, October 2018, we meet a pioneering computer scientist, Fei-Fei Li, testifying at a Congressional hearing and underlining this truth. She said, “Humans train these algorithms”, and she talked about the horrendous mistakes these machines make in mis-identifying people, using the term “bias in—bias out”, updating the old computer saying “garbage in—garbage out”.

Professor Li described how we are ceding our authority to these algorithms to judge who gets hired, who goes to jail, who gets a loan, a mortgage or good insurance rates — and how these machines code our behavior, change our rules and our lives. She is now back at Stanford University after a time as an ethicist at Google and has started a foundation to promote the truth about AI, since she feels responsible for her role in inventing some of these algorithms herself. As a celebrated pioneer of this field, Professor Li says, “There’s nothing artificial about AI. It’s inspired by people, it’s created by people and, more importantly, it impacts people”.
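The “bias in—bias out” point can be made concrete in a few lines of code. The sketch below is purely illustrative, using synthetic data and an invented approval scenario rather than any deployed system: a model trained on biased historical decisions reproduces that bias for new, equally qualified applicants.

```python
# A minimal, hypothetical sketch of "bias in - bias out": if the historical
# labels used for training encode a bias against one group, a model trained
# on them reproduces that bias on new applicants. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

group = rng.integers(0, 2, n)      # 0 or 1: a protected attribute
skill = rng.normal(0.0, 1.0, n)    # the quality we actually care about

# Biased historical decisions: group 1 was approved far less often at equal skill.
approved = (skill + np.where(group == 1, -1.0, 0.0) + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, approved)

# Two equally skilled new applicants who differ only in group membership:
test = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(test)[:, 1])  # group 1 gets a markedly lower approval probability
```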

So how did Silicon Valley invade our culture and worldwide technology programs with its short-term, money-obsessed values: “move fast and break things”; disrupt the current systems while rushing to scale and cash out with an IPO? These values are discussed in shocking detail by two insiders: Antonio G. Martinez in “Chaos Monkeys” (2016) and Bloomberg’s Emily Chang in “Brotopia” (2018). These authors explain a lot about how training these algorithms went so wrong: the algorithms subconsciously mimic their mostly male, misogynist, often white entrepreneurs and techies, with their money-making monopolistic biases and often adolescent, libertarian fantasies.

I also explored all this in my article “The Future of Democracy Challenged in the Digital Age”, CADMUS, October 2018, describing all these issues of the takeover by AI of our economic sectors: from manufacturing, transport, education, retail, media, law, medicine, agriculture, to banking, insurance and finance. While many of these sectors have become more efficient and profitable for the shareholders, my conclusion in “The Idiocy of Things” critiqued the connecting of all appliances in so-called “smart homes” as quite hazardous and an invasion of privacy. I urged humans to take back control from the over-funded, over-invested, over-paid computer and information science sectors too often focused on corporate efficiency and cost-saving goals driven by the profit targets demanded by Wall Street.

I have called for an extension of the English law, settled in the year 1215: “habeas corpus” affirming that humans own their own bodies. This extension would cover ownership of our brains and all our information we generate in an updated “information habeas corpus”. Since May 2018, European law has ratified this with its General Data Protection Regulation (GDPR), which stipulates that individuals using social media platforms, or any other social system do indeed retain ownership of all their personal data.

So, laws are beginning to catch up with the inhuman uses of human beings, with our hard-earned skills being used to train algorithms that then replace us! The computer algorithm trainers then employ out-of-work people surviving in the gig economy on Mechanical Turk and TaskRabbit sites, in minimum-wage, hourly-paid data-entry tasks to train these algorithms!

Computer scientist Jaron Lanier, in his “Ten Arguments for Deleting Your Social Media Accounts Right Now” (2018), shows how social media are manipulating us with algorithms to engineer changes in our behavior, by engaging our attention with clickbait and content that arouses our emotions, fears and rage, playing on some of the divisions in our society to keep us on their sites. This helps drive ad sales and their gargantuan profits and rapid global growth. Time to rethink all this, beyond the dire alarms raised by Bill Gates, Elon Musk and the late Stephen Hawking that these algorithms we are teaching will soon take over and may harm or kill us as did HAL in the movie “2001”.

Why indeed are we spending all this money to train machines while short-changing our children, our teachers and schools? Training our children’s brains must take priority! Instead of training machines to hijack our attention and sell our personal data to marketers for profit — let’s steer funds into tripling efforts to train and pay our teachers, upgrade schools and curricula with courses on civic responsibility, justice, community values, freedoms under habeas corpus (women also own their own bodies!) and how ethics and trust are the basis of all markets and societies.

Why all the expensive efforts to enhance machine learning to teach algorithms to recognize human faces, guide killer drones, falsify video images and further modify our behavior and capture our eyeballs with clickbait, devising and spreading content that angers and outrages — further dividing us and disrupting democracies?

Let’s rein in the Big Brother ambitions of the new techno-oligopolists. As a wise NASA scientist, following Norbert Wiener’s The Human Use of Human Beings (1950), reminded us in 1965 about the value of humans: “Man (sic) is the lowest-cost, 150-pound, nonlinear, all-purpose computer system which can be mass-produced by unskilled labor” (quoted in Foreign Affairs, July–August 2015, p. 11). Time for common sense!

Hazel Henderson© 2018


Hazel Henderson, D.Sc.Hon., FRSA, is founder of Ethical Markets Media, LLC and producer of its TV series. She is a world-renowned futurist, evolutionary economist, worldwide syndicated columnist, consultant on sustainable development, and author of the Axiom and Nautilus award-winning book Ethical Markets: Growing the Green Economy (2006) and eight other books.

Her editorials appear in 27 languages and in 200 newspapers syndicated by Inter Press Service, and her book reviews appear on SeekingAlpha.com. Her articles have appeared in over 250 journals, including (in the USA) Harvard Business Review, the New York Times, the Christian Science Monitor and Challenge, as well as Mainichi (Japan), El Diario (Venezuela), World Economic Herald (China), Le Monde Diplomatique (France) and the Australian Financial Review.


Photo by Ferrari + caballos + fuerza = cerebro Humano 

The post Let’s train humans first…before we train machines appeared first on P2P Foundation.

Just another Cyber Monday: Amazing Amazon and the best deal ever https://blog.p2pfoundation.net/just-another-cyber-monday-amazing-amazon-and-the-best-deal-ever/2018/11/26 Mon, 26 Nov 2018 07:39:08 +0000

When you get something at 80% off on Amazon, who do you think wins — you or Amazon? If you think that’s a strange question, you ain’t seen nothing yet. Maybe it’s time we re:Invent some things.

But how can getting a huge discount possibly be bad? It’s not, if you actually need what you’re buying, and know what you’re buying into. Do you?

Do you know what you’re getting out of that Black Friday deal?

Have you carefully considered your needs and decided a 21″ Plasma TV for the bathroom is going to make your life better? Then by all means, do get it on Black Friday rather than any other day. Do your market research, compare prices and features, track your model of choice and wait for Black Friday to get it. And get it where you can get the best deal — quite likely, Amazon.

That may be a preposterous example, but there’s a reason for seemingly irrational compulsive buying behavior: shopping feels good. It releases dopamine in your brain, a chemical that triggers your reward centers. And if you buy things at a discount, the chemical kick is even stronger.

It’s just the way our brains are wired, tracing back to our hunter-gatherer history. You may not know or get it, but Amazon sure does. So let’s reframe that question: who would you say is more business-savvy — you or Amazon? At the risk of getting ahead of ourselves, we have to go with Amazon here. So why would Amazon give you this kind of deal then, and what do you really get out of it?

Amazing Amazon

You probably know the Amazon story already. What has enabled it to go from a fringe online bookstore in 1994 to one of the most important forces shaping the world in 2017 is a combination of foresight and execution, technology and business acumen.

Amazon has a demonstrated ability to see what the latest technology can do for its business and to integrate it faster and better than the competition. Online shopping was just the beginning: after a certain point, Amazon was not just pioneering the fusion of existing technology and business models, but also developing new ones.

Amazon went from selling physical goods online to making goods such as books digital and providing the medium on which to consume them, building an empire in the process. It also expanded the range of what is sold online and built a vast logistics network to support physical delivery. Today Amazon dominates retail to such an extent that its orders account for up to 15% of international shipping.

Amazon has a huge impact in the world, both digital and physical

All that is not even taking into account Amazon’s recent acquisition of Whole Foods, which, combined with its (once more) pioneering use of digital technology in the physical realm, could mean it will soon dominate not just what lands on your desk but also what lands on your table.

Amazon has also been a force for digital transformation. The cloud, machine learning and product recommendations, voice-activated conversational interfaces — these are just some of the most visible ways in which Amazon and its ilk have pushed technology forward.

Amazon really is amazing. There’s just one problem: the one thing on Amazon’s agenda is Amazon.

That’s not to say that everyone at Amazon is rotten of course — far from it. There are extremely smart people working for Amazon, and some of them are trying to promote commendable causes too. And all this technology makes things better, faster, cheaper for everyone, right?

Black Friday

Do you know where the term Black Friday comes from? It started out being used in a rather different way, by employers and workers. As Thanksgiving falls on a Thursday and is a holiday, the temptation to call in sick on Friday and have a four-day weekend was just too big. On the other hand, since stores are open on that day, people still go out and shop.

The combination of reduced manpower and increased demand is what made employers start calling this Black Friday, as black had a negative ring to it. Eventually marketing succeeded in making this an iconic day for shopping, so the connotation is no longer negative. Not unless you’re a worker, anyway, which brings us to an interesting point.

This Black Friday, Amazon workers across Europe were on strike. Furthermore, grass-roots initiatives are calling for demonstrations and boycotts against Amazon, and there is a Greenpeace campaign in progress to make and repair things rather than buying more. Before you get all upset about your order possibly arriving late, it’s worth examining the reasons behind this.

Amazon has been known to push its workers to their limits. This means minimum wage, harsh working conditions and doing everything in its power to keep them from unionizing. That includes offshoring and hiring workers from agencies as temps, even though they may in fact be covering permanent positions. In that respect, of course, Amazon is not that different from other employers.

Not what most people would think of when talking about Black Friday workers, but there are more connections than you think

You could even argue Amazon sort of has to do this. If others do it and it’s legal, how else will it be able to compete, and why would it not do it? After all, keeping costs down and pushing people to process as many packages as quickly as possible means you can get your order cheaply and on the next day, which is great. It’s great if you’re a consumer and it’s great if you’re Amazon.

So why care about some workers doing low-paid, low-skill jobs? Their jobs will soon be automated anyway, and rightly so. Amazon is already automating its warehouses, meaning things will be done faster and more smoothly. Less manual effort, fewer accidents, fewer people needed, and no strikes either. And soon even the Mechanical Turks will not be needed; these tasks are better done by machines.

But what will the people whose jobs are made redundant do?

Brilliant machines

Of course, it’s not the first time we’re seeing something like this. Before the industrial revolution, most of the population used to work in agriculture, and now only 2% does. There are all sorts of jobs nobody could possibly think of at the time which are now made possible by technology. Technology creates jobs, is the adage.

But who creates technology then? People do, workers do. You would then assume the benefits of technology should all come back to them in a virtuous circle of sorts. Unfortunately that’s not really the case. Even though productivity is rising, which should mean reduced working hours and increased income, this is not happening.

[There] is [a] growing gap between productivity and wages. And you can see this in the gap between productivity, a measure of the bounty of brilliant machines, and how it’s being distributed in terms of wages. If we had an inflation-adjusted, productivity-adjusted minimum wage today, it would be something like $25 [an hour]. We would not be arguing about $10.

Laura Tyson, former Chair of the US President’s Council of Economic Advisers

You may argue that there are the people making these “brilliant machines”, the people doing the low-end jobs, and consumers. We don’t need the low-end jobs, so let’s just retrain these people. Let us all become engineers and data scientists and AI experts; problem solved, and we can consume happily ever after, right?

Building machines that build machines

Not really. Despite what you may think, engineers and scientists are workers too. Their work may require intellectual rather than physical labor, but at the end of the day, one thing is common: what they produce does not belong to them. It matters not whether you are a cog in a machine or build the machine, as long as you don’t own it. So if we build machines that can do and build more for less, where does that surplus value go?

The best deal ever

If anything, this is the best deal the Amazons of the world, much like the Fords before them, have managed to sell. They have succeeded in riding and pushing the wave of consumerism to dissociate people from the nature of their work, to the point where they come to identify themselves as consumers rather than workers.

While it may not be true that Henry Ford started paying workers $5-a-day wages so they could afford his cars, it is true that Amazon pays its workers in part with Amazon vouchers. This takes an already brilliant scheme to new heights. Workers not only identify as consumers, often turning against other workers, but also keep feeding the machine they build.

So you have raw materials and infrastructure, labor that transforms them into goods and services, and their estimated value. Without labor, there is no value: extracting materials and creating infrastructure also takes labor. Yet the ones putting in the labor get a fraction of that value and zero decision-making power in the companies they work for.

But what about the entrepreneurial spirit of the creators of the Amazons of the world? Surely their hard work and foresight deserve to be rewarded? As technology and automation progress, menial jobs are becoming obsolete and workers are asked to work not just hard, but smart. To take initiatives, to be creative, bold and entrepreneurial. And workers do that, but in the end it does not make much of a positive difference in their lives.

If data is the new oil, what are the oil rigs?

And what about the brave new world of big data automation? Surely, in this new digital era of innovation there are so many opportunities. All it would take to bring down these monopolies would be disruptive competition, so if we just let the market play its part it will work out in the end — or will it?

If data is the new oil, then the oil rigs for the new data monopolies that are the Amazons of the world are their data-driven products. They have come to dominate and nearly monopolize the web and digital economy to such an extent that if this is not realized and acted upon soon, it may be too late.

Wake up or scramble up

But surely companies must understand this, right? They must care about their workers, they must have a plan to prevent social unrest, right? How does someone who automates the world’s top organizations answer that question?

“Time flies and technology waits for nobody. I have not met a single CEO, from Deutsche Bank to JP Morgan, who said to me: ‘ok, this will increase our productivity by a huge amount, but it’s going to have social impact — wait, let’s think about it’.

The most important thing right now, what our top minds should be starting to say, is how to move mankind to a higher ground. If people don’t wake up, they’ll have to scramble up — that’s my 2 cents”.

Chetan Dube, IPsoft CEO

Tyson, on the other hand, concludes that:

“We’re talking about machines displacing people, machines changing the ways in which people work. Who owns the machines? Who should own the machines? Perhaps what we need to think about is the way in which the workers who are working with the machines are part owners of the machines”.

So, what’s your take? How do you identify? Are you a consumer, or a worker?

Article originally published on Medium

The post Just another Cyber Monday: Amazing Amazon and the best deal ever appeared first on P2P Foundation.

Integral Technology in Blockchain, Cryptocurrency and Beyond – a concept note for discussion https://blog.p2pfoundation.net/integral-technology-in-blockchain-cryptocurrency-and-beyond-a-concept-note-for-discussion/2018/11/13 Tue, 13 Nov 2018 09:00:00 +0000

Jem Bendell and Matthew Slater: The billions of dollars of venture capital pouring into blockchain start-ups over the past year reflect how people with a serious financial interest in technology see significant potential in distributed ledger technology (DLT). Yet the actual use of these technologies for everyday applications is still rare. Some say that it is a passing fad. Others say that blockchains and cryptocurrencies like bitcoin are dangerous to our financial system, our security and the environment. How should we navigate this new sector: as innovators, advisors, regulators, or just as informed citizens?

In this concept note, prepared as background for our article for the World Economic Forum, we explain how approaches to blockchain and cryptocurrency need to be grounded in a clear appreciation of the relationship between technology and society. That clarity is important not just for discussions of blockchains and cryptocurrencies, but for all software technology as it becomes so powerful in our lives. We will therefore develop a lens, called “integral technology,” to assess the positive and negative aspects of any technology, and apply it to recent innovation in the field of distributed ledgers.

DeepMind’s AI interpretation of Escher’s famous hands

When we hear people comment on blockchain and cryptographic currency being good or bad, we are often hearing different assumptions about the relationship between technology and society. So first, let us review the various ways that people look at that. The Oxford English Dictionary defines technology as “the application of scientific knowledge for practical purposes…” That is different to how the word is typically used, to refer to the “artefacts” – or things – of technology, such as the arrow head, the mobile handset, the blockchain, or the nuclear missile. By describing both “application” and “practical purposes”, the dictionary suggests that technology is best understood as a system of intentions and outcomes. That system involves the artefacts themselves, together with the people, knowledge, contexts and transformations involved in creating them. These are what we identify as the five aspects of any technological system, and this is what we will mean when we refer to a technology in this concept note. The power of this systems perspective on technology is that it invites us to consider the wider context of politics, financing, iterative redesign processes, side effects and, finally, the values that shape technologies, which is what we will do now.

Is Technology Something to Love or Fear?

We humans attach a great deal of importance to technology because it seems able to meet many of our needs and desires. It brings aspects of our imagination into physical reality in ways that then reshape our lives and what we might imagine next. This utility makes technology easy to sell, but it also means less emphasis is given to the costs and consequences of those desires being met in those ways.

Given its centrality in civilisation, a range of perspectives on our relationship to technology have arisen. Some optimists believe any negative consequences are worth the benefit, and that the march of technology is synonymous with the march of human progress. This view is called “technological optimism”. Others believe that technology takes humans further from their natural state, isolating them from the world and causing numerous new problems which often require further technological solutions. These “technological pessimists” can point to a range of dangerous situations, such as nuclear waste, climate change and antibiotic resistance, and question the hubris of thinking that our technology lets us exert influence on nature without an eventual response of equivalent impact on ourselves. The German philosopher Martin Heidegger argued that modern technologies have a quality of seeking to dominate nature rather than work with it, in ways that stem from – and contribute to – the illusion that humans are separate agents acting on nature.

Some of these optimists and pessimists don’t think that we humans have much influence on what is happening. Such “technological determinism” is the view that technology can be understood as having a logic of its own and develops as an unfolding of consciousness in ways that we, our entrepreneurs or our politicians, will not, in principle, control. Current debates about the merits or risks of blockchains and cryptocurrencies often echo these perspectives. Some argue it will change, or even save, the world. Others argue that it will collapse the financial basis of our nation states. Still others argue that whatever our view, it IS the future – as if it cannot be stopped.

Counter-posed to these views on technology is the “technological neutralist” view, which suggests that technology is neither inherently good nor bad for humanity and therefore needs responsible management to maximise its intended benefits and minimise its unintended drawbacks. That view is the most widespread in the field of Science, Technology and Society (STS) studies. Sociologists have revealed as pure fiction the apolitical view of technology development as flowing from basic science to applied science, development and commercialization. Instead, a variety of relevant stakeholder groups compete to influence a new technology, and they determine how it becomes stabilised as an element of society.

Therefore, despite the pervasiveness of “great man” stories in our culture, technological innovation is not the result of heroes introducing new ‘technologies’ and releasing them into ‘society’, starting a series of (un)expected impacts. Rather, innovation is a complex process of “co-construction” in which technology and society, to the degree that they could even be conceived separately from one another, negotiate the role of new technological artefacts, alter technology through resistance, and construct social and technological concepts and practices.

We share this perspective on technology. It invites us to see how innovation is a social process that we can choose to engage in to achieve public goals. We are not, however, “technology neutralists”, for a few reasons. First, we do not believe that all technologies have the same level of negative or positive potential prior to their human control. That is because all kinds of different phenomena exist under the one banner “technology”. For instance, while nuclear fission constantly produces poisons which require millennia of custody, smart decision-making algorithms only impact the world insofar as their decisions are acted upon. Second, we do not assume humanity to be the autonomous agent in our relationship with technology. Rather, we are influenced by the technologies that shape the society we are born into. Canadian philosopher of technology, Professor Andrew Feenberg explains this situation as humans and technology existing in an entangled hierarchy. “Neither society nor technology can be understood in isolation from each other because neither has a stable identity or form” he explains.

For us, “technological constructivism” is the perspective that technology and society influence each other in complex ways that cannot be predicted and therefore require constant vigilance by representatives of all stakeholders who are directly and indirectly affected. The implication of this perspective for innovation in blockchain and cryptographic currencies is that the intentions of innovators and financiers are important to know and influence, and that wider stakeholder participation in shaping the direction and governance of the technology is essential. This is the approach on which we base our view of developments in software in general and blockchains in particular.

The Technological State of the World

Humanity faces many dilemmas today. Some of these are brought about by our technology, some are not, and we may hope many can be solved by a sensible use of technology in future. Climate change is the result of our rapid use of technologies to burn fossil fuels and tear up forests. Malnutrition is the result of a wide array of factors, which are difficult to blame on technology, though its persistence despite the “green revolution” would make technological optimism a questionable position today.

One field of technology which may be exceptional with regard to regulation, and the lack of it, is Artificial Intelligence (AI), which describes the ability of computers to perceive their environment and determine an appropriate course of action. Narrow forms of AI are already in use. They often confer a tremendous advantage on those who use them well, and their use by the victorious Trump campaign and the victorious Leave campaign (in the Brexit referendum) is raising huge questions about the justice of using people’s own data to manipulate their voting intentions. AI systems tend to be very complicated and sometimes produce unexpected results. But because they save labour, for example by automatically judging loan applications or driving vehicles, there is commercial pressure to simply accept the automated decisions to reduce costs. As AI is applied to more and more areas of trade, finance, the military and critical infrastructure, the risks and ethical questions proliferate.

More intense concerns have been expressed recently about more general forms of AI that include capabilities for software to be self-authoring. That does not mean consciousness, nor mimicking consciousness, but that over time the software could develop itself beyond our understanding or control. It could ‘escape’ from a laboratory setting, or from within specific applications, and disrupt the world through all our internet-connected systems. Astrophysicist Stephen Hawking said: “The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” Some even fear that a rogue AI might only be disabled by killing the whole internet. Combined with the resilience of blockchains, which cannot be switched off at any one place, this possibility is a step closer. This potential existential danger invites a new seriousness about software regulation. But our concern in this concept note is more with the way machines in the service of powerful organisations are already shaping certain aspects of our lives with little accountability, and with the fact that the field of AI is almost completely unregulated.

Introducing the Concept of Integral Technology

Given these problems, it is self-evident that humanity needs a better approach to technology. How might we frame that approach? Concepts of ethics, responsibility and sustainability have all been widely discussed in relation to technology. Given our systems view of technology, we find Integral Theory to provide a simple prompt for considering its implications for society. It invites us to question internal and external impacts of any system and its embeddedness in wider systems. We are going to propose that humanity needs to develop a more consciously integral approach to the development and implementation of technology. Key to this concept is that technologies need to be more internally and externally coherent. Internal coherence describes how their design does not undermine the intention for their creation. External coherence describes how their design does not undermine the social and political system that they depend upon and which holds technologies and their protagonists to account, as well as the wider environment upon which we all depend. As that social and political system would be undermined by increasing inequality, so the effects of technology on equality are important to its integral character.

To aid future discussion, here we outline six initial characteristics of such integral technologies.

1) Meaningful Purpose: The technology system is the result of people seeking to provide solutions to significant human needs and desires, rather than to exploit people for personal gain. A positive example is the development of technologies for cataract operations that can be offered affordably to the poor. A negative example is the development of financial algorithms to front-run stock market trading.
2) Stakeholder Accountability: A diversity of stakeholder opinions are solicited and used during technological development and implementation in an effort to avoid unexpected and negative externalities. A positive example is the cryptocurrency Faircoin for which everything is decided through an assembly; a negative example is bitcoin, in which computer mining stakeholders approve or veto new features based on their interests in maintaining power and profit.
3) Intended Safety: A technology does not cause harm when used in the intended ways, and those using it in unintended ways are made aware of known risks. A positive example is the indications and contra-indications on pharmaceutical labels; a negative example is when pesticides are marketed to be used just before the rice or grain harvest to increase the yield, when that increases the likelihood of toxic residues.
4) Optimal Availability: As much of the knowledge about the technology as safely possible is kept in the public domain, in order to reduce power differentials and maximise the benefits of the technology when other uses for the technology are found. A positive example is open source software which allows anyone with the right skills to deploy it for any purpose they choose; a negative example is the ingredients of cigarettes which are not published and make it harder for affected parties to build a case against the manufacturers.
5) Avoiding Externalities: The ways in which the artefacts of the technology affect the world around them are considered at an early stage and actively addressed. A positive example is the design of products to use a circular flow of materials from the Earth and back to the Earth. A negative example is how addiction to computer games may be contributing to obesity in the young while games companies continue to pursue the same addictive designs.
6) Managing Externalities: Subsystems for mitigating known negative externalities are developed at the same time as the technology and launched alongside it. A positive example is the system of regulations that mandate regular physical inspections of aircraft. A negative example is government migrating social service administration to the internet and not ensuring the poorest have the computer access, skills and support they need to use the new system.

Integral Blockchain and Post-Blockchain Technologies

In the past year Bitcoin has been criticised for the huge amounts of energy it consumes to secure the blockchain. At the time of writing, some compare that consumption to Switzerland’s. Such consumption is not a necessary feature of securing blockchains, but an initial design choice of the inventor, with a system called “proof of work” used to secure the ledger and issue new digital tokens. Other systems like Ethereum also use “proof of work” and are similarly reliant on the computer-mining companies for whether this climate-toxic code is replaced. Sadly, the “proof of work” systems of these leading technologies remain. Some proponents argue that they are not so environmentally bad, because servers are located in cold places near renewable energy sources where energy would otherwise be wasted, but these are somewhat defensive post-hoc excuses. Clearly the environmental appropriateness of their code was not one of the design parameters in the minds of the designers.
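To make the energy point concrete, here is a minimal, illustrative sketch of the brute-force search at the heart of proof-of-work; it is not the code of any real blockchain, and the block data and difficulty are invented for the example. Miners race to find a nonce whose hash meets an arbitrary difficulty target, and every failed attempt is electricity spent on nothing but winning that race.

```python
import hashlib

def mine(block_data, difficulty=4):
    """Search for a nonce whose SHA-256 digest starts with `difficulty` zero hex digits.

    The only way to succeed is to try nonce after nonce, so the energy spent
    grows with the difficulty, and the difficulty is raised as more mining
    power competes for the same reward.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example block")
print(nonce, digest)  # tens of thousands of attempts on average, even at this toy difficulty
```

Each extra zero digit in the target multiplies the expected number of attempts by sixteen, which is roughly how real networks keep block times constant as more hardware, and therefore more electricity, joins the competition.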

In the case of Ethereum, speculation in the price of Ether affects the price of the Gas that is used to pay for processing transactions. That means that as the price balloons, the system loses its attractiveness for supporting activities that are high-volume and low-cost. It also transfers funds from the many who would use the system to the few who speculate on digital token values or own the computer-miners.

We contend that systems which are not internally coherent will eventually experience a disintegration of their intended or espoused purpose. In addition, systems which are not externally coherent will eventually experience a disintegration of their public support and their environmental basis. The situation with Bitcoin is probably unsolvable, and its carbon footprint may lead to significant regulatory intervention in time. Ethereum has a wider set of aims, and so, despite the continual delays in moving substantially away from proof of work, it may still be able to address the barriers to progress presented by the short-term interests of those controlling the mining computers. However, there is no doubt that this form of governance-by-hash-power is currently an impediment to Ethereum becoming a more integral technology.

Given these difficulties, we would like to point out some lesser-known projects, which we regard as showing exemplary integral traits.

Providing the same smart contract functionality as Ethereum, the new Yetta blockchain is intended to be sustainable by design, with the low energy requirements of its codebase complemented by automated rewards for nodes that use renewable energy. It will also enable automated philanthropy to support the Sustainable Development Goals (SDGs).

Also dissatisfied with how both proof-of-work and proof-of-stake consensus algorithms reward those who already have the most, Faircoin developed a ‘proof-of-cooperation’ algorithm. More than that, there is an open assembly in which the price of the coin is determined every month. This is also an attempt to stabilise the price of the coin and to deter speculators and the erratic price movements which arise from their profiteering. They hold that a medium of exchange is not supposed to be a vent through which value can be extracted from the economy.

One post-blockchain project, Holochain, is currently raising capital in an Initial Coin Offering (ICO). Its communications team has made many criticisms of conventional blockchains. One is that they have massive data redundancy built in, which causes such a problem for scaling that the original intention of these projects is now being compromised by innovations such as the Lightning Network. Another is that since blockchain tokens are assets without liabilities, they cannot have a stable value and thus constitute a poor medium of exchange. Holo tokens are therefore issued as liabilities, which means they have a purpose and a more stable value for as long as the project lives.

“If someone tells you they’re building a “decentralized” system, and it runs a consensus algorithm configured to give the people with wealth or power more wealth and power, you may as well call bullshit and walk away. That is what nobody seems willing to see about blockchain.” – Art Brock

Another project, called LocalPay, which we both work on, seeks to build a payment system for existing solidarity economy networks. Its protagonists believe that payments infrastructure is too critical and too political to be put only in the hands of monopolists and rent-seekers. Instead, infrastructure which is held in common, equally available to all, is the basis of a fairer society. They, too, understand money as credit, with somebody always underwriting its value.

While none of these technologies is perfect, they are Integral Blockchains and post-Blockchains in that they seek to be internally and externally coherent. The internal coherence of a Distributed Ledger Technology (DLT) means that its code and business model do not undermine the intention behind its creation. Its external coherence means that its code and business model do not undermine the social and political system that it depends upon and which holds the technology and its protagonists to account, as well as the wider environmental system upon which we all depend. As that social and political system is undermined by increasing inequality, the effect of a DLT on equality is important to its integral character. The four projects we highlighted all seek to integrate these considerations into their codebase and business model, rather than bolt on social or environmental considerations at a later time.

The Need for Technosophy

Concerns about technology are growing. Warnings over unregulated nanotechnology and artificial intelligence are now widespread, as are warnings about the socially and politically damaging effects of social media. There is a wider problem with how technology is financed and implemented in a free market system that makes technology companies’ first duty the delivery of short-term profits to shareholders. This means many technologies are developed in a hurry, and much software is rushed to market before it is even finished. Many costs and negative impacts are hard to pin directly on the manufacturers, and thus sometimes nobody is accountable. The history of technology is one where resistance from society leads to stabilisation around the control of, and access to, technology. Recently we have had massive diffusion of new electronics such as the mobile phone and social media, while the systems for affected stakeholders to hold these technological systems to account do not yet exist in the ways they do in other sectors.

The law is supposed to provide for unanticipated victims of technology and thus incentivise providers to take precautions. This clearly isn’t working nearly well enough, perhaps because of the difficulty and expense of using the law, and perhaps because some consequences are very hard to prove to the satisfaction of a jury. You may recall the decades of failing to prosecute tobacco companies because the link between cigarettes and lung cancer could not be proven easily. If the law did more to favour the victims, technology companies would do more to research and mitigate these secondary effects.

We will not be surprised if legal action begins to be taken against platforms like Facebook on behalf of millions of claimants for a range of concerns. That might involve teenagers whose clinical depression has been correlated with social media usage, or relatives of those who went on to commit suicide. Companies like Facebook may point to their internal systems for addressing such risks, and whether those are sufficient may be debated in court sometime in the future. Such legal action may bankrupt some firms, or trigger changes. But to achieve a wider shift to more integral technologies there will need to be a shift in philosophy that the law alone will not be able to compel.

It is time for a new era of wisdom in the way we make and deploy our tools: a move from the knowledge of making things to the wisdom of making things, what we call an era of “technosophy”. In the field of digital technologies, this means the urgent development of new forms of deliberative governance that use both soft and hard forms of regulation. The forms this will take need to be developed, but there are many examples from other sectors where technical standards are agreed internationally and incorporated into national law. That would need to be done in ways that shape rather than stifle digital innovation, while also enabling stakeholders to alert regulators to risk-laden projects, such as those using AI.

One idea might be to introduce a requirement that before software technologies can be deployed by large organisations (over 200 employees or over 50 million USD turnover, with subsidiaries analysed as part of their parent companies), the software must be certified by an independent agency as not presenting a risk to the public. Such certifications could be based on new multi-stakeholder standards that would establish management systems for responsible software development. Any change to software code deployed by a large firm would need to be notified to the certifier of the underlying software before release, with a self-declared risk assessment based on guidance provided by the standards organisation. Systems would need to be established for determining whether particular software types and uses pose heightened risks and require more oversight. For this approach to work it would have to be worldwide, so as to prevent firms moving to jurisdictions that lack these regulations. Therefore, there is a rationale for an international treaty on software safety to be negotiated rapidly, with significant resources marshalled to help these regulations be appropriately implemented globally.

In developing this idea, we know that many protagonists of software innovation may be appalled. There is a strong anti-authoritarian mood amongst many computing enthusiasts. But it is time to realise that some technology optimists are becoming the new authoritarians, by enabling the diffusion of technologies that have wide effects on people worldwide without those people having any influence on the process other than through one role: that of consumer. The challenge today is not whether there should be more regulation of software development and deployment, but how this should be done to reduce the risks and promote the widest human benefit. We offer the concept of Integral Technology as one way of helping that debate (and not as a template for regulation).

Unfortunately, in the hype and the reality around Distributed Ledger Technologies (DLTs) we don’t see many ideas and initiatives thinking beyond the initial value proposition and the promised returns to investors. Some technologies like Bitcoin seem to us to have betrayed all the aims of the founder and early adopters, yet claims of internal and external incoherence are met with very questionable objections by their near-fanatical adherents. The various projects to promote social or environmental good appear marginal to the main thrust of this sector, and many add such concerns on top of existing code and governance structures that are not aligned with the project goals. On the other hand, incumbent banks and their regulators have often expressed dismissive or negative views of DLTs, which suggests they do not understand the problems with existing bank power and practice, or the potential of DLTs. In some countries, outright bans on DLTs or cryptocurrencies are not the result of wide stakeholder consultation on questions such as what systems of value exchange are for, and for whom.

Therefore, we believe a technosophical approach to blockchain and cryptographic currencies is currently absent and needs cultivation. This is why we urgently need more international multi-stakeholder processes to deliberate on standards for the future of software technologies in general. In the field of blockchain, one event that may help is the United Nations’ half-day, high-level discussion on blockchain taking place at the World Investment Forum in October. Whether wider political and environmental conditions will give humanity the time and space to come together to develop and implement an appropriate regulatory environment for the future of software is currently unknown, but it is worth attempting.


We provide a background to blockchain and cryptocurrency innovation in our free online course on Money and Society.

We also offer a Certificate in Sustainable Exchange, which involves a residential course in London (next April).

Our academic research on these topics includes a paper recently published on local currencies for promoting SME financing, a paper on thwarting a monopolisation of the complementary currency field and a paper on our theory of money, published by the United Nations.

Professor Bendell is the Chair of the Organising Committee of the Blockchains for Sustainable Development sessions at the World Investment Forum 2018 at the UN.

We produced this concept note on the IFLAS blog for rapid sharing. To reference this Concept Note:
Bendell, J. and M. Slater (2018) Integral Technology in Blockchain, Cryptocurrency and Beyond, Institute for Leadership and Sustainability, University of Cumbria.

The image used in this post is a reworking of Escher’s drawing that reflects the entanglement of author and authored. The image was reworked by the Google AI project DeepMind, in its “dream” state, to produce the image you see. DeepMind is learning to identify the contents of images. This technology will be used to save lives, to sell stuff and to kill with impunity. Reworking Escher’s hands in a rather bizarre fashion reflects our perspective of “technological constructivism” and our belief that the potential of AI to soon achieve (with human action and inaction) autonomous general superintelligence (amongst other dilemmas, particularly climate change) means that we need a “technosophical” approach that more wisely assesses and governs technology systems.

Send comments to drjbendell at gmail

Photo by Daniel Kulinski

The post Integral Technology in Blockchain, Cryptocurrency and Beyond – a concept note for discussion appeared first on P2P Foundation.

New Technologies Won’t Reduce Scarcity, but Here’s Something That Might https://blog.p2pfoundation.net/new-technologies-wont-reduce-scarcity-but-heres-something-that-might/2018/09/14 Fri, 14 Sep 2018 08:00:00 +0000

Vasilis Kostakis, Andreas Roos:  In a book titled Why Can’t We All Just Get Along?, MIT scientists Henry Lieberman and Christopher Fry discuss why we have wars, mass poverty, and other social ills. They argue that we cannot cooperate with each other to solve our major problems because our institutions and businesses are saturated with a competitive spirit. But Lieberman and Fry have some good news: modern technology can address the root of the problem. They believe that we compete when there is scarcity, and that recent technological advances, such as 3D printing and artificial intelligence, will end widespread scarcity. Thus, a post-scarcity world, premised on cooperation, would emerge.

But can we really end scarcity?

We believe that the post-scarcity vision of the future is problematic because it reflects an understanding of technology and the economy that could worsen the problems it seeks to address. This is the bad news. Here’s why:

New technologies come to consumers as finished products that can be exchanged for money. What consumers often don’t understand is that the monetary exchange hides the fact that many of these technologies exist at the expense of other humans and local environments elsewhere in the global economy. The intuitive belief that technology can manifest from money alone, anthropologists tell us, is a culturally rooted notion which obscures the reality that the scarcity experienced by some is linked to the abundance enjoyed by only a few.

Many people believe that issues of scarcity can be solved by using more efficient production methods. But this may overlook some of the unintended consequences of efficiency improvements. The Jevons Paradox, a key finding attributed to the 19th-century British economist William Stanley Jevons, illustrates how efficiency improvements can lead to an absolute increase in consumption due to lower prices per unit and a subsequent increase in demand. For example, the invention of more efficient train engines allowed for cheaper transportation that catalyzed the industrial revolution. However, this did not reduce the rate of fossil fuel use; rather, it increased it. When more efficient machines use less energy, they cost less to run, which often encourages us to use them more, resulting in a net increase in energy consumption.
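A small, illustrative calculation (added here, not taken from Jevons or the authors; the elasticity and the 30% efficiency gain are assumptions chosen only to show the mechanism) makes the rebound concrete:

```python
# Toy rebound-effect calculation: constant-elasticity demand, Q = Q0 * (P/P0)**elasticity.
# All numbers are illustrative assumptions, not empirical estimates.
elasticity = -1.5            # assumed: a 1% fall in price lifts demand by about 1.5%
efficiency_gain = 0.30       # assumed: the service now needs 30% less fuel per unit

price_old, demand_old = 1.0, 100.0            # baseline price per unit of service, units demanded
price_new = price_old * (1 - efficiency_gain) # efficiency lowers the effective price of the service
demand_new = demand_old * (price_new / price_old) ** elasticity

fuel_old = demand_old                          # 1 unit of fuel per unit of service before
fuel_new = demand_new * (1 - efficiency_gain)  # 0.7 units of fuel per unit of service after

print(f"demand: {demand_old:.0f} -> {demand_new:.0f} units of service")
print(f"fuel:   {fuel_old:.0f} -> {fuel_new:.0f} units")   # ~100 -> ~120: total fuel use rises
```

Whenever demand is elastic enough, the saving per unit is swamped by the extra units demanded, which is exactly the pattern described above.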

Past experience tells us that super-efficient technologies typically encourage increased throughput of raw materials and energy, rather than reducing it. Data on the global use of energy and raw materials indicate that efficiency gains have never produced an absolute reduction: both global energy use and global material use have increased threefold since the 1970s. Therefore, efficiency is better understood as a rearranging of resource expenditures, such that efficiency improvements at one end of the world economy increase resource expenditures at the other end.

The good news is that there are alternatives. The wide availability of networked computers has allowed new community-driven and open-source business models to emerge. For example, consider Wikipedia, a free and open encyclopedia that has displaced the Encyclopedia Britannica and Microsoft Encarta. Wikipedia is produced and maintained by a community of dispersed enthusiasts primarily driven by motives other than profit maximization. Furthermore, in the realm of software, see the case of GNU/Linux, on which the top 500 supercomputers and the majority of websites run, or the example of the Apache Web Server, the leading software in the web-server market. Wikipedia, Apache and GNU/Linux demonstrate how non-coercive cooperation around globally shared resources (i.e. a commons) can produce artifacts as innovative as, if not more innovative than, those produced by industrial capitalism.

In the same way, the emergence of networked micro-factories is giving rise to new open-source business models in the realm of design and manufacturing. Such spaces can be makerspaces, fab labs, or other co-working spaces, equipped with local manufacturing technologies such as 3D printers and CNC machines, or with traditional low-tech tools and crafts. Moreover, such spaces often offer collaborative environments where people can meet in person, socialize and co-create.

This is the context in which a new mode of production is emerging. This mode builds on the confluence of the digital commons of knowledge, software, and design with local manufacturing technologies. It can be codified as “design global, manufacture local” (DGML), following the logic that what is light (knowledge, design) becomes global, while what is heavy (machinery) is local, and ideally shared. DGML demonstrates how a technology project can leverage the digital commons to engage the global community in its development, celebrating new forms of cooperation. Unlike large-scale industrial manufacturing, the DGML model emphasizes applications that are small-scale, decentralized, resilient, and locally controlled. DGML could recognize the scarcities posed by finite resources and organize material activities accordingly. First, it minimizes the need to ship materials over long distances, because a considerable part of the manufacturing takes place locally. Local manufacturing also makes maintenance easier and encourages manufacturers to design products to last as long as possible. Last, DGML optimizes the sharing of knowledge and design, as there are no patent costs to pay.

There is already a rich tapestry of DGML initiatives in the global economy; they do not need a unified physical basis because their members are located all over the world. For example, consider L’Atelier Paysan (France) and Farmhack (U.S.), communities that collaboratively build open-source agricultural machines for small-scale farming; the Wikihouse project, which democratizes the construction of sustainable, resource-light dwellings; the OpenBionics project, which produces open-source, low-cost designs for robotic and bionic devices; or the RepRap community, which creates open-source designs for 3D printers that can be self-replicated. Around these digital commons, new business opportunities are flourishing, while people engage in collaborative production driven by diverse motives.

So, what does this mean for the future of tomorrow’s businesses, the future of the global economy, and the future of the natural world?

First, it is important to acknowledge that within a single human being the “homo economicus” (the self-interested being programmed to maximize profits) will continue to co-exist with the “homo socialis”, a more altruistic being who loves to communicate, work for pleasure, and share. Our institutions are biased by design: they endorse certain behaviours over others. In modern industrial capitalism, the foundation upon which our institutions have been established is that we are all homo economicus. Hence, for a “good” life, which is not always reflected in growth and other monetary indexes, we need to create institutions that also harness and empower the homo socialis.

Second, the hidden social and environmental costs of technologies will have to be recognized. The so-called “digital society” is admittedly based on a material- and energy-intensive infrastructure. This is important to recognize so as not to further jeopardize the lives of current and future generations by unwittingly encouraging serious environmental instability and associated social problems.

Finally, a new network of interconnected commons-based businesses will continue to emerge, in which sharing is used not to maximize profits but to create new forms of business that empower much more sharing, caring, and collaboration globally. As the global community becomes more aware of how its abundance depends on other human beings and the stability of environments, more and more people will see commons-based businesses as the way of the future.


Vasilis Kostakis is a Senior Researcher at Tallinn University of Technology, Estonia, and he is affiliated with the Berkman Klein Center at Harvard University.

Andreas Roos is a PhD student in the interdisciplinary field of Human Ecology at Lund University.

Originally published at HBR.org

Photo by longan drink

The post New Technologies Won’t Reduce Scarcity, but Here’s Something That Might appeared first on P2P Foundation.

Artifictional Intelligence: is the Singularity or the Surrender the real threat to humanity? https://blog.p2pfoundation.net/artifictional-intelligence-is-the-singularity-or-the-surrender-the-real-threat-to-humanity/2018/09/07 Fri, 07 Sep 2018 09:00:59 +0000

Artificial intelligence is one of those things: overhyped and yet mystical, the realm of experts and yet something everyone is inclined to have an opinion on. Harry Collins is no AI expert, and yet he seems to get it in a way we could only wish more experts did.

Collins is a sociologist. In his book “Artifictional Intelligence – Against Humanity’s Surrender to Computers”, out today from Polity, Collins does many interesting things. To begin with, he explains what qualifies him to have an opinion on AI.

Collins is a sociologist of science at the School of Social Sciences, Cardiff University, Wales, and a Fellow of the British Academy. Part of his expertise is dealing with human scientific expertise, and therefore with intelligence.

It sounds plausible that figuring out what constitutes human intelligence would be a good start toward figuring out artificial intelligence, and Collins does a great job of it.

The impossibility claims

The gist of Collins’ argument, and the reason he wrote the book, is to warn against what he sees as a real danger: trusting AI to the point of surrendering critical thinking, and entrusting AI with more than we really should. This is summarized by his two “impossibility claims”:

1. No computer will be fluent in natural language, pass a severe Turing test and have full human-like intelligence unless it is fully embedded in normal human society.

2. No computer will be fully embedded in normal human society as a result of incremental progress based on current techniques.

There is quite some work needed to back up those claims, of course, and this is what Collins does throughout the 10 chapters of his book. Before we embark on this kind of meta-journey of summarizing his approach, however, it might be good to start with some definitions.

The Turing test is a test designed to categorize “real” AI. At its core, it seems simple: a human tester is supposed to interact with an AI candidate in a conversational manner. If the human cannot distinguish the AI candidate from a human, then the AI has passed the Turing test and is said to display real human-like intelligence.

The Singularity is the hypothesis that the appearance of “real” artificial intelligence will lead to artificial superintelligence, bringing unforeseen consequences and unfathomable changes to human civilization. Views on the Singularity are typically polarized, seeing the evolution of AI as either ending human suffering and cares or ending humanity altogether.

This is actually a good starting point for Collins to ponder the anthropomorphizing of AI. Why, Collins asks, do we assume that AIs would want the same things that humans want, such as dominance and affluence, and thus pose a threat to humanity?

This is a far-reaching question. It serves as a starting point to ask more questions about humanity, such as why people are, or are seen as, individualistic, how do people learn, and what is the role of society in learning.

Social Science

Science, and learning, argues Collins, do not happen in a monotonous way, but rather in a modulated one. What this means is that rather than knowledge acquisition being a matter of uncovering and unlocking a set of predefined eternal truths, or rules, the way it progresses also depends on interpretation and social cues. It is, in other words, subject to co-production.

This applies, to begin with, to the directions knowledge acquisition will take. A society in which witches are part of the mainstream discourse, for example, will have very different priorities from one in which symptomatic medicine is the norm.

But it also applies to the way observations, and data, are interpreted. This is a fundamental aspect of science, according to Collins: the data are *always* out there. Our capacity for collecting them may vary with technical progress, but it is the ability to interpret them that really constitutes intelligence, and that ability has a social aspect.

Collins draws on social embedding as practised in sociology to support his view. When dealing with a hitherto unknown and incomprehensible social group, a scholar cannot understand its communication without being in some way embedded in it.

All knowledge is social, according to Collins. Image: biznology

Collins argues for the central position of language in intelligence, and ties it to social embedding. It would not be possible, he says, to understand a language through statistical analysis alone. Not only would that miss all the subtle cues of non-verbal communication, but, unlike games such as Go or chess that have been mastered by computers, language is open-ended and ever-evolving.

Collins also introduces the concept of interactional expertise, and substantiates it with his own long experience among a group of physicists working in the field of gravitational waves.

Even though he will never be an expert who produces knowledge in the field, Collins has been able to master the topics and the language of the group over time. This has not only earned him acceptance as a member of the community, but has also enabled him to pass a blind test.

A blind test is similar to a Turing test: a judge who is a practising member of the community tries to distinguish Collins, a non-practising member, from another practising member, based on their answers to domain-specific questions. The judge could not. Collins argues this would never have been possible had he not been embedded in the community, and this is the core of the support for his first impossibility claim.

Top-down or Bottom-up?

The second impossibility claim has to do with the way AI works. Collins dedicates one chapter to the currently prevalent technique in AI, Deep Learning. He explains in an approachable way how Deep Learning works: it boils down to pattern recognition based on a big enough and good enough body of precedents.

The fact that there are more data (digitized precedents) and more computing power (thanks to Moore’s Law) today is what has enabled this technique to work. It is not really new; it has been around for decades, but until now we did not have enough data and processing power to make it work reliably and quickly.
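To make that framing concrete, here is a minimal, purely illustrative sketch (not from the book) of learning as pattern recognition: a tiny network is fitted to a body of labelled precedents, and its only “knowledge” is the statistical pattern it has absorbed. The synthetic data, the network size and the training settings are all invented for illustration; real deep learning differs mainly in scale.

```python
# A toy illustration (not from Collins' book) of "learning as pattern
# recognition": given enough labelled precedents, a small network fits a
# decision boundary. All data and parameters here are synthetic/hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "precedents": 2-D points labelled 1 if they fall inside a circle.
X = rng.uniform(-1, 1, size=(2000, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float).reshape(-1, 1)

# One hidden layer of "pattern recognisers" (in Kurzweil's loose sense).
W1 = rng.normal(0, 1, (2, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(3000):
    # Forward pass: detect patterns in the inputs, combine them into a guess.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: nudge the weights to better match the precedents.
    grad_out = (p - y) / len(X)                # gradient of mean cross-entropy
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)  # backprop through tanh
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

accuracy = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy after fitting the precedents: {accuracy:.2f}")
```

Run on a larger or noisier body of precedents, the same loop simply needs more data and more compute, which is exactly the point Collins concedes to the technique’s proponents.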

In the spirit of investigating the principle rather than the technicalities behind this approach, Collins concedes some points to its proponents. First, he assumes technical capacity will not slow down and will soon reach the point of being able to use all human communication in transcribed form.

Second, he accepts the simplified model of the human brain used by Ray Kurzweil, one of AI’s more prominent proponents. According to this model, the human brain is composed of a large number of pattern-recognition elements. On this view, all intelligence boils down to advanced pattern recognition, or the bottom-up discovery of pre-existing patterns.

Top-down, or bottom-up? Image: Organizational Physics

Collins argues, however, that although pattern recognition is a necessary precondition for intelligence, it is not sufficient. Patterns alone do not amount to knowledge; meaning must be attached to them, and for this, language and social context are required. Language and social context are top-down constructs.

Collins therefore introduces an extended model of the human brain, in which additional inputs, coming from social context, are processed. This is related to another approach in AI, labeled symbolic AI. In this top-down approach, instead of relying exclusively on pattern recognition, the idea is to encode all available knowledge in a set of facts and rules.
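By way of contrast, here is a minimal, hypothetical sketch of the top-down, symbolic approach: knowledge is written down explicitly as facts and rules, and new conclusions are derived by applying the rules rather than by fitting patterns to precedents. The facts, predicate names and the single-premise engine below are invented for illustration only.

```python
# A toy forward-chaining rule engine, illustrating the "symbolic AI" idea of
# encoding knowledge as explicit facts and rules. All facts and rules here
# are invented for illustration.

facts = {
    ("physicist", "alice"),
    ("member_of", "alice", "gw_group"),
    ("talks_shop_with", "collins", "gw_group"),
}

# Each rule: (premises, conclusion). Variables start with "?".
rules = [
    # Anyone who regularly talks shop with a group acquires its language.
    ([("talks_shop_with", "?x", "?g")], ("speaks_language_of", "?x", "?g")),
    # Anyone who speaks the group's language can pass a blind test about it.
    ([("speaks_language_of", "?x", "?g")], ("can_pass_blind_test", "?x", "?g")),
]

def match(pattern, fact, bindings):
    """Try to unify one premise pattern with one fact; return new bindings or None."""
    if len(pattern) != len(fact):
        return None
    bindings = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if bindings.get(p, f) != f:
                return None
            bindings[p] = f
        elif p != f:
            return None
    return bindings

def substitute(pattern, bindings):
    return tuple(bindings.get(p, p) for p in pattern)

# Forward chaining: keep applying rules until no new facts are produced.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        premise = premises[0]  # this toy engine handles single-premise rules only
        for fact in list(facts):
            bindings = match(premise, fact, {})
            if bindings is not None:
                new_fact = substitute(conclusion, bindings)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True

print(("can_pass_blind_test", "collins", "gw_group") in facts)  # True
```

The contrast with the previous sketch is the point: here nothing is learned from precedents; everything the system “knows” has to be written down in advance, which is why encoding tacit knowledge is the sticking point Collins identifies.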

Collins admits that his second impossibility claim is weaker than the first one. The reason is that technical capacity may reach a point that enables us to encode all available knowledge, even tacit knowledge, a task that seems out of reach today. But then again, many things that are commonplace today seemed out of reach yesterday.

In fact, the combination of bottom-up and top-down approaches to intelligence that Collins stands behind is one that many AI experts stand for as well: on this view, the most promising path to AI is not Deep Learning alone, but a combination of Deep Learning and symbolic AI. To his credit, Collins is open-minded about this, has had very interesting conversations with leading experts in the field, and has incorporated them in the book.

Technical understanding and Ideology

There are many more interesting details than could possibly fit in a book review: Collins’ definition of six levels of AI, the fractal model of knowledge, an exploration of what an effective Turing test would be, and more.

The book is a tour de force of epistemology for the masses: easy to follow, and yet precise and well-informed. Collins tiptoes his way around philosophy and science, from Plato to Wittgenstein to the AI pioneers, in a coherent way.

He also touches on issues such as the roots of capitalism and what drives human behavior, although he seems to have made a conscious choice not to go into them, possibly to avoid derailing the conversation or alienating readers. In any case, his book will not only make AI approachable, but will also make you think about a variety of topics.

And, in the end, it achieves what it set out to do: it gives a vivid warning against the Surrender, a matter partly of technical understanding, but perhaps even more of ideology.

Collins, Harry M. (2018). Artifictional Intelligence: Against Humanity’s Surrender to Computers. Cambridge, UK; Malden, Massachusetts: Polity. ISBN 9781509504121.

The post Artifictional Intelligence: is the Singularity or the Surrender the real threat to humanity? appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/artifictional-intelligence-is-the-singularity-or-the-surrender-the-real-threat-to-humanity/2018/09/07/feed 0 72597
Democratising AgTech? Agriculture and the Digital Commons | Part 2 https://blog.p2pfoundation.net/democratising-agtech-agriculture-and-the-digital-commons-part-2/2018/06/01 https://blog.p2pfoundation.net/democratising-agtech-agriculture-and-the-digital-commons-part-2/2018/06/01#respond Fri, 01 Jun 2018 08:00:00 +0000 https://blog.p2pfoundation.net/?p=71115 Agriculture 3.0 describes the increasing implementation and promotion of digital technologies in agricultural production. Promising more efficient farming, higher yields and environmental sustainability, AgTech has entered the mainstream, pushed by the EU, international corporations and national governments across the world. Increasingly, serious questions are raised about the impact of such market-oriented technologies on the agricultural... Continue reading

The post Democratising AgTech? Agriculture and the Digital Commons | Part 2 appeared first on P2P Foundation.

]]>
Agriculture 3.0 describes the increasing implementation and promotion of digital technologies in agricultural production. Promising more efficient farming, higher yields and environmental sustainability, AgTech has entered the mainstream, pushed by the EU, international corporations and national governments across the world. Increasingly, serious questions are raised about the impact of such market-oriented technologies on the agricultural sector. Who has access to these technologies? Who controls the data? In this second of a two-part piece, Gabriel Ash investigates the potential of Free/Open Source Software (FOSS) to make agricultural digitisation more accessible.

Can FOSS stem the tide towards the commodification of agricultural knowledge?

Gabriel Ash: Acting against the grain of current economic and political structures and offering both valuable access and inspiring ideas about collaboration, the sharing of ‘the commons,’ and the future of work, these FOSS-modelled schemes are unlikely to be the last of their kind. But if they are to realize their full potential, it is essential that both the lessons of the history of FOSS, and differences in context between IT and agriculture, as well as the impact of the quarter century that separates the two moments in time, become subjects of reflection.

The reality of FOSS is significantly more complicated than the simple distinction between open and proprietary. In many products—the Android phone, for example—‘open’ and ‘closed’ elements co-exist, and tiered commercial projects with an Open Source base and proprietary additions are common. Furthermore, ‘open’ itself is a continuum, with various licensing schemes offering a range of different degrees of control. If FOSS models become widespread, forms of accommodation between open and proprietary technologies are likely to emerge in agriculture as well, which could further advance the interests of agribusiness at the expense of farmers. It matters therefore how and to what ends FOSS schemes engage and mobilize users and producers.

Blueprints for agricultural technology and machinery can be found on websites like FarmHack or Atelier Paysan (CC0)

The history of the evolution of agricultural knowledge is also more complicated than a simple binary between proprietary and public. The Green Revolution replaced the informal, tacit knowledge of farmers with formal, scientific knowledge that was nevertheless organized as public knowledge, primarily through institutions of research and higher learning. This phase of development elicited resistance and criticism, both for the damage to farmers and ecosystems, primarily in the Third World, and for the denigration of centuries of accumulated local knowledge. This conflict was instrumental in the emergence of agroecology as a discipline[1] as well as in a range of efforts to foster better interactions between scientists and farmers.[2]

A second process that began shifting funding, control, and eventually the ownership of knowledge from the public to the private sector occurred later. In contrast to agriculture, software development never had the equivalent of farmers, and FOSS emerged purely out of resistance to the second process. This difference implies that FOSS-inspired schemes in agriculture could be more complex and resilient, and potentially more effective alternatives. But it also opens more room for misaligned interests and internal conflicts.

The ideas of unfettered collaboration and democratic creativity that FOSS schemes invoke are not external to the development of the privatized knowledge economy and its attendant intensification of intellectual property rights. Workforce creativity, technological innovation, intellectual property rights, and economic growth are widely perceived today by policy makers as linked.[3] By advancing ideas of knowledge as common and knowledge production as free, FOSS-inspired schemes expose some of the internal contradictions of a model of economic growth premised on profiting from immaterial labour and the control and selling of knowledge. But they will not buck the trend towards privatized hi-tech agriculture alone.

Agriculture, however, may offer unique opportunities for linking FOSS-inspired schemes with other forms of engagement and mobilization on issues such as environmentalism and farmers’ and peasants’ rights, and the different ways each of the latter raises the question of the commons. Let these projects be the early shoots of a wide wave of reflection, experimentation, and mobilization around these questions.


Read part 1 of this series here.

[1] Gliessman S.R. (2015) Agroecology: the ecology of sustainable food systems, 3rd Ed., CRC Press, Taylor & Francis, New York, USA, p. 28.

[2] World Bank (2006) Global – International Assessment of Agricultural Science and Technology for Development (IAASTD) Project. Washington, DC: World Bank http://documents.worldbank.org/curated/en/753791468314375364/Global-International-Assessment-of-Agricultural-Science-and-Technology-for-Development-IAASTD-Project , pp. 65-68.

[3] See Berry (2008), pp. 42-43.

The post Democratising AgTech? Agriculture and the Digital Commons | Part 2 appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/democratising-agtech-agriculture-and-the-digital-commons-part-2/2018/06/01/feed 0 71115
Democratising AgTech? Agriculture and the Digital Commons | Part 1 https://blog.p2pfoundation.net/democratising-agtech-agriculture-and-the-digital-commons-part-1/2018/05/25 https://blog.p2pfoundation.net/democratising-agtech-agriculture-and-the-digital-commons-part-1/2018/05/25#respond Fri, 25 May 2018 07:00:00 +0000 https://blog.p2pfoundation.net/?p=71107  Agriculture 3.0 describes the increasing implementation and promotion of digital technologies in agricultural production. Promising more efficient farming, higher yields and environmental sustainability, AgTech has entered the mainstream, pushed by the EU, international corporations and national governments across the world. Increasingly, serious questions are raised about the impact of such market-oriented technologies on the agricultural... Continue reading

The post Democratising AgTech? Agriculture and the Digital Commons | Part 1 appeared first on P2P Foundation.

]]>
Agriculture 3.0 describes the increasing implementation and promotion of digital technologies in agricultural production. Promising more efficient farming, higher yields and environmental sustainability, AgTech has entered the mainstream, pushed by the EU, international corporations and national governments across the world. Increasingly, serious questions are raised about the impact of such market-oriented technologies on the agricultural sector. Who has access to these technologies? Who controls the data? In this two-part piece, Gabriel Ash investigates the potential of Free/Open Source Software to make agricultural digitisation more accessible.

Gabriel Ash: Recently, a number of initiatives defending free access to agricultural knowledge have emerged. FarmHack, Atelier Paysan, The Open Seeds Initiative, and Open Source Seeds advance alternatives to the proprietary knowledge model of industrial farming based on ideas drawn from Free/Open Source Software. These initiatives respond to current trends in agricultural development and raise questions about its direction; they express an emergent concern for the commons against the drive to privatize knowledge. But why now? What is Free/Open Source Software (FOSS)? How is the FOSS model applied to agriculture? Finally, what are the opportunities and pitfalls such schemes present?[1]

Why now?

Artificial Intelligence, Big Data, blockchain, cryptocurrencies—these are today’s ‘hot’ investment trends. The hi-tech ventures that seek to deploy these technologies receive the bulk of new investment in start-ups as well as media attention. The dominance of Information Technologies affects agriculture in two ways: First, an investment gold rush is building up in ‘Agritech,’ around buzzwords such as ‘smart farming’ or ‘precision agriculture,’ and a crop of companies that seek to make agriculture more efficient and profitable with information technologies such as drone and satellite imagery analysis, cloud based data collection, digital exchanges, etc. One gets a sense of the magnitude of the forces unleashed from browsing the offerings of start-up accelerators such as EIT.  Second, businesses, regulators, politicians, NGOs, and the media adopt vocabulary, goals, expectations, and ‘common sense’ derived from Information Technology, which are then applied to agriculture.[2]

The dominance of Information Technology and its tendency to shape other industries as well as law and regulation is not simply the outcome of “market forces.” Both the US and the EU have long promoted the dissemination of Information and Communication Technology (ICT) and the adoption of new intellectual property rights to support it. Thus, “the 2005 Spring European Council called knowledge and innovation the engines of sustainable growth…it is essential to build a fully inclusive information society, based on the widespread use of information and communication technologies (ICT) in public services, SMEs and households.” According to António Guterres, United Nations Secretary-General, “we want to ensure that big data will bring the big impact that so many people need.” It is taken for granted by policy makers that innovation and growth depend on commodified, proprietary knowledge, which in turn require reforming and unifying intellectual property rights.[3]

With the growing visibility of ICT, the policy drive for hi-tech innovation, and the push to commodify and privatise knowledge, alternative practices that first emerged within ICT—notably Free/Open Source Software—have also migrated into the mainstream, inspiring projects such as Creative Commons and Free Culture. They are also gaining a presence in agriculture.

What is Free/Open Source Software (FOSS)?

FOSS emerged in the 1980s among computer scientists and engineers who resented the way commercial constraints interfered with the norms of unfettered collaboration and exchange of information that prevail in science. In 1983, Richard Stallman launched the GNU project to build a suite of free software tools, and in 1985 he created the Free Software Foundation (FSF) to support it. Breaking with the habits of commercial development, the software was written by volunteers in open collaboration over the internet, and it gave users full access to the source code as well as the right to freely share, tinker with and modify the program.

The FSF introduced a new relation between software producers and users: the General Public License (GPL), which effectively “hacks” copyright law to create the very opposite of a property right, a resource that obliges its users to place the fruits of their own labour in a shared common domain. By mandating that all derivative works must be distributed under the same license, this property of the GPL, called ‘copyleft’, prevents the appropriation and integration of free software into a proprietary product and guarantees that the code will remain free and open to users.
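In day-to-day practice, this legal “hack” takes the mundane form of a licence notice carried by every source file, declaring that the file and anything derived from it stay under the same terms. A hypothetical sketch of such a header on a trivial Python file might look like this (the project, file and author names are placeholders):

```python
# SPDX-License-Identifier: GPL-3.0-or-later
#
# example_tool.py - part of a hypothetical FOSS project
# Copyright (C) 2018  Example Author
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Because the GPL is a copyleft licence, any distributed derivative of
# this file must carry these same terms: the code stays in the commons.

def greet() -> str:
    """A trivial function; the point of this sketch is the header above."""
    return "free as in freedom"

if __name__ == "__main__":
    print(greet())
```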

Although inspired initially by ideals of openness and freedom, FOSS did not evolve as a radical challenge to proprietary software. Companies large and small soon began investing significant sums in open source development, creating new business models around it. In 1998, the shift toward a more business-friendly model was formalized with the establishment of the Open Source Initiative. Today the trend for new projects is towards licenses that eschew copyleft.

There is a perception that FOSS is US-centric. This is true insofar as the powerful US tech industry has shaped its major trends, but with important qualifications. Not only are there numerous European organizations promoting FOSS, but European countries, especially France and Germany, provide a surprisingly large number of participants. Furthermore, a number of Third World countries and public institutions have embraced it for political reasons.

FOSS is undoubtedly a success story. Its products, including heavyweights such as the operating system Linux and the ubiquitous PHP, MySQL, and Apache, power much of the web, and major ICT companies rely on it. It is also a realm of empowerment and meaning for the skilled programmers who contribute to it, one that implicitly invokes new forms of collective creativity, unfettered by the structures of intellectual property that support the expansion of the ‘information society’ and its attendant commodification of knowledge. Yet FOSS has not delivered on the utopian aspirations that are often invested in it. It has not subverted the dominant proprietary industrial structures, nor has it ushered in a society of empowered technology users/creators. In David Berry’s words, FOSS remains “precariously balanced between the need for a common public form in which innovation and creativity can blossom and the reliance, to a large extent, on private corporations…” that push forward the commodification and enclosure of knowledge.[4]

Blueprints for agricultural technology and machinery can be found on websites like FarmHack or Atelier Paysan (CC0)

FOSS-inspired initiatives in Agriculture

Mechanized farm equipment manufacturers such as John Deere have progressively moved toward digitized, software-controlled components that require authorized software access to repair, and toward restrictive contracts that forbid repairs and modifications. This inspired hackers, first in Eastern Europe, then in the US, to develop and share hacked versions of the control software, circumventing the manufacturers’ protections. In the US, farmers who used those hacked versions joined a larger movement demanding legislation to protect ‘the right to repair.’[5]

Addressing similar concerns from a different direction, FarmHack, established in 2010 and describing itself as “a worldwide community of farmers that build and modify our own tools,” draws inspiration from the hacking culture of FOSS to promote low-cost, open farm technology. Participants share designs for farm tools and license them under ‘copyleft’ licenses. FarmHack seeks to “light the spark for a collaborative, self-governing community that builds its own capacity and content, rather than following a traditional cycle of raising money to fund top-down knowledge generation.”

In France, Atelier Paysan was set up in 2011 with a similar basic concept, offering “an on-line platform for collaboratively developing methods and practices to reclaim farming skills and achieve self-sufficiency in relation to the tools and machinery used in organic farming.” Unlike FarmHack, whose off-line presence is limited to meetups, Atelier Paysan is organized as a cooperative that owns a certain amount of equipment and provides workshops to farmers. Atelier Paysan publishes its collaborators’ designs under the same Creative Commons ‘copyleft’ license.

The enclosure and commodification of plant genome through patenting, licensing, and hybridization have spurred similar efforts. The Open Source Seed Initiative, a US organization created in 2012, describes itself as “inspired by the free and open source software movement that has provided alternatives to proprietary software,” with the goal “to free the seed – to make sure that the genes in at least some seed can never be locked away from use by intellectual property rights.” After initially trying and failing to devise a legally enforceable license, OSSI opted for a short pledge that is printed on all seed packages: “…you have the freedom to use these OSSI- Pledged seeds in any way you choose. In return, you pledge not to restrict others’ use of these seeds or their derivatives by patents or other means, and to include this Pledge with any transfer of these seeds or their derivatives.” As of today, OSSI’s list of pledged seeds numbers over 400 varieties.

Last year, a second open seeds initiative, Open Source Seeds, was unveiled in Germany; it has its institutional roots in ecological agricultural development in the Third World. Unlike the FOSS copyright-based licenses, the OSS license was devised under German civil contract law. The license, which is copyleft and covers derivatives, aims at combating market concentration. As one can expect of an organization that has been operating for less than a year, only five open source varieties are listed so far, all tomatoes.

Part 2 will question whether FOSS can stem the tide towards the commodification of agricultural knowledge. 

Gabriel Ash is a translator, software developer, writer, activist, and filmmaker. He now lives in Geneva, Switzerland.

[1] The account of FOSS below is highly indebted to David Berry’s excellent analysis in Berry, D. (2008) Copy, Rip, Burn: The Politics of Copyleft and Open Source, Pluto Press, London.

[2] See the European Conference on Precision Agriculture Sponsors, the European Parliament report on Precision Agriculture and the Future of Farming in Europe, and the European Commission’s Communication on the Future of Food and Farming.

[3] See European Commission (2005), p.4.

[4] Berry (2008), p. 144.

[5] See The Repair Association and Nebraska’s Fair Repair Bill.

The post Democratising AgTech? Agriculture and the Digital Commons | Part 1 appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/democratising-agtech-agriculture-and-the-digital-commons-part-1/2018/05/25/feed 0 71107
Frank Pasquale on the Shift from Territorial to Functional Sovereignty https://blog.p2pfoundation.net/frank-pasquale-on-the-shift-from-territorial-to-functional-sovereignty/2018/01/16 https://blog.p2pfoundation.net/frank-pasquale-on-the-shift-from-territorial-to-functional-sovereignty/2018/01/16#respond Tue, 16 Jan 2018 09:00:00 +0000 https://blog.p2pfoundation.net/?p=69274 It is very clear that power in our societies is changing. After the financialization of our economies under neoliberal globalization, we have a new layer of corporate power emerging from the platform economy. This process is very well described by Frank Pascuale in the recommended text we excerpt below, under the concept of Functional Governance.... Continue reading

The post Frank Pasquale on the Shift from Territorial to Functional Sovereignty appeared first on P2P Foundation.

]]>
It is very clear that power in our societies is changing. After the financialization of our economies under neoliberal globalization, we have a new layer of corporate power emerging from the platform economy. This process is very well described by Frank Pasquale, under the concept of functional sovereignty, in the recommended text we excerpt below. Please read the full text carefully, as well as the videotaped presentation. As Pasquale explains, these netarchical platforms, privately owned platforms that extract value from our peer-to-peer exchanges, threaten democratic accountability and the possibility of commons-based co-production, co-governance and co-ownership of value creation, through their ownership of our data, their ability to nudge our behaviours, and their capacity to take over a number of formerly public-sector functions.

However, this doesn’t mean that we are powerless; in a future installment, we will propose a strategy that also learns from the innovations of platform capitalism. The following extracts have been sourced from Open Democracy:

Frank Pasquale: As digital firms move to displace more government roles over time, from room-letting to transportation to commerce, citizens will be increasingly subject to corporate, rather than democratic, control.

Economists tend to characterize the scope of regulation as a simple matter of expanding or contracting state power. But a political economy perspective emphasizes that social relations abhor a power vacuum. When state authority contracts, private parties fill the gap. That power can feel just as oppressive, and have effects just as pervasive, as garden variety administrative agency enforcement of civil law. As Robert Lee Hale stated, “There is government whenever one person or group can tell others what they must do and when those others have to obey or suffer a penalty.”

We are familiar with that power in employer-employee relationships, or when a massive firm extracts concessions from suppliers. But what about when a firm presumes to exercise juridical power, not as a party to a conflict, but as the authority deciding it? I worry that such scenarios will become all the more common as massive digital platforms exercise more power over our commercial lives.


Consider the identity and aspirations of major digital firms. They are no longer mere market participants. Rather, in their fields, they are market makers, able to exert regulatory control over the terms on which others can sell goods and services. Moreover, they aspire to displace more government roles over time, replacing the logic of territorial sovereignty with functional sovereignty. In functional arenas from room-letting to transportation to commerce, persons will be increasingly subject to corporate, rather than democratic, control.

For example: Who needs city housing regulators when AirBnB can use data-driven methods to effectively regulate room-letting, then house-letting, and eventually urban planning generally? Why not let Amazon have its own jurisdiction or charter city, or establish special judicial procedures for Foxconn? Some vanguardists of functional sovereignty believe online rating systems could replace state occupational licensure—so rather than having government boards credential workers, a platform like LinkedIn could collect star ratings on them.


This shift from territorial to functional sovereignty is creating a new digital political economy.


Forward-thinking legal thinkers are helping us grasp these dynamics. For example, Rory van Loo has described the status of the “corporation as courthouse”—that is, when platforms like Amazon run dispute resolution schemes to settle conflicts between buyers and sellers. Van Loo describes both the efficiency gains that an Amazon settlement process might have over small claims court, and the potential pitfalls for consumers (such as opaque standards for deciding cases). I believe that, on top of such economic considerations, we may want to consider the political economic origins of e-commerce feudalism. For example, as consumer rights shrivel, it’s rational for buyers to turn to Amazon (rather than overwhelmed small claims courts) to press their case. The evisceration of class actions, the rise of arbitration, boilerplate contracts—all these make the judicial system an increasingly vestigial organ in consumer disputes. Individuals rationally turn to online giants for powers to impose order that libertarian legal doctrine stripped from the state. And in so doing, they reinforce the very dynamics that led to the state’s etiolation in the first place.

This weakness has become something of a joke with Amazon’s recent decision to incite a bidding war for its second headquarters. Mayors have abjectly begged Amazon to locate jobs in their jurisdictions. As readers of Richard Thaler’s “The Winner’s Curse” might have predicted, the competitive dynamics have tempted far too many to offer far too much in the way of incentives. As journalist Danny Westneat recently confirmed,

  • Chicago has offered to let Amazon pocket $1.32 billion in income taxes paid by its own workers.
  • Fresno has a novel plan to give Amazon special authority over how the company’s taxes are spent.
  • Boston has offered to set up an “Amazon Task Force” of city employees working on the company’s behalf.

Stonecrest, Georgia even offered to cannibalize itself, to give Bezos the chance to become mayor of a 345-acre annex that would be known as “Amazon, Georgia.”

The example of Amazon

Amazon’s rise is instructive. As Lina Khan explains, “the company has positioned itself at the center of e-commerce and now serves as essential infrastructure for a host of other businesses that depend upon it.” The “everything store” may seem like just another service in the economy—a virtual mall. But when a firm combines tens of millions of customers with a “marketing platform, a delivery and logistics network, a payment service, a credit lender, an auction house…a hardware manufacturer, and a leading host of cloud server space,” as Khan observes, it’s not just another shopping option.

Digital political economy helps us understand how platforms accumulate power. With online platforms, it’s not a simple narrative of “best service wins.” Network effects have been on the cyberlaw (and digital economics) agenda for over twenty years. Amazon’s dominance has exhibited how network effects can be self-reinforcing. The more merchants there are selling on (or to) Amazon, the better shoppers can be assured that they are searching all possible vendors. The more shoppers there are, the more vendors consider Amazon a “must-have” venue. As crowds build on either side of the platform, the middleman becomes ever more indispensable. Oh, sure, a new platform can enter the market—but until it gets access to the 480 million items Amazon sells (often at deep discounts), why should the median consumer defect to it? If I want garbage bags, do I really want to go over to Target.com to re-enter all my credit card details, create a new log-in, read the small print about shipping, and hope that this retailer can negotiate a better deal with Glad? Or do I, ala Sunstein, want a predictive shopping purveyor that intimately knows my past purchase habits, with satisfaction just a click away?
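The self-reinforcing loop sketched here (more merchants attract more shoppers, who attract more merchants) can be illustrated with a toy increasing-returns simulation; the attachment rule and all the numbers below are invented purely for illustration and are not a model of Amazon's actual economics.

```python
# A stylized toy model of increasing returns on a two-sided platform: each
# newcomer joins one of two platforms with a probability that rises more than
# proportionally with the platform's current size, so an early lead compounds.
# All numbers are invented for illustration only.
import random

def market_shares(initial_a=600, initial_b=400, newcomers=100_000, seed=1):
    random.seed(seed)
    a, b = initial_a, initial_b
    for _ in range(newcomers):
        # Increasing returns: attractiveness grows with the square of size,
        # a crude stand-in for "more merchants -> more shoppers -> ..."
        p_a = a ** 2 / (a ** 2 + b ** 2)
        if random.random() < p_a:
            a += 1
        else:
            b += 1
    total = a + b
    return a / total, b / total

if __name__ == "__main__":
    share_a, share_b = market_shares()
    print(f"platform A: {share_a:.1%}, platform B: {share_b:.1%}")
```

Under this toy rule, even a modest head start lets the larger platform capture nearly all newcomers; the point is not the exact numbers but the lock-in dynamic the excerpt describes.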

As artificial intelligence improves, the tracking of shopping into the Amazon groove will tend to become ever more rational for both buyers and sellers. Like a path through a forest trod ever clearer of debris, it becomes the natural default. To examine just one of many centripetal forces sucking money, data, and commerce into online behemoths, play out game theoretically how the possibility of online conflict redounds in Amazon’s favor. If you have a problem with a merchant online, do you want to pursue it as a one-off buyer? Or as someone whose reputation has been established over dozens or hundreds of transactions—and someone who can credibly threaten to deny Amazon hundreds or thousands of dollars of revenue each year? The same goes for merchants: The more tribute they can pay to Amazon, the more likely they are to achieve visibility in search results and attention (and perhaps even favor) when disputes come up. What Bruce Schneier said about security is increasingly true of commerce online: You want to be in the good graces of one of the neo-feudal giants who bring order to a lawless realm. Yet few hesitate to think about exactly how the digital lords might use their data advantages against those they ostensibly protect.

Photo by thisisbossi

The post Frank Pasquale on the Shift from Territorial to Functional Sovereignty appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/frank-pasquale-on-the-shift-from-territorial-to-functional-sovereignty/2018/01/16/feed 0 69274