George Anadiotis – P2P Foundation https://blog.p2pfoundation.net – Researching, documenting and promoting peer to peer practices

Make software great again: can open source be ethical and fair? – Mon, 02 Mar 2020

The post Make software great again: can open source be ethical and fair? appeared first on P2P Foundation.

Is there a way to go beyond open source, and have ethical, fair software in a cloud-first world? This is what some people in the open source community think.

In the 20 years since its inception, open source has turned out to be the most successful model for building software. The world today runs on open-source software (OSS). An ecosystem has been created around OSS. Businesses and software builders use OSS directly or indirectly, while others offer services and products based on OSS.

OSS is perceived as being free, fair and/or ethical. This perception, however, may not be entirely true. That may be counter-intuitive, but it’s at the heart of the debate around OSS. As OSS is growing up, it’s becoming more successful, more complex, and ubiquitous. It seems we are entering a new phase for OSS, and it’s not without growing pains.

Commercial OSS in the cloud

The four essential freedoms are a cornerstone of OSS. They refer to what users can do with the software, but they tell us nothing about the economic cost, or benefit, related to the software. OSS is free as in speech, but not free as in beer. Someone has to build the software, and then someone has to maintain, run, and manage it.

As far as the perception of OSS being fair or ethical goes: it’s just that – a perception. The perception stems from the OSS community ethos, but in reality, the OSS freedoms are at odds with notions of fair or ethical use. Anyone can contribute as much or as little as they please to OSS. Anyone can use OSS for any purpose, regardless of contribution.

This has led to where we are today. Cloud vendors like AWS, Google, and Microsoft have built their infrastructure on OSS. Each of them also contributes to OSS in many ways, including code and outreach for existing OSS projects, as well as establishing new OSS projects. But use of, or contribution to, each OSS project is not really accounted for.

There are many pieces in the open source software puzzle. Photo by Hans-Peter Gauster on Unsplash

Recently, the Apache Software Foundation, one of the key OSS institutions, celebrated its 20th anniversary. The ASF estimates the value of the software under its auspices at around $20 billion. Everyone is entitled to use the software for free, and many do. But the ones who create this value are the ones who contribute to OSS, be it in code or in other ways.

As analyses have shown, many OSS contributors do this because they are intrinsically motivated: the software is interesting to them, they need it, or they feel good about their contribution. In that respect, they are not much different from vendors that have chosen to build OSS products. Those vendors have invested in their OSS, and their ROI depends on it.

Which brings us to cloud vendors. As many pundits note, cloud vendors operate on a whole different plane. If commercial OSS vendors are about taking innovation from 0 to 1, cloud vendors are about taking it from 1 to n. This brings value in and by itself. Cloud vendors also release OSS projects of their own, and contribute to existing ones. Their strategies, however, differ, and this is where things get complicated.

AWS is the leader in the cloud market. The strategy AWS has adopted with regards to OSS, however, has exposed it to criticism. Recently, an independent data-driven analysis was done on GitHub, where OSS code lives. The analysis showed that in terms of code, AWS does not seem to be contributing much to the development of the OSS products it offers as a service.

It’s understandable why vendors building those products are looking to tweak their licenses to disallow AWS from running their software as a service. It’s also understandable why the OSI, which has control over OSS licenses, is pushing back: by introducing those tweaks, the software is no longer OSS.

If this were just a clash of commercial interests, we might be getting our popcorn to watch. But for something with such high value to society at large as OSS, the ramifications are important. Is there a way everyone involved can get a fair share of the profit, and keep contributing to OSS? Let’s hear what two CEOs from vendors who build OSS, and work with AWS, have to say.

The co-opetition view: one big act vs. many small ones

Dor Laor is the founder and CEO of ScyllaDB, an OSS vendor with an interesting story. ScyllaDB was built on a contentious premise, as it is a re-implementation of another OSS database: Apache Cassandra. Laor has shared thoughts on OSS license changes, as well as Amazon’s latest move to offer Cassandra as a managed service on AWS cloud.

Our discussion started by touching upon ScyllaDB’s latest features. According to Laor, these features (most prominently lightweight transactions) do not just bring parity with Cassandra, but go one step further. Laor expanded on the technical aspects of ScyllaDB’s solution. As these seemed technically sound, yet conceptually simple, the discussion moved to a broader topic.

ScyllaDB exemplifies the complexity of open source software: built on existing software and APIs, while being open source itself. Image: ScyllaDB

Laor claimed none of ScyllaDB’s closest matches, namely Apache Cassandra and AWS DynamoDB, have such features. When asked why he thinks that is, given the nature of those features, Laor offered two answers.

For Cassandra, he mentioned that for the last few years its former main contributor, namely DataStax, has taken a step back. Naturally, this has stalled Cassandra’s development considerably. As for AWS, Laor noted that AWS has the tendency to offer products that are good enough, but not necessarily the best in their league.

As ScyllaDB is also available on AWS, and Laor was present at AWS’s main event, re:Invent, in 2019, he offered a metaphor to explain this. Laor said there were a number of stages set up for various acts at the re:Invent after-party, and he found all of them mediocre. He went on to add that he sees this as a metaphor for AWS’s philosophy of going wide, rather than deep, in its undertakings. This is a point other OSS vendors share in their strategies, too.

But ScyllaDB went beyond that, to do something no other OSS vendor we know of has done before: offer a compatibility layer for one of AWS’ products, namely DynamoDB. ScyllaDB’s DynamoDB API support will be officially available soon, and it will enable DynamoDB users to migrate to ScyllaDB. Laor said there is a waiting list for this.

This is technically feasible, and legally permissible. Unless things change, there are no restrictions on using APIs, as per the famous Oracle vs. Google case verdict. While some of AWS’ own people questioned this move, Laor claimed users are better off using ScyllaDB. In turn, this opens up some interesting questions. What about ethics, and contribution?

Building a new implementation of an existing API seems cleaner than using someone else’s implementation, but it still means benefiting from a userbase others built. Laor acknowledged that, as well as the fact that ScyllaDB leverages contributions from Amazon, Cassandra, and DataStax. He also pointed out that this spurs innovation and benefits users, and measuring contribution is very hard.

ScyllaDB has an open core strategy. Some features are proprietary, while the OSS core is licensed under AGPL, which Laor said AWS avoids. So far this has worked in deterring AWS from offering ScyllaDB as a service, although it could also be that ScyllaDB has not reached critical mass yet. In any case, as Laor said, these things change.

The collaboration view: balancing OSS makers and takers

Most OSS products fall under one of two categories. Many products are largely driven by a single vendor, whose employees contribute most of the related effort and drive its directions. Other products leverage contributions that cross-cut organizations who employ the contributors; often, OSS work is the main activity for such contributors.

But there is an OSS product in which the vendor commercializing it only contributes 5% of its code while still being the largest contributor. The product is commercially successful, has a community-driven decision making process, and is a distinguished AWS partner, too. And these are not the only reasons why Acquia, the vendor commercializing the Drupal CMS, and Dries Buytaert, its founder, stand out.

Recently, Buytaert shared his thoughts on balancing OSS makers and takers in an elaborate blog post. In our discussion, Buytaert confessed it took him a couple of weeks to put his post together. This is understandable, considering how many aspects of OSS it touches upon.

If makers and takers in the open source ecosystem can’t be balanced, the ecosystem won’t be sustainable. Image: Dries Buytaert

Drupal started in 2000, while Acquia was founded in 2007. As Buytaert highlighted, Acquia and the Drupal community have a unique relationship, which is formally documented in a charter. The community includes about 80,000 contributors, while Acquia employs about 1,000 people.

Yet Drupal’s governance does not rest with Acquia. The community sets Drupal’s roadmap, and elects people to leadership roles. People choose to contribute to the areas that matter most to them, and Acquia does this, too. Buytaert said that even when there is a decision Acquia does not agree with, the decision is carried through, if there is substantial backing for it.

Buytaert builds on the notion of OSS as part of the Commons, introducing an important distinction. For end users, OSS projects are public goods; the shared resource is the software. But for OSS companies, OSS projects are common goods; the shared resource is the (potential) customer. Makers invest heavily in the software, takers are mostly interested in customers.

Buytaert, leveraging Elinor Ostrom’s work in addition to his own experience, seems to have gotten to the heart of the issue. Research shows that when the Commons are left unchecked, without governance or rules for contribution, they collapse: shared resources are either engulfed or exhausted.

Organizations like the ASF and the OSI have done a good job in making OSS successful. But now that OSS is successful, without a mechanism for fair reward in place, we have no reason to believe OSS will not have the fate of Commons that preceded it. This is why we wondered whether the OSI should perhaps reconsider. Apparently, we are not the only ones, and the OSI seems to be listening.

Ethical software

First off, there seems to be an ongoing debate within the OSI itself as to what should constitute an OSS license today. This goes to show that what worked 20 years ago is not necessarily what works today. In addition, more and more people seem to be realizing the OSS conundrum, and are sharing ideas on how to move forward. Buytaert, for his part, offers three concrete proposals.

One, don’t just appeal to organizations’ self-interest, but also to their fairness principles. Two, encourage end users to offer selective benefits to Makers. Three, experiment with new licenses. Those points were also backed by Laor, who urged users to consciously vet their OSS providers for fairness, and pointed to precedents like the Open Invention Network.

One thing is clear: AWS should not be excluded; it’s a vital part of the OSS ecosystem. That this is a complex ecosystem, with many actors that need to strike a balance, is something many people agree on. This includes Buytaert, Laor, and AWS VP/Distinguished Engineer Matthew Wilson, a self-proclaimed “OSS romantic”, to name but a few.

Buytaert also agreed with Laor that while AWS is a good partner to have, if it decided to start offering ScyllaDB or Drupal as a managed service on its own, there would be nothing they could do to stop it. Buytaert was also clear on something else: making OSS sustainable may require a break with OSS as we know it. But if that’s what it takes, so be it.

This also seems to be the gist of Wilson’s position as stated in a number of Twitter threads: this is how OSS works. If you are not happy with it, do it differently – just don’t call it OSS. This is a fair point, made by others, too. Recently Stephen Walli, principal program manager on the Azure engineering team at Microsoft and an OSS veteran, shared his ideas on Software Freedom in a Post Open Source World.

Walli went through the history of OSS, the four essential freedoms, and the ways and reasons people challenge how OSS works. Walli’s message is along similar lines: “I am happy for people to challenge the ideas that define our software collaborations and culture of outbound sharing. But I want them to be bold. If you want to define a new movement then do so.”

Ethical Source is trying to define a new movement

Some people call it Commercial OSS, others Cloud Native OSS. Either way, it’s not just commercial interests that question how OSS works today. It’s also people concerned about the ethical implications of OSS. Although it could be argued that fairness touches upon ethics too, Coraline Ada Ehmke and the Ethical Source Movement (ESM) have a somewhat different angle.

Ehmke, who founded the ESM, is a software engineer, a public speaker, and has been an active OSS participant since the early 2000s. Ehmke, who previously stated that “OSI and FSF are not the real arbiters of what is Open Source and what is Free Software” is now running for the board of directors of the OSI, and the OSI’s VP seems open to engaging with her. The ESM states:

“Today, the same OSS that enriches the commons and powers innovation also plays a critical role in mass surveillance, anti-immigrant violence, protester suppression, racist policing, the deployment of cruel and inhumane weapons, and other human rights abuses all over the world.

We want to do something about this misuse of our software. But as developers we don’t seem to have any recourse, no way to prevent our work from being used to harm others. We want to change that”.

Fair software

The definition of Ethical Software breaks with the four essential freedoms of OSS, giving rise to licenses such as the Hippocratic or the Atmosphere licenses. This raises questions, including how to enforce such licenses. Though a definite answer is not readily available, for the time being the thinking seems to be that, on a first level, fear of being exposed for violations should act as a deterrent. People seem sympathetic to the notion.

Ethical software licenses are not the only OSS variant around, however. There is also the Fair Source License, allowing users to view, download, execute, and modify code free of charge. Up to a certain number of users from an organization can use the code for free, too. After an organization hits that user limit, it will start paying a licensing fee determined by the software publisher.

Fair Source was created by Sourcegraph and drafted by Heather Meeker, a prominent OSS lawyer who also drafted the Commons Clause for RedisLabs. Fair Source was featured in Wired, and received praise from GitLab, but it does not look like it has gained much traction. The reason is probably that, as things stand, Fair Source is not an OSS-compatible license either.

Fair Source is another variant on Open Source, but adoption remains low.

This all seems to be pointing somewhere: perhaps we’ve reached the limits of what OSS in its current form can do. People are realizing it, and questioning the status quo. Whether that will lead somewhere remains to be seen. But the first steps are being taken, and the potential seems to be there. OSS was a bold step in its time, too, and its pioneers paved the way.

To wrap up, let us revisit the “quantifying OSS contribution is hard, and it’s not only about code” argument. This is true beyond a shadow of a doubt. But before dismissing quantification as mission impossible, we should consider a few things.

Commercial OSS vendors are building platforms to power today’s data-driven economy. As a third-party analysis of GitHub data shows, they, expectedly, seem to be the key contributors to their own codebases. While there may be communities of practice built around the products, in most cases we would assume vendors do much of the non-code work too: promotion, support, and so on.

OSS vendors have people on their payrolls who contribute to these tasks. Presumably, these people leave the digital footprint of their work on all sorts of systems: from OSS code repositories to issue trackers, HR and project management tools, spreadsheets, and social media. Nobody should be more motivated, or better positioned, to develop a holistic, data-driven model for OSS contribution than commercial OSS vendors.

Doing this would make their claims much more grounded. To be entirely fair, commercial OSS vendors should also apply this to external contributions, be it from individuals or from organizations such as cloud vendors. And to back claims about putting OSS sustainability and the common good first, changing their status to a B Corporation might help, too.

To get over the OSS midlife crisis, and make software great again, leadership is paramount. There is no doubt the amount of legal, social, software, and data engineering needed to evolve OSS is staggering. But OSS is so important, that it would be irresponsible to shy away from it. Some OSS leaders are showing the way. Opinions may vary, but the issue is being acknowledged. Who would not want to have ethical, fair, open-source software available on demand in the cloud?

This is a chance for everyone to put their data to good use. Amazon, as well as the commercial OSS vendors, is a leader in its own way. They have great power, and with it comes great responsibility. The way other cloud vendors deal with OSS vendors may not be perfect, but it’s a start. We’d like to see that taken to the next level, and extended to the entire industry.

Coming up with a way to fix commercial OSS by measuring and rewarding contribution is something that will not just benefit vendors, but the world at large. So if not them, who? If not now, when?

Originally published on Linked Data Orchestration under CC BY-SA 4.0

Book of the Day: The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity – Fri, 11 Jan 2019

The post Book of the Day: The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity appeared first on P2P Foundation.

Is technology the answer to life, the universe and everything?

A brief account of human history. Technology and economics 101. The human brain, belief systems and metaphysics. And lots of AI. That’s what’s included in Byron Reese’s book The Fourth Age, featured at CES 2019. There is no lack of ambition, or of ability to negotiate a variety of topics. But while the book succeeds in this, and shows a methodical approach and intellectual honesty, its optimistic lens hampers its analysis and borders on solutionism.

“In The Fourth Age, Byron Reese offers the reader something much more valuable than what to think about Artificial Intelligence and robotics — he focuses on HOW to think about these technologies, and the ways in which they will change the world forever”.

“While we can probably agree that the exact future of AI has a lot of unknowns, and hence potential dangers, it doesn’t change the fact that we can choose to view the possibilities through an optimistic lens, as Reese does here”.

These are just some of the reviews people have written about The Fourth Age. The former belongs to John Mackey, co-founder and CEO, Whole Foods Market. The latter to an anonymous reviewer. They are both valid, in their own way. This may seem paradoxical at first, so an explanation is due.

Assumptions, and a brief history of everything

Byron Reese is the CEO and publisher of the technology research company Gigaom, and the founder of several high-tech companies. Reese has a keen interest in AI, and hosts the Voices in AI podcast. Reese gets to interact with some of AI’s top minds and entrepreneurs regularly, and is presumably embedded in the tech and entrepreneurship culture. This is the book’s greatest asset and most formidable liability at the same time.

Reese does a good job at presenting a brief history of everything: the course of humanity from prehistoric time to today, and how technology has evolved and affected humanity through the ages. This sets the stage well, and Reese also ventures on more ambitious undertakings, negotiating topics such as economics and labor, the human brain, free will and consciousness.

It may seem overly ambitious, but the fact is that when dealing with artificial intelligence and the future, addressing human intelligence and history is a necessary foundation. The good thing about how Reese approaches such topics is that he presents concise overviews of alternative theories or beliefs, showing how each assumption may lead to different conclusions.

The well-made point is that ultimately, some things are less about technology itself, and more about our fundamental assumptions about the world. If you believe in the divine nature of the human soul, for example, it’s hard to see how you can also believe in the possibility of creating AI with consciousness. Reese states that he makes no effort to conceal his own assumptions, and that much is true.

Ideology and cognitive bias

Reese does mention ideology as a certain cognitive bias, for example claiming that Marx believed machines were at odds with workers. Marx certainly was no Luddite; his work shows admiration for technological progress, but questions the control of the means of production and the distribution of the fruits of this progress. But misrepresentation is not really the issue here – we could attribute this to what is probably a casual acquaintance with Marx’s work.

The issue is that Reese displays this ideological bias himself, albeit from a different standpoint. While he offers a grounded analysis of how capital accumulation interacts with technology to widen income inequality, for example, the conclusions he draws from this analysis can only be justified through the lens of ideology and excessive optimism.

Reese also discusses universal basic income as a means of accounting for technological disruption in labor and income inequality, citing statistics, quoting Warren Buffet, and even referring to the commons to build a case. While this seems like an open-minded approach, when Reese offers his own version of a vision for the future, his view on the topic is astonishing.

Reese’s view seems to be that in the long run, income inequality does not matter, because there will be abundance for everyone. This is the well-known “rising tide lifts all boats” argument, taken to its logical extreme. Unfortunately, the issues with it are equally extreme.


Does technology equal infinite growth? There’s something missing from this picture. Photo by Simon Marsault 🇫🇷 on Unsplash

Infinite growth and climate change

What this basically says is that there is no limit to natural resources. This implies either infinite growth on a finite planet, or interplanetary travel and technological breakthroughs that offer practically infinite resources. That world may be a very interesting place, as shown in Iain M. Banks’s Culture series. But it’s far from being our world, and seeing this as the end-all is not only misguided, but ultimately dangerous.

Our biggest challenge as a species at this time is not interplanetary travel or conscious AI; it is survival. Our current trajectory is towards irreversible climate change, resource depletion, environmental doom, and everything that goes with them. Reese is in the camp of those who think exponential technological progress can, and will, solve everything. Even if it can – and that’s a very big if, and a convenient way to kick the can down the road – this is a short-sighted view.

According to the UN, humanity has 10 years to act before the damage to Earth and its climate becomes irreversible. One would expect this to be a concern for a book which is about, well, the future. We are not talking about some vague or remote possibility, after all, but about the most crucial challenge humanity must deal with to even have a future.

Reese mentions climate change in passing just once in the entire book, while he devotes chapters to things such as implants. This seems like a glaring omission for a book that is about the future of humanity – maybe it’s not futuristic enough to be popular. Judging by his belief that everything is a technical issue, perhaps Reese also believes that something like geoengineering can solve the problem within 10 years.

Decision making and Deus ex Machina

Which brings us to another issue. Dealing with climate change requires decision making, coordination and action on a global scale. Reese believes that the underclass has a say in decision making in democracies. Another oversight in the “inequality does not matter” argument is that money does not just represent buying power; it also represents decision-making power. When income inequality is left unchecked, decision-making power follows.

Buying a bigger TV is not the same as deciding the world needs more TVs. Reese claims we have collectively opted for a “better standard of living”. Perhaps it would be more accurate to say that we have been collectively indoctrinated to consume.

Reese does mention that people have the power to step up when given a chance. So it’s quite interesting that the innovation that is praised when applied to technology is so cautiously, if at all, applied to decision making and education. Democracy, often referred to as the means to counter decision-making inequality, is not that different today from what it was in ancient Greece: it warrants equality only among a closed group of the privileged.

Reese’s view is optimistic here, too: the patricians will not risk social upheaval, and will therefore grant something to the plebeians. Maybe so. But if history is anything to go by, the patricians may need a little push. Meanwhile, time is running out. So what may turn out to be the biggest obstacle towards this bright future of automation is the fact that social progress is not keeping up with technological progress. It may well be, in fact, that AI favors tyranny.

We are collectively unable to keep up with technology in terms of the evolution of our social structure and cognitive biases. Even if technological progress and the economic system that dictates infinite growth were to simply come to a halt now, we would still need time to level the playing field.

Offering more technology as the solution to everything is like giving a mad gunman an infinitely more powerful gun, in the hope he will use it better than the one he now has. Placing our hopes on AI that will sort everything out is like waiting for a Deus ex Machina.

Yes, technology offers the potential for a better society. But only if used wisely and fairly, and this is the part we are missing and need to focus on.
We need to reform the mad gunman, and no AI is going to do this for us.

Disclosure: The Fourth Age was provided to me free of charge for review via Gigaom. I used to have a business relationship with Gigaom before Byron Reese became its CEO. After Gigaom was shut down by its former management, myself as well as a number of people who had outstanding invoices with Gigaom lost their money. To the best of my knowledge, none of this debt has been repaid by Gigaom’s new management.

Just another Cyber Monday: Amazing Amazon and the best deal ever – Mon, 26 Nov 2018

The post Just another Cyber Monday: Amazing Amazon and the best deal ever appeared first on P2P Foundation.

When you get something at 80% off on Amazon, who do you think wins — you or Amazon? If you think that’s a strange question, you ain’t seen nothing yet. Maybe it’s time we re:Invent some things.

But how can getting a huge discount possibly be bad? It’s not, if you actually need what you’re buying, and know what you’re buying into. Do you?

Do you know what you’re getting out of that Black Friday deal?

Have you carefully considered your needs and decided a 21″ plasma TV for the bathroom is going to make your life better? Then by all means, do get it on Black Friday rather than any other day. Do your market research, compare prices and features, track your model of choice, and wait for Black Friday to get it. And get it where you can get the best deal — quite likely, Amazon.

That may be a preposterous example, but there’s a reason for seemingly irrational, compulsive buying behavior: shopping feels good. It releases dopamine in your brain, a chemical that triggers your reward centers. And if you buy things at a discount, the chemical kick is even harder.

It’s just the way our brains are wired, tracing back to our hunter-gatherer history. You may not know or get it, but Amazon sure does. So let’s reframe that question: who would you say is more business-savvy — you or Amazon? At the risk of getting ahead of ourselves, we have to go with Amazon here. So why would Amazon give you this kind of deal, and what do you really get out of it?

Amazing Amazon

You probably know the Amazon story already. What has enabled it to go from a fringe online bookstore in 1994 to one of the most important forces shaping the world in 2017 is a combination of foresight and execution, technology and business acumen.

Amazon has a demonstrated ability to see what the latest technology can do for its business and integrate it faster and better than the competition. Online shopping was just the beginning: after a certain point, Amazon was not just pioneering the fusion of existing technology and business models, but also developing new ones.

Amazon went from selling physical goods online to making goods such as books digital and giving out the medium on which to consume them, building an empire in the process. It also expanded the range of what is sold online and built a vast logistics network to support physical delivery. Today Amazon dominates retail to such an extent that its orders account for up to 15% of international shipping.

Amazon has a huge impact in the world, both digital and physical

All that is not even taking into account Amazon’s recent acquisition of Whole Foods, which, combined with its once more pioneering use of digital technology in the physical realm, could mean it will soon dominate not just what lands on your desk but also what lands on your table.

Amazon has also been a force for digital transformation. The cloud, machine learning and product recommendations, voice-activated conversational interfaces — these are just some of the most visible ways in which Amazon and its ilk have pushed technology forward.

Amazon really is amazing. There’s just one problem: the only thing on Amazon’s agenda is Amazon.

That’s not to say that everyone at Amazon is rotten of course — far from it. There are extremely smart people working for Amazon, and some of them are trying to promote commendable causes too. And all this technology makes things better, faster, cheaper for everyone, right?

Black Friday

Do you know where the term Black Friday comes from? It was first used in a rather different way, by employers and workers. As Thanksgiving falls on a Thursday, the temptation for workers to call in sick on Friday and enjoy a four-day weekend was just too big. On the other hand, since stores are open on that day, people still go out and shop.

The combination of reduced manpower and increased demand is what made employers start calling this day Black Friday, as black had a negative ring to it. Eventually, marketing succeeded in making this an iconic shopping day, so the connotation is no longer negative. Not unless you’re a worker, anyway, which brings us to an interesting point.

This Black Friday, Amazon workers across Europe were on strike. Furthermore, grass-roots initiatives are calling for demonstrations and boycotts against Amazon, and there is a Greenpeace campaign in progress to make and repair things rather than buying more. Before you get all upset about your order possibly arriving late, it’s worth examining the reasons behind this.

Amazon has been known to push its workers to their limits. This means minimum wage, harsh working conditions and doing everything in its power to keep them from unionizing. That includes offshoring and hiring workers from agencies as temps, even though they may be in fact covering permanent positions. In that respect of course Amazon is not that different from other employers.

Not what most people would think of when talking about Black Friday workers, but there are more connections than you think

You could even argue Amazon sort of has to do this. If others do it and it’s legal, how else would it be able to compete, and why would it not do it? After all, keeping costs down and pushing people to move as many packages as quickly as possible means you can get your order cheaply and on the next day, which is great. It’s great if you’re a consumer, and it’s great if you’re Amazon.

So why care about some workers doing low-paid, low-skill jobs? Their jobs will soon be automated anyway, and rightly so. Amazon is already automating its warehouses, meaning things will be done faster and smoother. Less manual effort, fewer accidents, fewer people needed, and no strikes either. And soon even the Mechanical Turks will not be needed; these tasks are better done by machines.

But what will the people whose jobs are made redundant do?

Brilliant machines

Of course, it’s not the first time we’re seeing something like this. Before the industrial revolution, most of the population used to work in agriculture, and now only 2% does. There are all sorts of jobs nobody could possibly think of at the time which are now made possible by technology. Technology creates jobs, is the adage.

But who creates technology, then? People do; workers do. You would then assume the benefits of technology should all come back to them, in a virtuous circle of sorts. Unfortunately, that’s not really the case. Even though productivity is rising, which should mean reduced working hours and increased income, this is not happening.

[There] is [a] growing gap between productivity and wages. And you can see this in the gap between productivity, a measure of the bounty of brilliant machines, and how it’s being distributed in terms of wages. If we had an inflation-adjusted, productivity-adjusted minimum wage today, it would be something like $25 [an hour]. We would not be arguing about $10.

Laura Tyson, former Chair of the US President’s Council of Economic Advisers

You may argue that there are the people making these “brilliant machines”, the people doing the low-end jobs, and the consumers. We don’t need the low-end jobs, so let’s just retrain these people. Let us all become engineers and data scientists and AI experts; problem solved then, and we can consume happily ever after, right?

Building machines that build machines

Not really. Despite what you may think, engineers and scientists are workers too. Their work may require intellectual rather than physical labor, but at the end of the day, one thing is common: what they produce does not belong to them. It matters not whether you are a cog in a machine or build the machine, as long as you don’t own it. So if we build machines that can do and build more for less, where does that surplus value go?

The best deal ever

If anything, this is the best deal the Amazons of the world, much like the Fords before them, have managed to sell. They have succeeded in riding and pushing the wave of consumerism to dissociate people from the nature of their work, to the point where they come to identify themselves as consumers rather than workers.

While it may not be true that Henry Ford started paying workers $5 wages so they could afford his cars, it is true that Amazon pays its workers in part with Amazon vouchers. This is taking an already brilliant scheme to new heights. Workers not only identify as consumers, often turning against other workers, but also keep feeding the machine they build.

So you have raw materials and infrastructure, labor that transforms that into goods and services, and their estimated value. Without labor, there is no value: extracting material and creating infrastructure also takes labor. Yet, the ones putting in the labor get a fraction of that value and zero decision making power in the companies they work for.

But what about the entrepreneurial spirit of the creators of the Amazons of the world? Surely their hard work and foresight deserve to be rewarded? As technology and automation progress, menial jobs are becoming obsolete and workers are asked to work not just hard, but smart. To take initiatives, be creative, bold, and entrepreneurial. And workers do that, but in the end it does not make much of a positive difference in their lives.

If data is the new oil, what are the oil rigs?

And what about the brave new world of big data automation? Surely, in this new digital era of innovation there are so many opportunities. All it would take to bring down these monopolies would be disruptive competition, so if we just let the market play its part it will work out in the end — or will it?

If data is the new oil, then the oil rigs for the new data monopolies that are the Amazons of the world are their data-driven products. They have come to dominate and nearly monopolize the web and digital economy to such an extent that if this is not realized and acted upon soon, it may be too late.

Wake up or scramble up

But, sure companies must understand this, right? They must care about their workers, they must have a plan to prevent social unrest, right? How does someone who automates the world’s top organizations answer that question?

“Time flies and technology waits for nobody. I have not met a single CEO, from Deutsche Bank to JP Morgan, who said to me: ‘ok, this will increase our productivity by a huge amount, but it’s going to have social impact — wait, let’s think about it’.

The most important thing right now, what our top minds should be starting to say, is how to move mankind to a higher ground. If people don’t wake up, they’ll have to scramble up — that’s my 2 cents”.

Chetan Dube, IPsoft CEO

Tyson on the other hand concludes that:

“We’re talking about machines displacing people, machines changing the ways in which people work. Who owns the machines? Who should own the machines? Perhaps what we need to think about is the way in which the workers who are working with the machines are part owners of the machines”.

So, what’s your take? How do you identify? Are you a consumer, or a worker?

Article originally published on Medium

The post Just another Cyber Monday: Amazing Amazon and the best deal ever appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/just-another-cyber-monday-amazing-amazon-and-the-best-deal-ever/2018/11/26/feed 0 73551
Artifictional Intelligence: is the Singularity or the Surrender the real threat to humanity? https://blog.p2pfoundation.net/artifictional-intelligence-is-the-singularity-or-the-surrender-the-real-threat-to-humanity/2018/09/07 https://blog.p2pfoundation.net/artifictional-intelligence-is-the-singularity-or-the-surrender-the-real-threat-to-humanity/2018/09/07#respond Fri, 07 Sep 2018 09:00:59 +0000 https://blog.p2pfoundation.net/?p=72597 Artificial intelligence is one of those things: overhyped and yet mystical, the realm of experts and yet something everyone is inclined to have an opinion on. Harry Collins is no AI expert, and yet he seems to get it in a way we could only wish more experts did. Collins is a sociologist. In his... Continue reading

The post Artifictional Intelligence: is the Singularity or the Surrender the real threat to humanity? appeared first on P2P Foundation.

]]>
Artificial intelligence is one of those things: overhyped and yet mystical, the realm of experts and yet something everyone is inclined to have an opinion on. Harry Collins is no AI expert, and yet he seems to get it in a way we could only wish more experts did.

Collins is a sociologist. In his book “Artifictional Intelligence – Against Humanity’s Surrender to Computers”, out today from Polity, Collins does many interesting things. To begin with, he argues what qualifies him to have an opinion on AI.

Collins is a sociologist of science at the School of Social Sciences, Cardiff University, Wales, and a Fellow of the British Academy. Part of his expertise is dealing with human scientific expertise, and therefore, intelligence.

It sounds plausible that figuring out what constitutes human intelligence would be a good start to figure out artificial intelligence, and Collins does a great job at it.

The impossibility claims

The gist of Collins’ argument, and the reason he wrote the book, is to warn against what he sees as a real danger: trusting AI to the point of surrendering critical thinking, and entrusting AI with more than we really should. This is summarized by his two “impossibility claims”:

1. No computer will be fluent in natural language, pass a severe Turing test and have full human-like intelligence unless it is fully embedded in normal human society.

2. No computer will be fully embedded in normal human society as a result of incremental progress based on current techniques.

There is quite some work to back up those claims, of course, and this is what Collins does throughout the ten chapters of his book. Before we embark on this kind of meta-journey of summarizing his approach, however, it might be good to start with some definitions.

The Turing test is a test designed to categorize “real” AI. At its core, it seems simple: a human tester is supposed to interact with an AI candidate in a conversational manner. If the human cannot distinguish the AI candidate from a human, then the AI has passed the Turing test and is said to display real human-like intelligence.

The Singularity is the hypothesis that the appearance of “real” artificial intelligence will lead to artificial superintelligence, bringing unforeseen consequences and unfathomable changes to human civilization. Views on the Singularity are typically polarized, seeing the evolution of AI as either ending human suffering and cares or ending humanity altogether.

This is actually a good starting point for Collins to ponder on the anthropomorphizing of AI. Why, Collins asks, do we assume that AIs would want the same things that humans want, such as dominance and affluence, and thus pose a threat to humanity?

This is a far-reaching question. It serves as a starting point to ask more questions about humanity, such as why people are, or are seen as, individualistic, how people learn, and what role society plays in learning.

Social Science

Science, and learning, argues Collins, do not happen in a monotonous, but rather in a modulated way. What this means is that rather than seeing knowledge acquisition as looking to uncover and unlock a set of predefined eternal truths, or rules, the way it progresses is also dependent on interpretation and social cues. It is, in other words, subject to co-production.

This applies, to begin with, to the directions knowledge acquisition will take. A society for which witches are a part of the mainstream discourse, for example, will have very different priorities than one in which symptomatic medicine is the norm.

But it also applies to the way observations, and data, are interpreted. This is a fundamental aspect of science, according to Collins: the data is *always* out there. Our capacity for collecting them may fluctuate with technical progress, but it is the ability to interpret them that really constitutes intelligence, and that does have a social aspect.

Collins leverages his experience from social embedding as practiced in sociology to support his view. When dealing with a hitherto unknown and incomprehensible social group, a scholar would not be able to understand its communication unless s/he is in some way embedded in it.

All knowledge is social, according to Collins. Image: biznology

Collins argues for the central position of language in intelligence, and ties it to social embedding. It would not be possible, he says, to understand a language simply by statistical analysis. Not only would that miss all the subtle cues of non-verbal communication but, as opposed to games such as Go or chess that have been mastered by computers, language is open-ended and ever-evolving.

Collins also introduces the concept of interactional expertise, and substantiates it based on his own experience over a long period of time with a group of physicists working in the field of gravitational waves.

Even though he never will be an expert who produces knowledge in the field, Collins has been able to master the topics and the language of the group over time. This has not only gotten him to be accepted as a member of the community, but has also enabled him to pass a blind test.

A blind test is similar to a Turing test: a judge, who is a practising member of the community, was unable to distinguish Collins, a non-practising member, from another practising member, based on their answers to domain specific questions. Collins argues this would never have been possible had he not been embedded in the community, and this is the core of the support for his first impossibility claim.

Top-down or Bottom-up?

As for the second impossibility claim, it has to do with the way AI works. Collins has one chapter dedicated to the currently prevalent technique in AI called Deep Learning. He explains how Deep Learning works in an approachable way, which boils down to pattern recognition based on a big enough and good enough body of precedents.

The fact that there are more data (digitized precedents) and more computing power (thanks to Moore’s Law) today is what has enabled this technique to work. The technique is not really new; it has been around for decades. We simply did not have enough data and processing power to make it work reliably and fast enough until now.

In the spirit of investigating the principle, not the technicalities, behind this approach, Collins concedes some points to its proponents. First, he assumes technical capacity will not slow down, and will soon reach the point of being able to use all human communication in transcribed form.

Second, he accepts a simplified model of the human brain as used by Ray Kurzweil, one of AI’s more prominent proponents. According to this model, the human brain is composed of a large number of pattern recognition elements. So all that intelligence boils down to is advanced pattern recognition, or the bottom-up discovery of pre-existing patterns.

Top-down, or bottom-up? Image: Organizational Physics

Collins argues, however, that although pattern recognition is a necessary precondition for intelligence, it is not a sufficient one. Patterns alone do not equal knowledge: there needs to be some meaning attached to them, and for this, language and social context are required. Language and social context are top-down constructs.

Collins therefore introduces an extended model of the human brain, in which additional inputs, coming from social context, are processed. This is in fact related to another approach in AI, labeled symbolic AI. In this top-down approach, instead of relying exclusively on pattern recognition, the idea is to encode all available knowledge in a set of facts and rules.

Collins admits that his second impossibility claim is weaker than the first one. The reason is that technical capacity may reach a point that enables us to encode all available knowledge, even tacit knowledge, a task that seems out of reach today. But then again, many things that are commonplace today seemed out of reach yesterday.

In fact, the combination of bottom-up and top-down approaches to intelligence that Collins stands behind is what many AI experts stand for as well: the most promising path to AI will not be Deep Learning alone, but a combination of Deep Learning and symbolic AI. To his credit, Collins is open-minded about this, has had very interesting conversations with leading experts in the field, and has incorporated them in the book.

Technical understanding and Ideology

There are many more interesting details than could possibly fit in a book review: Collins’ definition of six levels of AI, the fractal model of knowledge, an exploration of what an effective Turing test would be, and more.

The book is a tour de force of epistemology for the masses: easy to follow, and yet precise and well-informed. Collins tiptoes his way around philosophy and science, from Plato to Wittgenstein to AI pioneers, in a coherent way.

He also touches on issues such as the roots of capitalism and what drives human behavior, although he seems to have made a conscious choice not to go into them, possibly in the spirit of not derailing the conversation or alienating readers. In any case, his book will not only make AI approachable, but will also make you think about a variety of topics.

And, in the end, it does achieve what it set out to do. It gives a vivid warning against the Surrender, which should be about technical understanding, but perhaps even more so about ideology.

Collins, Harry M. (2018). Artifictional Intelligence: Against Humanity’s Surrender to Computers. Cambridge, UK; Malden, Massachusetts: Polity. ISBN 9781509504121.

The post Artifictional Intelligence: is the Singularity or the Surrender the real threat to humanity? appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/artifictional-intelligence-is-the-singularity-or-the-surrender-the-real-threat-to-humanity/2018/09/07/feed 0 72597
Mama, Uber just killed a man – or more https://blog.p2pfoundation.net/mama-uber-just-killed-a-man-or-more/2018/03/30 https://blog.p2pfoundation.net/mama-uber-just-killed-a-man-or-more/2018/03/30#comments Fri, 30 Mar 2018 08:00:28 +0000 https://blog.p2pfoundation.net/?p=70304 It was a woman actually, but that time finally came. Uber’s self driving car will go down in history as the first one to cause a fatality. While Uber should certainly be held responsible for this, judging Uber and its ilk on moral grounds distracts from the real issues at hand. This incident is likely... Continue reading

The post Mama, Uber just killed a man – or more appeared first on P2P Foundation.

]]>
It was a woman actually, but that time finally came. Uber’s self driving car will go down in history as the first one to cause a fatality. While Uber should certainly be held responsible for this, judging Uber and its ilk on moral grounds distracts from the real issues at hand.

This incident is likely to be treated as so many others before it: it will cause a commotion and attract attention, some fire-fighting measures will be announced, then it will slowly fade in the background and it will be business as usual.

The governor of Arizona, where the accident happened, has already withdrawn support for Uber and revoked its licence to conduct self-driving tests in the state. Others, like Nvidia, the company providing much of the technology used in self-driving cars, have called for giving Uber a chance, while at the same time holding off further testing on the streets and rolling out simulations instead. It may seem preposterous to defend Uber at a time like this, but there are some important points to be made here.

It has been argued that the goal for self-driving cars is not to be perfect, but to be better than humans. This sounds like a pragmatic position. And it is true that no technology is introduced without having its side effects and its wild west period. But this was literally an accident waiting to happen.

An accident waiting to happen

An accident waiting to happen. Image: Reuters

Part of it has to do with the process of developing and introducing new technology, and it can be that in the long run the benefits will outweigh the side effects. But there is another part of it, the wild west part, that has to do with the lack of will and ability to oversee and regulate the use of technology.

Recent research on the deep learning algorithms used in self driving cars revealed thousands of errors. This somewhat expected outcome, given the technology’s breakneck progress and rapid application, seems to have been ignored by companies and authorities alike. In all fairness, the accident that Uber’s car was involved in may not have been related to this.

The fact that this research has been ignored however should be telling. It’s not the first time Uber has been in the limelight, scrutinized and criticized, for all the wrong reasons.

Uber is still operating in London, in case you did not notice, and will continue to do so while a legal appeal process that could take a year runs its course. What’s more, the fire-fighting statements and apologetic tone adopted by newly appointed Uber top management seem to appease some, including London’s mayor.

But to focus on Uber’s misconduct and ethics, to lay personal blame and to seek and accept apologies and promises is to miss the point entirely. Uber, and organizations like Uber, are neither good nor bad – they are signs of the times. Even if Uber were run by Arizona’s Governor or London’s Mayor, it would still have the same defining qualities and effects.

To focus on Uber’s ethics is to miss the point entirely; Uber is part of the rising data monopolies. Image: derivative, original by Anya Mooney

Its efficiency is based on optimized and evolving algorithms, clever marketing and big data. Its self-centered nature is inevitable, as it has no one to answer to except its shareholders.

Uber may be revolutionary, but not for the reasons you think. A future in which car ownership is obsolete and you can be picked up in no time and driven safely and efficiently to your destination for cheap is something many people would stand behind. Except there won’t be drivers in those cars, and it will be up to Uber to run things as it sees fit.

It’s clear that the combination of big data, processing power and algorithms can progressively automate every task to the point of making it more efficient than what humans are able to achieve. Driving and dispatching is no exception, and that’s what Uber and its ilk are doing.

But that’s only part of the reason why Uber is displacing traditional taxis. The other part is Uber’s employment model. Instead of employing full-time, properly trained drivers, Uber will employ just about anyone with a car who is willing to spend hours behind the wheel.

These people will be precarious workers with minimum rights and income, be manipulated to stay on the road as long as needed, and be disposed of when self driving technology and legislation are in place – which should not be too long.

In the meanwhile, Uber can sit back and watch the divide-and-conquer strategy that has played out so well throughout history work in its favor. Uber drivers operating as an army of low-paid disposable contractors before the algorithms take over completely are inadvertently helping dispose of everyone else’s rights and livelihoods as well.

As Wired reports, New York City’s cab drivers are in crisis, and they’re blaming Uber and Lyft. Since December, four taxi drivers have killed themselves, seemingly in response to the intense financial pressures that have accompanied an increase in for-hire vehicles on the city’s streets.

So it’s freelancers versus full-time employees, and now Uber sympathizers versus the people and regulators. Uber sympathizers who have signed a petition to keep Uber on the streets of London are closing in on the one million mark, citing safety and loss of jobs. Many would probably cite innovation and better service as well.

While these claims are not entirely unfounded, they are hollow. These jobs will soon be lost anyway, and there have been enough reported incidents to undermine the safety claims. But this brings us to the core of the issue: the emerging data-driven monopolies.

Efficiency and safety are both based on a foundation of data: data collected, processed and used by Uber to power its algorithms in complete opacity. By gaining market share, Uber is amassing ever more data, in a reinforcement loop that makes it harder and harder to compete against.

The fact that Uber ditches every notion of ethics and legality in the process, by doing things such as collecting data from user devices without consent even when the application is not running, using that data to drive analytics that determine pricing, and using backdoors to spy on users and apps to evade control, is just adding insult to injury.

You can expect data monopolies to operate similarly to good old monopolies, except more efficiently. Image: Anya Mooney

But, should not the market self-regulate, and will there not be competition from other innovative companies? Let’s look at another part of the world for answers: Russia.

In Russia Uber was facing stiff competition from Yandex. Yandex is a Russia-based technology giant that dominates its home market in search, cloud services and ride hailing among other things.

Both companies have been using similar approaches to capture market share, driving prices down and owning a combined share of nearly 90% of the local market. Now Uber and Yandex Taxi have made a deal to work together, in essence forming a monopoly. What are the chances of anyone else, let alone independent drivers, competing in this landscape?

Greg Abovsky, Yandex CFO, responded to a request for comment by noting that the deal is subject to approval by Russian regulators, and arguing that since there is room for growth in the market, this is not a monopoly.

Yandex is often called the Russian Google, and this does sound a bit like what Google would sound like if they said they are not a monopoly in search because more people will be searching online in the future.

First mover advantage in the big data and AI age will be tremendously important if left unchecked. There’s an interesting implication of this however. These technologies will make the market smarter and make it possible to plan and predict market forces so as to allow us to finally achieve a planned economy.

If you’re wondering where such a bold claim may be coming from, it’s none other than Jack Ma, the founder of another one in the league of giants: Alibaba. Companies of this caliber already dwarf governments in nearly every aspect, including their ability to gather and process data.

Some economists argue that the online platform monopolies resemble central planning institutions, so it would be more “legitimate and rational” for the state to become a “super-monopoly” platform.

This may sound scary and big-brother-ish. But before we get lost in the arguments in favor of one or the other monopoly, let’s think about the real issue: allegiance and control. Where does corporate allegiance lie, and how much control do we have over it? Then what about the state?

In a world that is increasingly becoming data-driven, reinventing algorithms and institutions seems like more than a realistic option – it seems inevitable. The real question is by whom, and for whom. If we want to be actors and citizens rather than users and consumers, it’s time we reinvented our collective identity and started taking control.

This assassination of character is what we should be really worried about.

This article was first published as Keep on Uberin’ in the free world, on the Linked Data Orchestration blog.

Photo by marki1983

The post Mama, Uber just killed a man – or more appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/mama-uber-just-killed-a-man-or-more/2018/03/30/feed 2 70304