This essay was written by Tiziana Terranova and originally published on Euronomade.info.
Tiziana Terranova: This essay is the outcome of a research process which involves a series of Italian institutions of autoformazione of post-autonomist inspiration (‘free’ universities engaged in grassroots organization of public seminars, conferences, workshops etc) and anglophone social networks of scholars and researchers engaging with digital media theory and practice, officially affiliated with universities, journals and research centres, but also artists, activists, precarious knowledge workers and the like. It refers to a workshop which took place in London in January 2014, hosted by the Digital Culture Unit at the Centre for Cultural Studies (Goldsmiths’ College, University of London). The workshop was the outcome of a process of reflection and organization that started with the Italian free university collective Uninomade 2.0 in early 2013 and continued across mailing lists and websites such as Euronomade, Effimera, Commonware, I quaderni di San Precario, and others. More than a traditional essay, then, it aims to be a synthetic but hopefully also inventive document which plunges into a distributed ‘social research network’ articulating a series of problems, theses and concerns at the crossing between political theory and research into science, technology and capitalism.
What is at stake in the following is the relationship between ‘algorithms’ and ‘capital’—that is, the increasing centrality of algorithms ‘to organizational practices arising out of the centrality of information and communication technologies stretching all the way from production to circulation, from industrial logistics to financial speculation, from urban planning and design to social communication’.1 These apparently esoteric mathematical structures have also become part of the daily life of users of contemporary digital and networked media. Most users of the Internet daily interface with, or are subjected to, the powers of algorithms such as Google’s PageRank (which sorts the results of our search queries) or Facebook’s EdgeRank (which automatically decides in which order we should get our news on our feed), not to mention the many other lesser-known algorithms (Appinions, Klout, Hummingbird, PKC, Perlin noise, Cinematch, KDP Select and many more) which modulate our relationship with data, digital devices and each other. This widespread presence of algorithms in the daily life of digital culture, however, is only one of the expressions of the pervasiveness of computational techniques as they become increasingly co-extensive with processes of production, consumption and distribution displayed in logistics, finance, architecture, medicine, urban planning, infographics, advertising, dating, gaming, publishing and all kinds of creative expressions (music, graphics, dance etc).
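To make the first of these examples slightly less esoteric, here is a deliberately toy sketch, in Python, of the kind of calculation a link-analysis algorithm such as PageRank performs; it is in no sense Google's production system, and the four-page link graph, iteration count and damping value are invented for the illustration. A page is ranked highly when it is linked to by pages that are themselves ranked highly, and the ranking emerges from nothing more than repeated averaging.

```python
# A toy illustration of PageRank-style ranking; the link graph and all
# parameters below are invented for the example.
links = {
    "a": ["b", "c"],   # page "a" links to pages "b" and "c"
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}   # start every page with an equal score
damping = 0.85                                # damping factor from the published formula

for _ in range(50):                           # repeated averaging converges to a stable ranking
    rank = {
        p: (1 - damping) / len(pages)
           + damping * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        for p in pages
    }

print(sorted(rank.items(), key=lambda kv: -kv[1]))   # "c", the most linked-to page, comes first
```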
The staging of the encounter between ‘algorithms’ and ‘capital’ as a political problem invokes the possibility of breaking with the spell of ‘capitalist realism’—that is, the idea that capitalism constitutes the only possible economy while at the same time claiming that new ways of organizing the production and distribution of wealth need to seize on scientific and technological developments2. Going beyond the opposition between state and market, public and private, the concept of the common is used here as a way to instigate the thought and practice of a possible post-capitalist mode of existence for networked digital media.
Looking at algorithms from a perspective that seeks the constitution of a new political rationality around the concept of the ‘common’ means engaging with the ways in which algorithms are deeply implicated in the changing nature of automation. Automation is described by Marx as a process of absorption into the machine of the ‘general productive forces of the social brain’ such as ‘knowledge and skills’3, which hence appear as an attribute of capital rather than as the product of social labour. Looking at the history of the implication of capital and technology, it is clear how automation has evolved away from the thermo-mechanical model of the early industrial assembly line toward the electro-computational dispersed networks of contemporary capitalism. Hence it is possible to read algorithms as part of a genealogical line that, as Marx put it in the ‘Fragment on Machines’, starting with the adoption of technology by capitalism as fixed capital, pushes the former through several metamorphoses ‘whose culmination is the machine, or rather, an automatic system of machinery…set in motion by an automaton, a moving power that moves itself’4. The industrial automaton was clearly thermodynamical, and gave rise to a system ‘consisting of numerous mechanical and intellectual organs so that workers themselves are cast merely as its conscious linkages’5. The digital automaton, however, is electro-computational: it puts ‘the soul to work’ and involves primarily the nervous system and the brain and comprises ‘possibilities of virtuality, simulation, abstraction, feedback and autonomous processes’6. The digital automaton unfolds in networks consisting of electronic and nervous connections so that users themselves are cast as quasi-automatic relays of a ceaseless information flow. It is in this wider assemblage, then, that algorithms need to be located when discussing the new modes of automation.
Quoting a textbook of computer science, Andrew Goffey describes algorithms as ‘the unifying concept for all the activities which computer scientists engage in…and the fundamental entity with which computer scientists operate’7. An algorithm can be provisionally defined as the ‘description of the method by which a task is to be accomplished’ by means of sequences of steps or instructions, sets of ordered steps that operate on data and computational structures. As such, an algorithm is an abstraction, ‘having an autonomous existence independent of what computer scientists like to refer to as “implementation details,” that is, its embodiment in a particular programming language for a particular machine architecture’8. It can vary in complexity from the most simple set of rules described in natural language (such as those used to generate coordinated patterns of movement in smart mobs) to the most complex mathematical formulas involving all kinds of variables (as in the famous Monte Carlo algorithm used to solve problems in nuclear physics and later also applied to stock markets and now to the study of non-linear technological diffusion processes). At the same time, in order to work, algorithms must exist as part of assemblages that include hardware, data, data structures (such as lists, databases, memory, etc.), and the behaviours and actions of bodies. For the algorithm to become social software, in fact, ‘it must gain its power as a social or cultural artifact and process by means of a better and better accommodation to behaviors and bodies which happen on its outside’.9
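The point about an algorithm being an abstraction independent of its implementation can be made concrete with the Monte Carlo method just mentioned. The sketch below is a minimal Python illustration, not the physics or finance versions of the method; it shows the procedure in its abstract form: sample at random, count, average. Whether the samples are neutron paths, asset prices or, as here, points in a square, the method is the same.

```python
# A minimal sketch of the Monte Carlo method: estimate a quantity (here, pi)
# by drawing random samples and averaging. The sample count is arbitrary.
import random

def estimate_pi(samples: int) -> float:
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()   # a random point in the unit square
        if x * x + y * y <= 1.0:                   # did it land inside the quarter circle?
            inside += 1
    return 4.0 * inside / samples                  # the ratio of areas approximates pi

print(estimate_pi(100_000))   # tends towards 3.14159... as the number of samples grows
```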
Furthermore, as contemporary algorithms become increasingly exposed to larger and larger data sets (and in general to a growing entropy in the flow of data also known as Big Data), they are, according to Luciana Parisi, becoming something more than mere sets of instructions to be performed: ‘infinite amounts of information interfere with and re-program algorithmic procedures…and data produce alien rules’10. It seems clear from this brief account, then, that algorithms are neither a homogeneous set of techniques, nor do they guarantee ‘the infallible execution of automated order and control’11.
From the point of view of capitalism, however, algorithms are mainly a form of ‘fixed capital’—that is, they are just means of production. They encode a certain quantity of social knowledge (abstracted from that elaborated by mathematicians, programmers, but also users’ activities), but they are not valuable per se. In the current economy, they are valuable only in as much as they allow for the conversion of such knowledge into exchange value (monetization) and its (exponentially increasing) accumulation (the titanic quasi-monopolies of the social Internet). In as much as they constitute fixed capital, algorithms such as Google’s PageRank and Facebook’s EdgeRank appear ‘as a presupposition against which the value-creating power of the individual labour capacity is an infinitesimal, vanishing magnitude’12. And that is why calls for the individual remuneration of users for their ‘free labor’ are misplaced. It is clear that for Marx what needs to be compensated is not the individual work of the user, but the much larger powers of social cooperation thus unleashed, and that this compensation implies a profound transformation of the grip that the social relation that we call the capitalist economy has on society.
From the point of view of capital, then, algorithms are just fixed capital, means of production directed towards achieving an economic return. But that does not mean that, as with all technologies and techniques, this is all they are. Marx explicitly states that even as capital appropriates technology as the most effective form of the subsumption of labor, that does not mean that this is all that can be said about it. Its existence as machinery, he insists, is not ‘identical with its existence as capital… and therefore it does not follow that subsumption under the social relation of capital is the most appropriate and ultimate social relation of production for the application of machinery’.13 It is then essential to remember that the instrumental value that algorithms have for capital does not exhaust the ‘value’ of technology in general and algorithms in particular—that is, their capacity to express not just ‘use value’ as Marx put it, but also aesthetic, existential, social, and ethical values. Wasn’t it this clash between the necessity of capital to reduce software development to exchange value, thus marginalizing the aesthetic and ethical values of software creation, that pushed Richard Stallman and countless hackers and engineers towards the Free and Open Source Movement? Isn’t the enthusiasm that animates hack-meetings and hacker-spaces fueled by the energy liberated from the constraints of ‘working’ for a company in order to remain faithful to one’s own aesthetics and ethics of coding?
Contrary to some variants of Marxism which tend to identify technology completely with ‘dead labor’, ‘fixed capital’ or ‘instrumental rationality’, and hence with control and capture, it seems important to remember how, for Marx, the evolution of machinery also indexes a level of development of productive powers that are unleashed but never totally contained by the capitalist economy. What interested Marx (and what makes his work still relevant to those who strive for a post-capitalist mode of existence) is the way in which, so he claims, the tendency of capital to invest in technology to automate and hence reduce its labor costs to a minimum potentially frees up a ‘surplus’ of time and energy (labor) or an excess of productive capacity in relation to the basic, important and necessary labor of reproduction (a global economy, for example, should first of all produce enough wealth for all members of a planetary population to be adequately fed, clothed, cured and sheltered). However, what characterizes a capitalist economy is that this surplus of time and energy is not simply released, but must be constantly reabsorbed in the cycle of production of exchange value leading to increasing accumulation of wealth by the few (the collective capitalist) at the expense of the many (the multitudes).
Automation, then, when seen from the point of view of capital, must always be balanced with new ways to control (that is, absorb and exhaust) the time and energy thus released. It must produce poverty and stress when there should be wealth and leisure. It must make direct labour the measure of value even when it is apparent that science, technology and social cooperation constitute the source of the wealth produced. It thus inevitably leads to the periodic and widespread destruction of this accumulated wealth, in the form of psychic burnout, environmental catastrophe and physical destruction of the wealth through war. It creates hunger where there should be satiety, it puts food banks next to the opulence of the super-rich. That is why the notion of a post-capitalist mode of existence must become believable, that is, it must become what Maurizio Lazzarato described as an enduring autonomous focus of subjectivation. What a post-capitalist commonism then can aim for is not only a better distribution of wealth compared to the unsustainable one that we have today, but also a reclaiming of ‘disposable time’—that is, time and energy freed from work to be deployed in developing and complicating the very notion of what is ‘necessary’.
The history of capitalism has shown that automation as such has not reduced the quantity and intensity of labor demanded by managers and capitalists. On the contrary, in as much as technology is only a means of production to capital, where it has been able to deploy other means, it has not innovated. For example, industrial technologies of automation in the factory do not seem to have recently experienced any significant technological breakthroughs. Most industrial labor today is still heavily manual, automated only in the sense of being hooked up to the speed of electronic networks of prototyping, marketing and distribution; and it is rendered economically sustainable only by political means—that is, by exploiting geo-political and economic differences (arbitrage) on a global scale and by controlling migration flows through new technologies of the border. The state of things in most industries today is intensified exploitation, which produces an impoverished mode of mass production and consumption that is damaging to the body, to subjectivity, to social relations and to the environment. As Marx put it, disposable time released by automation should allow for a change in the very essence of the ‘human’ so that the new subjectivity is allowed to return to the performing of necessary labor in such a way as to redefine what is necessary and what is needed.
It is not then simply about arguing for a ‘return’ to simpler times, but on the contrary a matter of acknowledging that growing food and feeding populations, constructing shelter and adequate housing, learning and researching, caring for the children, the sick and the elderly requires the mobilization of social invention and cooperation. The whole process is thus transformed from a process of production by the many for the few steeped in impoverishment and stress to one where the many redefine the meaning of what is necessary and valuable, while inventing new ways of achieving it. This corresponds in a way to the notion of ‘commonfare’ as recently elaborated by Andrea Fumagalli and Carlo Vercellone, implying, in the latter’s words, ‘the socialization of investment and money and the question of the modes of management and organisation which allow for an authentic democratic reappropriation of the institutions of Welfare…and the ecologic re-structuring of our systems of production’13. We need to ask then not only how algorithmic automation works today (mainly in terms of control and monetization, feeding the debt economy) but also what kind of time and energy it subsumes and how it might be made to work once taken up by different social and political assemblages—autonomous ones not subsumed by or subjected to the capitalist drive to accumulation and exploitation.
In a recent intervention, digital media and political theorist Benjamin H. Bratton has argued that we are witnessing the emergence of a new nomos of the earth, where older geopolitical divisions linked to territorial sovereign powers are intersecting the new nomos of the Internet and new forms of sovereignty extending in electronic space14. This new heterogeneous nomos involves the overlapping of national governments (China, United States, European Union, Brazil, Egypt and the like), transnational bodies (the IMF, the WTO, the European Banks and NGOs of various types), and corporations such as Google, Facebook, Apple, Amazon, etc., producing differentiated patterns of mutual accommodation marked by moments of conflict. Drawing on the organizational structure of computer networks or ‘the OSI network model, upon which the TCP/IP stack and the global internet itself is indirectly based’, Bratton has developed the concept and/or prototype of the ‘stack’ to define the features of ‘a possible new nomos of the earth linking technology, nature and the human.’15 The stack supports and modulates a kind of ‘social cybernetics’ able to compose ‘both equilibrium and emergence’. As a ‘megastructure’, the stack implies a ‘confluence of interoperable standards-based complex material-information systems of systems, organized according to a vertical section, topographic model of layers and protocols…composed equally of social, human and “analog” layers (chthonic energy sources, gestures, affects, user-actants, interfaces, cities and streets, rooms and buildings, organic and inorganic envelopes) and informational, non-human computational and “digital” layers (multiplexed fiber optic cables, datacenters, databases, data standards and protocols, urban-scale networks, embedded systems, universal addressing tables)’16.
In this section, drawing on Bratton’s political prototype, I would like to propose the concept of the ‘Red Stack’—that is, a new nomos for the post-capitalist common. Materializing the ‘red stack’ involves engaging with (at least) three levels of socio-technical innovation: virtual money, social networks, and bio-hypermedia. These three levels, although ‘stacked’, that is, layered, are to be understood at the same time as interacting transversally and nonlinearly. They constitute a possible way to think about an infrastructure of autonomization linking together technology and subjectivation.
The contemporary economy, as Christian Marazzi and others have argued, is founded on a form of money which has been turned into a series of signs, with no fixed referent (such as gold) to anchor them, explicitly dependent on the computational automation of simulational models, screen media with automated displays of data (indexes, graphics etc) and algo-trading (bot-to-bot transactions) as its emerging mode of automation17. As Toni Negri also puts it, ‘money today—as abstract machine—has taken on the peculiar function of supreme measure of the values extracted out of society in the real subsumption of the latter under capital’18.
Since ownership and control of capital-money (different, as Maurizio Lazzarato reminds us, from wage-money, in its capacity to be used not only as a means of exchange, but as a means of investment empowering certain futures over others) is crucial to maintaining populations bonded to the current power relation, how can we turn financial money into the money of the common? An experiment such as Bitcoin demonstrates that in a way ‘the taboo on money has been broken’19 and that beyond the limits of this experience, forkings are already developing in different directions. What kind of relationship can be established between the algorithms of money-creation and ‘a constituent practice which affirms other criteria for the measurement of wealth, valorizing new and old collective needs outside the logic of finance’?20
Current attempts to develop new kinds of cryptocurrencies must be judged, valued and rethought on the basis of this simple question as posed by Andrea Fumagalli: Is the currency created not limited solely to being a means of exchange, but can it also affect the entire cycle of money creation – from finance to exchange?21.
Does it allow speculation and hoarding, or does it promote investment in post-capitalist projects and facilitate freedom from exploitation, autonomy of organization etc.? What is becoming increasingly clear is that algorithms are an essential part of the process of creation of the money of the common, but that algorithms also have politics (What are the gendered politics of individual ‘mining’, for example, and of the complex technical knowledge and machinery implied in mining bitcoins?). Furthermore, the drive to completely automate money production in order to escape the fallacies of subjective factors and social relations might cause such relations to come back in the form of speculative trading. In the same way as financial capital is intrinsically linked to a certain kind of subjectivity (the financial predator narrated by Hollywood cinema), so an autonomous form of money needs to be both jacked into and productive of a new kind of subjectivity not limited to the hacking milieu as such, but at the same time oriented not towards monetization and accumulation but towards the empowering of social cooperation. Other questions that the design of the money of the common might involve are: Is it possible to draw on the current financialization of the Internet by corporations such as Google (with its AdSense/AdWords programme) to subtract money from the circuit of capitalist accumulation and turn it into a money able to finance new forms of commonfare (education, research, health, environment etc)? What are the lessons to be learned from crowdfunding models and their limits in thinking about new forms of financing autonomous projects of social cooperation? How can we perfect and extend experiments such as that carried out by the Inter-Occupy movement during hurricane Sandy in turning social networks into crowdfunding networks which can then be used as logistical infrastructure able to move not only information, but also physical goods?22
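One way to see what is at stake in Fumagalli's question is to look at how a Bitcoin-style currency encodes its entire cycle of money creation as an algorithm. The sketch below is schematic rather than a reimplementation of the protocol (Bitcoin actually counts in integer satoshis), but the constants follow its published parameters: issuance is fixed in advance, halves on a set schedule and converges towards a hard cap of about 21 million coins, a design choice that rewards early hoarding rather than, say, indexing money creation to collectively decided needs.

```python
# A schematic sketch of Bitcoin-style algorithmic money creation:
# monetary policy reduced to two constants and one function.
INITIAL_REWARD = 50.0        # coins issued per block at launch
HALVING_INTERVAL = 210_000   # blocks between halvings (roughly every four years)

def block_reward(height: int) -> float:
    """New coins created by the block at the given height."""
    return INITIAL_REWARD / (2 ** (height // HALVING_INTERVAL))

print(block_reward(0), block_reward(210_000), block_reward(420_000))   # 50.0 25.0 12.5
```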
Over the past ten years, digital media have undergone a process of becoming social that has introduced genuine innovation in relation to previous forms of social software (mailing lists, forums, multi-user domains, etc). If mailing lists, for example, drew on the communicational language of sending and receiving, social network sites and the diffusion of (proprietary) social plug-ins have turned the social relation itself into the content of new computational procedures. When sending and receiving a message, we can say that algorithms operate outside the social relation as such, in the space of the transmission and distribution of messages; but social network software intervenes directly in the social relationship. Indeed, digital technologies and social network sites ‘cut into’ the social relation as such—that is, they turn it into a discrete object and introduce a new supplementary relation.23
If, with Gabriel Tarde and Michel Foucault, we understand the social relation as an asymmetrical relation involving at least two poles (one active and the other receptive) and characterized by a certain degree of freedom, we can think of actions such as liking and being liked, writing and reading, looking and being looked at, tagging and being tagged, and even buying and selling as the kind of conducts that transindividuate the social (they induce the passage from the pre-individual through the individual to the collective). In social network sites and social plug-ins these actions become discrete technical objects (like buttons, comment boxes, tags etc) which are then linked to underlying data structures (for example the social graph) and subjected to the power of ranking of algorithms. This produces the characteristic spatio-temporal modality of digital sociality today: the feed, an algorithmically customized flow of opinions, beliefs, statements, desires expressed in words, images, sounds etc. Much reviled in contemporary critical theory for their supposedly homogenizing effect, these new technologies of the social, however, also open the possibility of experimenting with many-to-many interaction and thus with the very processes of individuation. Political experiments (see the various internet-based parties such as the 5 Star Movement, Pirate Party, Partido X) draw on the powers of these new socio-technical structures in order to produce massive processes of participation and deliberation; but, as with Bitcoin, they also show the far from resolved processes that link political subjectivation to algorithmic automation. They can function, however, because they draw on widely socialized new knowledges and crafts (how to construct a profile, how to cultivate a public, how to share and comment, how to make and post photos, videos, notes, how to publicize events) and on ‘soft skills’ of expression and relation (humour, argumentation, sparring) which are not implicitly good or bad, but present a series of affordances or degrees of freedom of expression for political action that cannot be left to capitalist monopolies. However, it is not only a matter of using social networks to organize resistance and revolt, but also a question of constructing a social mode of self-information which can collect and reorganize existing drives towards autonomous and singular becomings. Given that algorithms, as we have said, cannot be unlinked from wider social assemblages, their materialization within the red stack involves the hijacking of social network technologies away from a mode of consumption and towards one whereby social networks can act as a distributed platform for learning about the world, fostering and nurturing new competences and skills, fostering planetary connections, and developing new ideas and values.
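To give a concrete sense of how such ranking operates on these discrete objects, here is a schematic reconstruction in Python of the scoring an EdgeRank-style feed algorithm is generally described as performing; it is not Facebook's actual code, and the weights, affinities and decay rate below are invented for the illustration. Each 'edge' (a like, a comment, a photo) is reduced to the product of affinity, edge weight and time decay, and the feed is simply the list of stories sorted by that score.

```python
# A toy EdgeRank-style scorer: affinity x edge weight x time decay.
# All numbers are illustrative, not Facebook's parameters.
import math

EDGE_WEIGHTS = {"comment": 4.0, "like": 1.0, "photo": 5.0}

def edge_score(affinity: float, edge_type: str, age_hours: float) -> float:
    decay = math.exp(-age_hours / 24.0)            # older stories fade from the feed
    return affinity * EDGE_WEIGHTS[edge_type] * decay

stories = [
    ("close friend's photo", edge_score(0.9, "photo", age_hours=10)),
    ("acquaintance's comment", edge_score(0.2, "comment", age_hours=1)),
    ("stranger's like", edge_score(0.05, "like", age_hours=0.5)),
]

for label, score in sorted(stories, key=lambda s: -s[1]):
    print(f"{score:.3f}  {label}")                 # the feed, as a ranked list
```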
The term bio-hypermedia, coined by Giorgio Griziotti, identifies the ever more intimate relation between bodies and devices which is part of the diffusion of smart phones, tablet computers and ubiquitous computation. As digital networks shift away from the centrality of the desktop or even laptop machine towards smaller, portable devices, a new social and technical landscape emerges around ‘apps’ and ‘clouds’ which directly ‘intervene in how we feel, perceive and understand the world’.24 Bratton defines the ‘apps’ for platforms such as Android and Apple as interfaces or membranes linking individual devices to large databases stored in the ‘cloud’ (massive data processing and storage centres owned by large corporations).25
This topological continuity has allowed for the diffusion of downloadable apps which increasingly modulate the relationship of bodies and space. Such technologies not only ‘stick to the skin and respond to the touch’ (as Bruce Sterling once put it), but create new ‘zones’ around bodies which now move through ‘coded spaces’ overlaid with information, able to locate other bodies and places within interactive, informational visual maps. New spatial ecosystems emerging at the crossing of the ‘natural’ and the artificial allow for the activation of a process of chaosmotic co-creation of urban life.26 Here again we can see how apps are, for capital, simply a means to ‘monetize’ and ‘accumulate’ data about the body’s movement while subsuming it ever more tightly in networks of consumption and surveillance. However, this subsumption of the mobile body under capital does not necessarily imply that this is the only possible use of these new technological affordances. Turning bio-hypermedia into components of the red stack (the mode of reappropriation of fixed capital in the age of the networked social) implies drawing together current experimentation with hardware (Shenzhen phone-hacking technologies, maker movements, etc.) able to support a new breed of ‘imaginary apps’ (think for example about the apps devised by the artist collective Electronic Disturbance Theatre, which allow migrants to bypass border controls, or apps able to track the origin of commodities, their degrees of exploitation, etc.).
This short essay, a synthesis of a wider research process, aims to propose another strategy for the construction of a machinic infrastructure of the common. The basic idea is that information technologies, which comprise algorithms as a central component, do not simply constitute a tool of capital, but are simultaneously constructing new potentialities for postneoliberal modes of government and postcapitalist modes of production. It is a matter here of opening possible lines of contamination with the large movements of programmers, hackers and makers involved in a process of re-coding of network architectures and information technologies based on values other than exchange and speculation, but also of acknowledging the wide process of technosocial literacy that has recently affected large swathes of the world population. It is a matter, then, of producing a convergence able to extend the problem of the reprogramming of the Internet away from recent trends towards corporatisation and monetisation at the expense of users’ freedom and control. Linking bio-informational communication to issues such as the production of a money of the commons able to socialize wealth, against current trends towards privatisation, accumulation and concentration, and saying that social networks and diffused communicational competences can also function as means to organize cooperation and produce new knowledges and values, means seeking a new political synthesis which moves us away from the neoliberal paradigm of debt, austerity and accumulation. This is not a utopia, but a program for the invention of constituent social algorithms of the common.
In addition to the sources cited above, and the texts contained in this volume, we offer the following expandable bibliographical toolkit or open desiring biblio-machine. (Instructions: pick, choose and subtract/add to form your own assemblage of self-formation for the purposes of materialization of the red stack):
— L. Baronian and C. Vercellone, Moneta Del Comune e Reddito Sociale Garantito (2013), Uninomade.
— M. Bauwens, The Social Web and Its Social Contracts: Some Notes on Social Antagonism in Netarchical Capitalism (2008), Re-Public Re-Imaging Democracy.
— F. Berardi and G. Lovink, A call to the army of love and to the army of software (2011), Nettime.
— R. Braidotti, The posthuman (Cambridge: Polity Press, 2013).
— G. E. Coleman, Coding Freedom: The Ethics and Aesthetics of Hacking (Princeton and Oxford: Princeton University Press, 2012).
— A. Fumagalli, Trasformazione del lavoro e trasformazioni del welfare: precarietà e welfare del comune (commonfare) in Europa, in P. Leon and R. Realfonso (eds), L’Economia della precarietà (Rome: Manifestolibri, 2008), 159–74.
— G. Giannelli and A. Fumagalli, Il fenomeno Bitcoin: moneta alternativa o moneta speculativa? (2013), I Quaderni di San Precario.
— G. Griziotti, D. Lovaglio and T. Terranova, Netwar 2.0: Verso una convergenza della “calle” e della rete (2012), Uninomade 2.0.
— E. Grosz, Chaos, Territory, Art (New York: Columbia University Press, 2012).
— F. Guattari, Chaosmosis: An Ethico-Aesthetic Paradigm (Indianapolis, IN: Indiana University Press, 1995).
— S. Jourdan, Game-over Bitcoin: Where Is the Next Human-Based Digital Currency? (2014).
— M. Lazzarato, Les puissances de l’invention (Paris: Les Empêcheurs de penser en rond, 2004).
— M. Lazzarato, The Making of the Indebted Man (Los Angeles: Semiotext(e), 2013).
— G. Lovink and M. Rasch (eds), Unlike Us Reader: Social Media Monopolies and their Alternatives (Amsterdam: Institute of Network Culture, 2013).
— A. Mackenzie, Programming Subjects in the Regime of Anticipation: Software Studies and Subjectivity, Subjectivity 6 (2013), 391–405.
— L. Manovich, The Poetics of Augmented Space, Visual Communication 5:2 (2006), 219–40.
— S. Mezzadra and B. Neilson, Border as Method or the Multiplication of Labor (Durham, NC: Duke University Press, 2013).
— P. D. Miller aka DJ Spooky and S. Matviyenko, The Imaginary App (Cambridge, MA: MIT Press, forthcoming).
— A. Negri, Acting in common and the limits of capital (2014), in Euronomade.
— A. Negri and M. Hardt, Commonwealth (Cambridge, MA: Belknap Press, 2009).
— M. Pasquinelli, Google’s Page Rank Algorithm: A Diagram of the Cognitive Capitalism and the Rentier of the Common Intellect (2009).
— B. Scott, Heretic’s Guide to Global Finance: Hacking the Future of Money (London: Pluto Press, 2013).
— G. Simondon, On the Mode of Existence of Technical Objects (1958), University of Western Ontario
— R. Stallman, Free Software: Free Society. Selected Essays of Richard M. Stallman (Free Software Foundation, 2002).
— A. Toscano, Gaming the Plumbing: High-Frequency Trading and the Spaces of Capital (2013), in Mute.
— I. Wilkins and B. Dragos, Destructive Distraction? An Ecological Study of High Frequency Trading, in Mute.
The post Algorithms, Capital, and the Automation of the Common appeared first on P2P Foundation.
In reality, there is nothing artificial about these algorithms or their intelligence, and the term “AI” is a mystification! The term that better describes the reality is “Human-Trained Machine Learning”, given today’s mad scramble to train these algorithms to mimic human intelligence and brain functioning. In the techie magazine WIRED, October 2018, we meet a pioneering computer scientist, Fei-Fei Li, testifying at a Congressional hearing, who underlines this truth. She said, “Humans train these algorithms” and she talked about the horrendous mistakes these machines make in mis-identifying people, using the term “bias in—bias out”, updating the old computer saying, “garbage in—garbage out”.
Professor Li described how we are ceding our authority to these algorithms to judge who gets hired, who goes to jail, who gets a loan, a mortgage or good insurance rates — and how these machines code our behavior, change our rules and our lives. She is now back at Stanford University after a time as an ethicist at Google and has started a foundation to promote the truth about AI, since she feels responsible for her role in inventing some of these algorithms herself. As a celebrated pioneer of this field, Professor Li says “There’s nothing artificial about AI. It’s inspired by people, it’s created by people and more importantly, it impacts people”.
So how did Silicon Valley invade our culture and worldwide technology programs with its short-term, money-obsessed values: “move fast and break things”; disrupt the current systems while rushing to scale and cash out with an IPO? These values are discussed in shocking detail by two insiders: Antonio G. Martinez in “Chaos Monkeys” (2016) and Bloomberg’s Emily Chang in “Brotopia” (2018). These authors explain a lot about how training these algorithms went so wrong: subconsciously mimicking their mostly male, misogynist, often white entrepreneurs and techies with their money-making monopolistic biases and often adolescent, libertarian fantasies.
I also explored all this in my article “The Future of Democracy Challenged in the Digital Age”, CADMUS, October 2018, describing all these issues of the takeover by AI of our economic sectors; from manufacturing, transport, education, retail, media, law, medicine, agriculture, to banking, insurance and finance. While many of these sectors have become more efficient and profitable for the shareholders, my conclusion in “The Idiocy of Things” critiqued the connecting of all appliances in so-called “smart homes” as quite hazardous and an invasion of privacy. I urged humans to take back control from the over-funded, over-invested, over-paid computer and information science sectors too often focused on corporate efficiency and cost-saving goals driven by the profit targets demanded by Wall Street.
I have called for an extension of the English law, settled in the year 1215: “habeas corpus” affirming that humans own their own bodies. This extension would cover ownership of our brains and all our information we generate in an updated “information habeas corpus”. Since May 2018, European law has ratified this with its General Data Protection Regulation (GDPR), which stipulates that individuals using social media platforms, or any other social system do indeed retain ownership of all their personal data.
So, laws are beginning to catch up with the inhuman uses of human beings, with our hard-earned skills being used to train algorithms that then replace us! The computer algorithm trainers then employ out-of-work people surviving in the gig economy on Mechanical Turk and Task Rabbit sites, in minimally paid, hourly data-entry tasks to train these algorithms!
Scientist Jaron Lanier in his “Ten Arguments for Deleting Your Social Media Accounts Now” (2018) shows how social media are manipulating us with algorithms to engineer changes in our behavior, by engaging our attention with clickbait and content that arouses our emotions, fears and rage, playing on some of the divisions in our society to keep us on their sites. This helps drive ad sales and their gargantuan profits and rapid global growth. Time to rethink all this, beyond the dire alarms raised by Bill Gates, Elon Musk and the late Stephen Hawking that these algorithms we are teaching will soon take over and may harm or kill us as did HAL in the movie “2001”.
Why indeed are we spending all this money to train machines while short-changing our children, our teachers and schools? Training our children’s brains must take priority! Instead of training machines to hijack our attention and sell our personal data to marketers for profit — let’s steer funds into tripling efforts to train and pay our teachers, upgrade schools and curricula with courses on civic responsibility, justice, community values, freedoms under habeas corpus (women also own their own bodies!) and how ethics and trust are the basis of all market and societies.
Why all the expensive efforts to enhance machine learning to teach algorithms to recognize human faces, guide killer drones, falsify video images and further modify our behavior and capture our eyeballs with click bait, devising and spreading content that angers and outrages — further dividing us and disrupting democracies?
Let’s rein in the Big Brother ambitions of the new techno-oligopolists. As a wise NASA scientist, following Norbert Wiener’s The Human Use of Human Beings (1950), reminded us in 1965 about the value of humans: “Man (SIC) is the lowest-cost, 150 pound, nonlinear all-purpose computer system which can be mass-produced by un-skilled labor”, quoted in Foreign Affairs, July-August, 2015, p. 11. Time for common sense!
Hazel Henderson© 2018
Hazel Henderson D.Sc.Hon., FRSA, is founder of Ethical Markets Media, LLC and producer of its TV series. She is a world renowned futurist, evolutionary economist, a worldwide syndicated columnist, consultant on sustainable development, and author of The Axiom and Nautilus award-winning book Ethical Markets: Growing the Green Economy (2006) and eight other books.
Her editorials appear in 27 languages and 200 newspapers syndicated by Inter Press Service, and her book reviews appear on SeekingAlpha.com. Her articles have appeared in over 250 journals, including (in USA) Harvard Business Review, New York Times, Christian Science Monitor; and Challenge, Mainichi (Japan), El Diario (Venezuela), World Economic Herald (China), LeMonde Diplomatique (France) and Australian Financial Review.
The post Let’s train humans first…before we train machines appeared first on P2P Foundation.
While Mariana Mazzucato’s criticisms of surveillance capitalism are spot on, her proposed remedy is as far from the mark as it possibly could be.
Mariana starts off by making the case, and rightly so, that surveillance capitalists2 like Google or Facebook “are making huge profits from technologies originally created with taxpayer money.”
Google’s algorithm was developed with funding from the National Science Foundation, and the internet came from DARPA funding. The same is true for touch-screen displays, GPS, and Siri. From this the tech giants have created de facto monopolies while evading the type of regulation that would rein in monopolies in any other industry. And their business model is built on taking advantage of the habits and private information of the taxpayers who funded the technologies in the first place.
There’s nothing to argue with here. It’s a succinct summary of the tragedy of the commons that lies at the heart of surveillance capitalism and, indeed, that of neoliberalism itself.
Mariana also accurately describes the business model of these companies, albeit without focusing on the actual mechanism by which the data is gathered to begin with3:
Facebook’s and Google’s business models are built on the commodification of personal data, transforming our friendships, interests, beliefs, and preferences into sellable propositions. … The so-called sharing economy is based on the same idea.
So far, so good.
But then, things quickly take a very wrong turn:
There is indeed no reason why the public’s data should not be owned by a public repository that sells the data to the tech giants, rather than vice versa.
There is every reason why we shouldn’t do this.
Mariana’s analysis is fundamentally flawed in two respects: First, it ignores a core injustice in surveillance capitalism – violation of privacy – that her proposed recommendation would have the effect of normalising. Second, it perpetuates a fundamental false dichotomy – that there is no other way to design technology than the way Silicon Valley and surveillance capitalists design technology – which then means that there is no mention of the true alternatives: free and open, decentralised, interoperable ethical technologies.
The core injustice that Mariana’s piece ignores is that the business model of surveillance capitalists like Google and Facebook is based on the violation of a fundamental human right. When she says “let’s not forget that a large part of the technology and necessary data was created by all of us” it sounds like we voluntarily got together to create a dataset for the common good by revealing the most intimate details of our lives through having our behaviour tracked and aggregated. In truth, we did no such thing.
We might have resigned ourselves to being farmed by the likes of Google and Facebook because we have no other choice but that’s not a healthy definition of consent by any standard. If 99.99999% of all investment goes into funding surveillance-based technology (and it does), then people have neither a true choice nor can they be expected to give any meaningful consent to being tracked and profiled. Surveillance capitalism is the norm today. It is mainstream technology. It’s what we funded and what we built.
It is also fundamentally unjust.
There is a very important reason why the public’s data should not be owned by a public repository that sells the data to the tech giants because it’s not the public’s data, it is personal data and it should never have been collected by a third party to begin with. You might hear the same argument from people who say that we must nationalise Google or Facebook.
No, no, no, no, no, no, no! The answer to the violation of personhood by corporations isn’t violation of personhood by government, it’s not violating personhood to begin with.
That’s not to say that we cannot have a data commons. In fact, we must. But we must learn to make a core distinction between data about people and data about the world around us.
Our fundamental error when talking about data is that we use a single term when referring to both information about people as well as information about things. And yet, there is a world of difference between data about a rock and data about a human being. I cannot deprive a rock of its freedom or its life, I cannot emotionally or physically hurt a rock, and yet I can do all those things to people. When we posit what is permissible to do with data, if we are not specific in whether we are talking about rocks or people, one of those two groups is going to get the short end of the stick and it’s not going to be the rocks.
Here is a simple rule of thumb:
Data about individuals must belong to the individuals themselves. Data about the commons must belong to the commons.
I implore anyone working in this area – especially professors writing books and looking to shape public policy – to understand and learn this core distinction.
I mentioned above that the second fundamental flaw in Mariana’s article is that it perpetuates a false dichotomy. That false dichotomy is that the Silicon Valley/surveillance capitalist model of building modern/digital/networked technology is the only possible way to build modern/digital/networked technology and that we must accept it as a given.
This is patently false.
It’s true that all modern technology works by gathering data. That’s not the problem. The core question is “who owns and controls that data and the technology by which it is gathered?” The answer to that question today is “corporations do.” Corporations like Google and Facebook own and control our data not because of some inevitable characteristic of modern technology but because of how they designed their technology in line with the needs of their business model.
Specifically, surveillance capitalists like Google and Facebook design proprietary and centralised technologies to addict people and lock them in. In such systems, your data originates in a place you do not own. On “other people’s computers,” as the Free Software Foundation calls it. Or on “the cloud” as we colloquially reference it.
The crucial point here, however, is that this toxic way of building modern technology is not the only way to design and build modern technology.
We know how to build free and open, decentralised, and interoperable systems where your data originates in a place that you – as an individual – own and control.
In other words, we know how to build technology where the algorithms remain on your own devices and where you are not farmed for personal information to begin with.
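As a deliberately minimal sketch of that claim, and nothing more: the same kind of 'personalisation' that is usually computed on a corporate server can be computed on the device itself, from data that never leaves it. The file name and fields below are hypothetical.

```python
# A minimal local-first sketch: the data stays on the device and the
# "algorithm" runs where the data lives. No network call appears anywhere.
import json
from collections import Counter

with open("my_listening_history.json") as f:       # hypothetical file, stored only on this device
    plays = json.load(f)                           # e.g. [{"artist": "..."}, {"artist": "..."}]

top = Counter(p["artist"] for p in plays).most_common(3)
print("You might enjoy more by:", [artist for artist, _ in top])
```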
To say that we must take as given that some third party will gather our personal data is to capitulate to surveillance capitalism. It is to accept the false dichotomy that either we have surveillance-based technology or we forego modern technology.
This is neither true, nor necessary, nor acceptable.
We can and we must build ethical technology instead.
As I’m increasingly hearing these defeatist arguments that inherently accept surveillance as a foregone conclusion of modern technology, I want to reiterate what a true solution looks like.
There are two things we must do to create an ethical alternative to surveillance capitalism:
Whether they are the punk rockers of the tech world or its ragamuffins – and perhaps a little bit of both – what is certain is that they lead a precarious existence on the fringes of mainstream technology. They rely on anything from personal finances to selling the things they make, to crowdfunding and donations – and usually combinations thereof – to eke out an existence that both challenges and hopes to alter the shape of mainstream technology (and thus society) to make it fairer, kinder, and more just.
While they build everything from computers and phones (Puri.sm) to federated social networks (Mastodon) and decentralised alternatives to the centralised Web (DAT), they do so usually with little or no funding whatsoever. And many are a single personal tragedy away from not existing at all.
Meanwhile, we use taxpayer money in the EU to fund surveillance-based startups. Startups which, if they succeed, will most likely be bought by larger US-based surveillance capitalists like Google and Facebook. If they fail, on the other hand, the European taxpayer foots the bill. Europe, bamboozled by and living under the digital imperialism of Silicon Valley, has become its unpaid research and development department.
This must change.
Ethical technology does not grow on trees. Venture capitalists will not fund it. Silicon Valley will not build it.
A meaningful counterpoint to surveillance capitalism that protects human rights and democracy will not come from China. If we fail to create one in Europe then I’m afraid that humankind is destined for centuries of feudal strife. If it survives the unsustainable trajectory that this social system has set it upon, that is.
If we want ethical technological infrastructure – and we should, because the future of our human rights, democracy, and quite possibly that of the species depends on it – then we must fund and build it.
The answer to surveillance capitalism isn’t to better distribute the rewards of its injustices or to normalise its practices at the state level.
The answer to surveillance capitalism is a socio-techno-economic system that is just at its core. To create the technological infrastructure for such a system, we must fund independent organisations from the common purse to work for the common good to build ethical technology to protect individual sovereignty and nurture a healthy commons.
The post Out of the Frying Pan and Into the Fire appeared first on P2P Foundation.
Cory Doctorow: The pending update to the EU Copyright Directive is coming up for a committee vote on June 20 or 21 and a parliamentary vote either in early July or late September. While the directive fixes some longstanding problems with EU rules, it creates much, much larger ones: problems so big that they threaten to wreck the Internet itself.
Under Article 13 of the proposal, sites that allow users to post text, sounds, code, still or moving images, or other copyrighted works for public consumption will have to filter all their users’ submissions against a database of copyrighted works. Sites will have to pay to license the technology to match submissions to the database, and to identify near matches as well as exact ones. Sites will be required to have a process to allow rightsholders to update this list with more copyrighted works.
Even under the best of circumstances, this presents huge problems. Algorithms that do content-matching are frankly terrible at it. The Made-in-the-USA version of this is YouTube’s Content ID system, which improperly flags legitimate works all the time, but still gets flak from entertainment companies for not doing more.
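To see why "near matches" are so error-prone, here is a toy sketch of the sort of matching logic Article 13 effectively mandates; it is not any vendor's real system, the texts are invented, and the threshold is arbitrary. Uploads are reduced to overlapping word n-grams, and anything whose overlap with a registered work crosses the threshold gets blocked, so a legitimate quotation made for the purpose of criticism trips the filter exactly as a copy would.

```python
# A toy near-match filter: shingle the text into word trigrams and block
# anything whose Jaccard overlap with a registered work exceeds a threshold.
def shingles(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

registered = "the quick brown fox jumps over the lazy dog every single morning"
review = "the poem claims the quick brown fox jumps over the lazy dog every single morning which I doubt"

THRESHOLD = 0.3    # arbitrary cut-off; real filters tune this, and still misfire
score = similarity(registered, review)
print(score, "blocked" if score > THRESHOLD else "allowed")   # the quoting review gets blocked
```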
There are lots of legitimate reasons for Internet users to upload copyrighted works. You might upload a clip from a nightclub (or a protest, or a technical presentation) that includes some copyrighted music in the background. Or you might just be wearing a t-shirt with your favorite album cover in your Tinder profile. You might upload the cover of a book you’re selling on an online auction site, or you might want to post a photo of your sitting room in the rental listing for your flat, including the posters on the wall and the picture on the TV.
Wikipedians have even more specialised reasons to upload material: pictures of celebrities, photos taken at newsworthy events, and so on.
But the bots that Article 13 mandates will not be perfect. In fact, by design, they will be wildly imperfect.
Article 13 punishes any site that fails to block copyright infringement, but it won’t punish people who abuse the system. There are no penalties for falsely claiming copyright over someone else’s work, which means that someone could upload all of Wikipedia to a filter system (for instance, one of the many sites that incorporate Wikipedia’s content into their own databases) and then claim ownership over it on Twitter, Facebook and WordPress, and everyone else would be prevented from quoting Wikipedia on any of those services until they sorted out the false claims. It will be a lot easier to make these false claims than it will be to figure out which of the hundreds of millions of copyright claims are real and which ones are pranks or hoaxes or censorship attempts.
Article 13 also leaves you out in the cold when your own work is censored thanks to a malfunctioning copyright bot. Your only option when you get censored is to raise an objection with the platform and hope they see it your way—but if they fail to give real consideration to your petition, you have to go to court to plead your case.
Article 13 gets Wikipedia coming and going: not only does it create opportunities for unscrupulous or incompetent people to block the sharing of Wikipedia’s content beyond its bounds, it could also require Wikipedia to filter submissions to the encyclopedia and its surrounding projects, like Wikimedia Commons. The drafters of Article 13 have tried to carve Wikipedia out of the rule, but thanks to sloppy drafting, they have failed: the exemption is limited to “noncommercial activity”. Every file on Wikipedia is licensed for commercial use.
Then there’s the websites that Wikipedia relies on as references. The fragility and impermanence of links is already a serious problem for Wikipedia’s crucial footnotes, but after Article 13 becomes law, any information hosted in the EU might disappear—and links to US mirrors might become infringing—at any moment thanks to an overzealous copyright bot. For these reasons and many more, the Wikimedia Foundation has taken a public position condemning Article 13.
Speaking of references: the problems with the new copyright proposal don’t stop there. Under Article 11, each member state will get to create a new copyright in news. If it passes, in order to link to a news website, you will either have to do so in a way that satisfies the limitations and exceptions of all 28 laws, or you will have to get a license. This is fundamentally incompatible with any sort of wiki (obviously), much less Wikipedia.
It also means that the websites that Wikipedia relies on for its reference links may face licensing hurdles that would limit their ability to cite their own sources. In particular, news sites may seek to withhold linking licenses from critics who want to quote from them in order to analyze, correct and critique their articles, making it much harder for anyone else to figure out where the positions are in debates, especially years after the fact. This may not matter to people who only pay attention to news in the moment, but it’s a blow to projects that seek to present and preserve long-term records of noteworthy controversies. And since every member state will get to make its own rules for quotation and linking, Wikipedia posts will have to satisfy a patchwork of contradictory rules, some of which are already so severe that they’d ban any items in a “Further Reading” list unless the article directly referenced or criticized them.
The controversial measures in the new directive have been tried before. For example, link taxes were tried in Spain and Germany and they failed, and publishers don’t want them. Indeed, the only country to embrace this idea as workable is China, where mandatory copyright enforcement bots have become part of the national toolkit for controlling public discourse.
Articles 13 and 11 are poorly thought through, poorly drafted, unworkable—and dangerous. The collateral damage they will impose on every realm of public life can’t be overstated. The Internet, after all, is inextricably bound up in the daily lives of hundreds of millions of Europeans and an entire constellation of sites and services will be adversely affected by Article 13. Europe can’t afford to place education, employment, family life, creativity, entertainment, business, protest, politics, and a thousand other activities at the mercy of unaccountable algorithmic filters. If you’re a European concerned about these proposals, here’s a tool for contacting your MEP.
The post The EU’s Copyright Proposal is Extremely Bad News for Everyone, Even (Especially!) Wikipedia appeared first on P2P Foundation.
The following texts are taken from Algorithm Observatory’s website.
We know that social computing algorithms are used to categorize us, but the way they do so is not always transparent. To take just one example, ProPublica recently uncovered that Facebook allows housing advertisers to exclude users by race.
Even so, there are no simple and accessible resources for us, the public, to study algorithms empirically, and to engage critically with the technologies that are shaping our daily lives in such profound ways.
That is why we created Algorithm Observatory.
Part media literacy project and part citizen experiment, the goal of Algorithm Observatory is to provide a collaborative online lab for the study of social computing algorithms. The data collected through this site is analyzed to compare how a particular algorithm handles data differently depending on the characteristics of users.
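To give a sense of what that comparison looks like in practice, here is a minimal sketch in Python. It is not Algorithm Observatory’s actual code: the records, the field names (user_group, saw_housing_ad) and the numbers are hypothetical stand-ins for the kind of observations volunteers might contribute. It simply groups observations by a user characteristic and compares how often each group was shown a given ad.

# Minimal sketch: compare how often two groups of users were shown an ad.
# All records and field names below are hypothetical illustrations.
from collections import defaultdict

observations = [
    {"user_group": "A", "saw_housing_ad": True},
    {"user_group": "A", "saw_housing_ad": True},
    {"user_group": "A", "saw_housing_ad": False},
    {"user_group": "B", "saw_housing_ad": True},
    {"user_group": "B", "saw_housing_ad": False},
    {"user_group": "B", "saw_housing_ad": False},
]

shown = defaultdict(int)
total = defaultdict(int)
for obs in observations:
    total[obs["user_group"]] += 1
    shown[obs["user_group"]] += obs["saw_housing_ad"]  # True counts as 1

for group in sorted(total):
    rate = shown[group] / total[group]
    print(f"group {group}: ad shown to {rate:.0%} of participants")

A large gap between the groups’ rates would be exactly the kind of disparity the project aims to surface for closer scrutiny.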
Algorithm Observatory is a work in progress. This prototype only allows users to explore Facebook advertising algorithms, and the functionality is limited. We are currently looking for funding to realize the project’s full potential: to allow anyone to study any social computing algorithm.
This project was conceived and is currently being developed by Dr. Ulises Mejias, Associate Professor at SUNY Oswego.
Initial funding for the prototype was generously provided by LINGOs/Humentum.
Holly Reitmeier is the research assistant, and Tahira Abdo is the project assistant.
We would also like to thank students in Prof. Mejias’ BRC 421/521 and HON 301 classes for helping us test the prototype.
Data generated through this site (i.e., the data included in the Results page) is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license. You can use it in any reports or projects you want, but please cite Algorithm Observatory as the source.
This is a prototype, which only begins to showcase the things that Algorithm Observatory will be able to do in the future.
Eventually, the website will allow anyone to design an experiment involving a social computing algorithm. The platform will allow researchers to recruit volunteer participants, who will be able to contribute content to the site securely and anonymously. Researchers will then be able to conduct an analysis to compare how the algorithm handles users differently depending on individual characteristics. The results will be shared by publishing a report evaluating the social impact of the algorithm. All data and reports will become publicly available and open for comments and reviews. Researchers will be able to study any algorithm, because the site does not require direct access to the source code, but relies instead on empirical observation of the interaction between the algorithm and volunteer participants.
We are currently seeking funding to develop the full version of the project.
For more information, please email [email protected].
Intentionally, we do not have any social media accounts.
The post Project of the Day: The Algorithm Observatory appeared first on P2P Foundation.
]]>The post Cathy O’Neil on Algorithms as Harmful Weapons of Math Destruction appeared first on P2P Foundation.
]]>Algorithms decide who gets a loan, who gets a job interview, who gets insurance and much more — but they don’t automatically make things fair. Mathematician and data scientist Cathy O’Neil coined a term for algorithms that are secret, important and harmful: “weapons of math destruction.” Learn more about the hidden agendas behind the formulas.
In 2008, as a hedge-fund quant, mathematician Cathy O’Neil saw firsthand how really, really bad math could lead to financial disaster. Disillusioned, O’Neil became a data scientist and eventually joined Occupy Wall Street’s Alternative Banking Group.
With her popular blog mathbabe.org, O’Neil emerged as an investigative journalist. Her acclaimed book Weapons of Math Destruction details how opaque, black-box algorithms rely on biased historical data to do everything from sentencing defendants to hiring workers. In 2017, O’Neil founded the consulting firm ORCAA to audit algorithms for racial, gender and economic inequality.
“When there is wrongdoing in fields that are both complex and opaque, it often takes a whistle-blower to inform the public. That’s exactly what former quant trader turned social activist Cathy O’Neil has become for the world of Big Data.” — Time, August 29, 2016
The post Cathy O’Neil on Algorithms as Harmful Weapons of Math Destruction appeared first on P2P Foundation.
]]>The post Google: The world’s biggest employer? appeared first on P2P Foundation.
]]>“I was recently alerted to a documentary on Dutch TV highlighting malpractice among locksmiths in the Netherlands. The focus of the program was on the exorbitant prices locksmiths charge for door opening and lock replacement. The problem was traced to competition between companies that use Google ad-words to reach customers. Many of these companies do nothing more than broker contact between a customer and a handyman who may or may not have professional accreditation and the skills to open a lock. In many instances, such ‘independent workers’ are not registered with the Chamber of Commerce or paying tax. Nor do they provide warranties on their services and products. The brokers may charge up to 70% for the mere act of running a website and employing someone to answer the telephone.
There was a time when we just had the Yellow Pages as a middleman, and they caught on quickly to the fact that some sectors are totally dependent on advertising to get customers, and were quick to exploit this. One way they went about it was by getting companies offering the same sort of services or products to compete with each other for a prominent place on a page. So, for instance, a butcher might pay a few hundred Euro per year for an A5-size ad, but a locksmith could expect to pay 10,000 Euro for the same space. I know for a fact that representatives would visit competing companies on the same day to prevent them from contacting each other and finding out what each was paying for their ads. Clever, but not clever enough: I wrote to as many of my competitors as possible and offered to work together, to form a cartel, to prevent the Yellow Pages from setting us up by getting us to bid against each other, and this actually worked. I had less success, though, when I realized how things were developing with Google ad-words.
The days of the Yellow Pages are now long gone, but a similar situation exists with respect to the top results on Google’s search engine, where the first 2-4 results are dominated by Google’s own advertisements, Google Ads. The problem with this, in case you can’t already smell it, is that Google not only controls a good 90% of the market for search results, it also makes money by promoting its own results. The only thing that prevents it from charging money to show ‘organic’ search results (the results of an open search on a keyword) is its pretense of providing a ‘neutral’ search engine, or of providing ‘relevant’ results based on algorithms that attempt to probe which results you ‘really need.’
The problem with this situation is that it effectively casts Google as an employer. First of all, Google charges somewhere around 38 to 50 Euro per click for locksmith keywords, generating huge income and a powerful incentive to grab more of the market. These expenses are of course passed on to the customer: when a company sends out a locksmith to your house, somewhere between 30 and 40% of the fee may go directly to Google. Add to this the fact that Google pays virtually no tax in any country on earth, and what we have here is a recipe for totalitarian control of employment. Indeed, in this sense Google is already the world’s biggest employer.
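To make the arithmetic behind that 30-40% figure concrete, here is a back-of-the-envelope calculation. The click price, conversion rate and job fee below are illustrative assumptions, not figures taken from the documentary or from Google.

# Back-of-the-envelope estimate of the share of a locksmith's fee that
# goes to advertising. All numbers are assumed for illustration only.
cost_per_click = 40.0   # Euro paid to Google for one ad click
clicks_per_job = 1.0    # assume every click turns into a call-out
job_fee = 120.0         # Euro the customer pays for the call-out

ad_cost_per_job = cost_per_click * clicks_per_job
share_to_google = ad_cost_per_job / job_fee
print(f"about {share_to_google:.0%} of the fee goes to ad spend")  # ~33%

If only one click in two actually turns into a job, the advertising share doubles, which suggests the 30-40% estimate is, if anything, conservative.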
From the point of view of the early adopter, Google’s ad-words product offered the advantage of being able to pay one’s way to the top of the search results. At the time, however, around the year 2000, Google ads were displayed fairly discreetly in the sidebar on the right, and all organic search results were displayed in the main content section.
For as little as one cent per click, you could start advertising. You filled out a form saying how much you were prepared to pay for a click on a specific keyword or search term, and whenever someone typed that keyword into a browser, Google would display your ad, discreetly, in the sidebar. But there was a catch: for a cent more, your competitors could outbid you, and then their ad would be displayed above yours. This inevitably led to companies attempting to outbid each other; within a year prices had risen to 1-2 Euro per click, and we have now reached 25-40 Euro, depending on the time of day, the keyword, the number of competitors, the number of displays and so on.
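A toy model of that bidding dynamic makes the escalation easy to see. This is a sketch under simplified assumptions, not Google’s actual auction (which also weighs ad quality and other factors), and the bidders’ valuations are invented for illustration.

# Toy model: rivals keep topping the current bid by one increment until
# only the bidder who values the click most is still willing to raise.
# Amounts are in cents; the valuations are invented for illustration.
def escalate(start_cents, increment_cents, values_cents):
    bid = start_cents
    while True:
        willing = [v for v in values_cents if v >= bid + increment_cents]
        if len(willing) < 2:        # no rival left to outbid the leader
            return bid
        bid += increment_cents      # a rival tops the current bid

# three locksmiths who value a click at 30, 35 and 40 Euro respectively
price = escalate(start_cents=1, increment_cents=1, values_cents=[3000, 3500, 4000])
print(f"the top slot settles around {price / 100:.2f} Euro per click")

The price climbs until only one bidder is still willing to raise, settling near the second-highest valuation, which is why a market that starts at a cent per click can end up at tens of Euro.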
I predicted this system would rapidly force up prices and contacted my competitors to get them to agree to boycott it and rely on organic search results instead. The results were predictable: big companies with lots of employees were seeking to consolidate their place in the new marketplace, and they didn’t give a hoot, because they thought they were outbidding their competitors. Of course, what they didn’t really think about was what the future would look like when 40-50% of their hard-earned cash would be going straight to Google.
I have been contemplating how to beat this system for a long time. I have managed to keep my company afloat, and even to stay in the top percentage of organic search results with a very simple website and good SEO. But the problem is becoming too complex to handle:
Google’s algorithms are becoming ever more complex and unfathomable. Search results now show the companies and businesses closest to you, which means that competition has shifted to setting up hundreds of fake websites for every district, and soon probably for every street in my hometown. Google keeps producing new products, and they are getting so technically complex that specialists have to be called in to work on SEO (search engine optimization), on Google+, Google Maps and Google Locations. Companies try to keep up with these changes and lose track of where their businesses are registered, what their passwords are, and so on. It’s total mayhem. But there is another problem:
Screen real estate on the new generation of mobile devices is extremely valuable. Google is only one search engine, but it is the most widely used, so let’s take it as an example: if you type in ‘locksmith’ and the name of your city, the first 3-4 results at the top of your screen are commercial Google Ads. Whatever screen space is left is reserved for ‘organic’ search results. These are the unpaid listings of websites offering locksmith services, but they include the results for companies already advertising commercially with ad-words. In effect, clients are not being offered a ‘neutral’ view of what is available on the web, but a very small selection of all legitimate businesses, many of whom are dependent on Google. If you want to get away from those results, you need to click through several pages, but who in their right mind is going to do that in an emergency? When you are locked out in the middle of the night and it’s raining, you are going to go for the first results you get. And so Google owns and controls that market.
In an effort to beat the system, I have gone back to real-world advertising, using stickers to market our services and updating the design of my website to convey the fact that we are not only reliable, but also local, friendly and small-scale. Of course, it’s only a matter of time before Google starts competing for public spaces too, putting up its own branded screens displaying ‘useful information.’ Indeed, with the introduction of Chrome and Google Glass, Google will be able to project its ads directly into your head.
It’s time for public (not necessarily government) regulation of the Internet. The problem, of course, is that government institutions are notoriously inept at understanding the Internet, let alone regulating it in ways that are effective, that achieve what they set out to do without being overly inclusive or imprecise. More problematic is the fact that our present masters are also deeply in the pocket of companies like Google, not prepared to regulate them, make them pay taxes or hold them publicly accountable, because that would conflict with the free-market ideology according to which a total lack of government oversight or control equates with a “level playing field.”
In the meantime, I cannot think of any better way to beat the Googles of this world than to advise everyone to shop local, to barter, and to make sure the names of real-world businesses are remembered, shared and praised. Don’t click on Google ads, don’t even look at them! And for God’s sake, start using other search engines!
After putting some more thought into the matter, I realize that questions are being raised by many different groups about Google’s backroom deals with the tax departments of various European states and with the European Parliament. Google is not alone in raking in huge amounts of cash while paying next to nothing in taxes. To offer just a small example: in the Netherlands consumers pay up to 21% VAT, and at the very least we would expect the financial institutions we rely on to contribute that amount in taxes. Value Added Tax is supposed to go to the tax department, but these companies are registered offshore, in places like the Cayman Islands, where they pay no taxes at all. The financial institutions I use to run my company, as well as for personal ends, are no exception.
All of these companies route their transactions through offshore banks, and none of them contribute much to the local economies over which they increasingly exercise control.
This situation is increasingly ludicrous and dangerous. Money is withdrawn from local economies and ends up in the hands of big corporations. As a consequence, governments are finding it harder to finance public spending on infrastructure, education, transport, media, communications, health care and pensions, and so the balance tips ever more towards the privatization of institutions and resources that should morally and practically belong to everybody collectively.
We need to see more class actions and litigation by private individuals and small and mid-scale businesses to force tax authorities to open up the accounts of these big companies to public inspection, and to force the companies to pay taxes or lose access to markets. This is of course exactly what the TTIP aims to prevent, effectively depriving governments and citizens of the ability to challenge such financial profiteering through courts and legislative bodies.”
The post Google: The world’s biggest employer? appeared first on P2P Foundation.
]]>