Algorithms, Capital, and the Automation of the Common (15 January 2019)

“autonomous ones not subsumed by or subjected to the capitalist drive to accumulation and exploitation.”

This essay was written by Tiziana Terranova and originally published on Euronomade.info.

Tiziana Terranova: This essay is the outcome of a research process which involves a series of Italian institutions of autoformazione of post-autonomist inspiration (‘free’ universities engaged in grassroots organization of public seminars, conferences, workshops etc) and anglophone social networks of scholars and researchers engaging with digital media theory and practice officially affiliated with universities, journals and research centres, but also artists, activists, precarious knowledge workers and the like. It refers to a workshop which took place in London in January 2014, hosted by the Digital Culture Unit at the Centre for Cultural Studies (Goldsmiths’ College, University of London). The workshop was the outcome of a process of reflection and organization that started with the Italian free university collective Uninomade 2.0 in early 2013 and continued across mailing lists and websites such as Euronomade, Effimera, Commonware, I quaderni di San Precario and others. More than a traditional essay, then, it aims to be a synthetic but hopefully also inventive document which plunges into a distributed ‘social research network’ articulating a series of problems, theses and concerns at the crossing between political theory and research into science, technology and capitalism.

What is at stake in the following is the relationship between ‘algorithms’ and ‘capital’—that is, the increasing centrality of algorithms ‘to organizational practices arising out of the centrality of information and communication technologies stretching all the way from production to circulation, from industrial logistics to financial speculation, from urban planning and design to social communication’1. These apparently esoteric mathematical structures have also become part of the daily life of users of contemporary digital and networked media. Most users of the Internet daily interface with, or are subjected to, the powers of algorithms such as Google’s PageRank (which sorts the results of our search queries) or Facebook’s EdgeRank (which automatically decides in which order we should get our news on our feed), not to mention the many other less known algorithms (Appinions, Klout, Hummingbird, PKC, Perlin noise, Cinematch, KDP Select and many more) which modulate our relationship with data, digital devices and each other. This widespread presence of algorithms in the daily life of digital culture, however, is only one of the expressions of the pervasiveness of computational techniques as they become increasingly co-extensive with processes of production, consumption and distribution displayed in logistics, finance, architecture, medicine, urban planning, infographics, advertising, dating, gaming, publishing and all kinds of creative expressions (music, graphics, dance etc).
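
To make the abstraction concrete, here is a minimal sketch of the kind of computation a PageRank-style ranking performs: importance scores are repeatedly redistributed across a graph of links until they stabilise, so that pages pointed to by many well-ranked pages rise to the top. The toy graph, damping factor and function names below are illustrative assumptions written in Python for brevity; they are not Google’s actual implementation.

    # Minimal power-iteration sketch of a PageRank-style ranking.
    # The toy graph, damping factor and iteration count are illustrative
    # assumptions, not Google's production algorithm.
    def pagerank(links, damping=0.85, iterations=50):
        # links: dict mapping each page to the list of pages it links to
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}  # start from uniform scores
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                targets = outlinks or pages          # dangling pages spread evenly
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share        # pass importance along links
            rank = new_rank
        return rank

    toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    print(pagerank(toy_web))  # "c" accumulates the highest score

Even in this toy form, the score of any single page is an aggregate effect of the whole graph of links, which is one concrete sense in which such algorithms capture collective activity rather than any individual contribution.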

The staging of the encounter between ‘algorithms’ and ‘capital’ as a political problem invokes the possibility of breaking with the spell of ‘capitalist realism’ (the idea that capitalism constitutes the only possible economy), while at the same time claiming that new ways of organizing the production and distribution of wealth need to seize on scientific and technological developments2. Going beyond the opposition between state and market, public and private, the concept of the common is used here as a way to instigate the thought and practice of a possible post-capitalist mode of existence for networked digital media.

Algorithms, Capital and Automation

Looking at algorithms from a perspective that seeks the constitution of a new political rationality around the concept of the ‘common’ means engaging with the ways in which algorithms are deeply implicated in the changing nature of automation. Automation is described by Marx as a process of absorption into the machine of the ‘general productive forces of the social brain’ such as ‘knowledge and skills’3, which hence appear as an attribute of capital rather than as the product of social labour. Looking at the history of the implication of capital and technology, it is clear how automation has evolved away from the thermo-mechanical model of the early industrial assembly line toward the electro-computational dispersed networks of contemporary capitalism. Hence it is possible to read algorithms as part of a genealogical line that, as Marx put it in the ‘Fragment on Machines’, starting with the adoption of technology by capitalism as fixed capital, pushes the former through several metamorphoses ‘whose culmination is the machine, or rather, an automatic system of machinery…set in motion by an automaton, a moving power that moves itself’4. The industrial automaton was clearly thermodynamical, and gave rise to a system ‘consisting of numerous mechanical and intellectual organs so that workers themselves are cast merely as its conscious linkages’5. The digital automaton, however, is electro-computational; it puts ‘the soul to work’ and involves primarily the nervous system and the brain, and comprises ‘possibilities of virtuality, simulation, abstraction, feedback and autonomous processes’6. The digital automaton unfolds in networks consisting of electronic and nervous connections so that users themselves are cast as quasi-automatic relays of a ceaseless information flow. It is in this wider assemblage, then, that algorithms need to be located when discussing the new modes of automation.

Quoting a textbook of computer science, Andrew Goffey describes algorithms as ‘the unifying concept for all the activities which computer scientists engage in…and the fundamental entity with which computer scientists operate’7. An algorithm can be provisionally defined as the ‘description of the method by which a task is to be accomplished’ by means of sequences of steps or instructions, sets of ordered steps that operate on data and computational structures. As such, an algorithm is an abstraction, ‘having an autonomous existence independent of what computer scientists like to refer to as “implementation details,” that is, its embodiment in a particular programming language for a particular machine architecture’8. It can vary in complexity from the simplest set of rules described in natural language (such as those used to generate coordinated patterns of movement in smart mobs) to the most complex mathematical formulas involving all kinds of variables (as in the famous Monte Carlo algorithm used to solve problems in nuclear physics and later also applied to stock markets and now to the study of non-linear technological diffusion processes). At the same time, in order to work, algorithms must exist as part of assemblages that include hardware, data, data structures (such as lists, databases, memory, etc.), and the behaviours and actions of bodies. For the algorithm to become social software, in fact, ‘it must gain its power as a social or cultural artifact and process by means of a better and better accommodation to behaviors and bodies which happen on its outside’.9
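
Since the Monte Carlo method is cited above as an example of a complex algorithm, a minimal sketch may help show what ‘sets of ordered steps that operate on data’ means in practice: the classic toy case estimates π by random sampling, and the same sample-and-aggregate pattern underlies its uses in physics and finance. The sample count and function name are illustrative choices, not a reference implementation.

    import random

    # Minimal Monte Carlo sketch: estimate pi by drawing random points in the
    # unit square and counting how many land inside the quarter circle.
    # Illustrative only; real Monte Carlo applications in physics or finance
    # follow the same sample-and-aggregate pattern at far larger scale.
    def estimate_pi(samples=100_000):
        inside = 0
        for _ in range(samples):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:   # point lands inside the quarter circle
                inside += 1
        return 4.0 * inside / samples  # inside/samples approximates pi/4, so scale by 4

    print(estimate_pi())  # prints a value close to 3.14

The same procedure could be written in any language or run on any architecture, which is precisely what is meant above by an algorithm having an existence independent of its ‘implementation details’.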

Furthermore, as contemporary algorithms become increasingly exposed to larger and larger data sets (and in general to a growing entropy in the flow of data also known as Big Data), they are, according to Luciana Parisi, becoming something more than mere sets of instructions to be performed: ‘infinite amounts of information interfere with and re-program algorithmic procedures…and data produce alien rules’10. It seems clear from this brief account, then, that algorithms are neither a homogeneous set of techniques, nor do they guarantee ‘the infallible execution of automated order and control’11.

From the point of view of capitalism, however, algorithms are mainly a form of ‘fixed capital’—that is, they are just means of production. They encode a certain quantity of social knowledge (abstracted from that elaborated by mathematicians, programmers, but also users’ activities), but they are not valuable per se. In the current economy, they are valuable only in as much as they allow for the conversion of such knowledge into exchange value (monetization) and its (exponentially increasing) accumulation (the titanic quasi-monopolies of the social Internet). In as much as they constitute fixed capital, algorithms such as Google’s PageRank and Facebook’s EdgeRank appear ‘as a presupposition against which the value-creating power of the individual labour capacity is an infinitesimal, vanishing magnitude’12. And that is why calls for individual remuneration of users for their ‘free labor’ are misplaced. It is clear that for Marx what needs to be compensated is not the individual work of the user, but the much larger powers of social cooperation thus unleashed, and that this compensation implies a profound transformation of the grip that the social relation that we call the capitalist economy has on society.

From the point of view of capital, then, algorithms are just fixed capital, means of production finalized to achieve an economic return. But, as with all technologies and techniques, that does not mean this is all they are. Marx explicitly states that even as capital appropriates technology as the most effective form of the subsumption of labor, that does not mean that this is all that can be said about it. Its existence as machinery, he insists, is not ‘identical with its existence as capital… and therefore it does not follow that subsumption under the social relation of capital is the most appropriate and ultimate social relation of production for the application of machinery’.13 It is then essential to remember that the instrumental value that algorithms have for capital does not exhaust the ‘value’ of technology in general and algorithms in particular—that is, their capacity to express not just ‘use value’ as Marx put it, but also aesthetic, existential, social, and ethical values. Wasn’t it this clash between the necessity of capital to reduce software development to exchange value, thus marginalizing the aesthetic and ethical values of software creation, that pushed Richard Stallman and countless hackers and engineers towards the Free and Open Source Movement? Isn’t the enthusiasm that animates hack-meetings and hacker-spaces fueled by the energy liberated from the constraints of ‘working’ for a company in order to remain faithful to one’s own aesthetics and ethics of coding?

Contrary to some variants of Marxism which tend to identify technology completely with ‘dead labor’, ‘fixed capital’ or ‘instrumental rationality’, and hence with control and capture, it seems important to remember how, for Marx, the evolution of machinery also indexes a level of development of productive powers that are unleashed but never totally contained by the capitalist economy. What interested Marx (and what makes his work still relevant to those who strive for a post-capitalist mode of existence) is the way in which, so he claims, the tendency of capital to invest in technology to automate and hence reduce its labor costs to a minimum potentially frees up a ‘surplus’ of time and energy (labor) or an excess of productive capacity in relation to the basic, important and necessary labor of reproduction (a global economy, for example, should first of all produce enough wealth for all members of a planetary population to be adequately fed, clothed, cured and sheltered). However, what characterizes a capitalist economy is that this surplus of time and energy is not simply released, but must be constantly reabsorbed in the cycle of production of exchange value leading to increasing accumulation of wealth by the few (the collective capitalist) at the expense of the many (the multitudes).

Automation, then, when seen from the point of view of capital, must always be balanced with new ways to control (that is, absorb and exhaust) the time and energy thus released. It must produce poverty and stress when there should be wealth and leisure. It must make direct labour the measure of value even when it is apparent that science, technology and social cooperation constitute the source of the wealth produced. It thus inevitably leads to the periodic and widespread destruction of this accumulated wealth, in the form of psychic burnout, environmental catastrophe and physical destruction of the wealth through war. It creates hunger where there should be satiety, it puts food banks next to the opulence of the super-rich. That is why the notion of a post-capitalist mode of existence must become believable, that is, it must become what Maurizio Lazzarato described as an enduring autonomous focus of subjectivation. What a post-capitalist commonism then can aim for is not only a better distribution of wealth compared to the unsustainable one that we have today, but also a reclaiming of ‘disposable time’—that is, time and energy freed from work to be deployed in developing and complicating the very notion of what is ‘necessary’.

The history of capitalism has shown that automation as such has not reduced the quantity and intensity of labor demanded by managers and capitalists. On the contrary, in as much as technology is only a means of production to capital, where it has been able to deploy other means, it has not innovated. For example, industrial technologies of automation in the factory do not seem to have recently experienced any significant technological breakthroughs. Most industrial labor today is still heavily manual, automated only in the sense of being hooked up to the speed of electronic networks of prototyping, marketing and distribution; and it is rendered economically sustainable only by political means—that is, by exploiting geo-political and economic differences (arbitrage) on a global scale and by controlling migration flows through new technologies of the border. The state of things in most industries today is intensified exploitation, which produces an impoverished mode of mass production and consumption that is damaging to the body, subjectivity, social relations and the environment alike. As Marx put it, disposable time released by automation should allow for a change in the very essence of the ‘human’ so that the new subjectivity is allowed to return to the performing of necessary labor in such a way as to redefine what is necessary and what is needed.

It is not then simply about arguing for a ‘return’ to simpler times, but on the contrary a matter of acknowledging that growing food and feeding populations, constructing shelter and adequate housing, learning and researching, caring for the children, the sick and the elderly requires the mobilization of social invention and cooperation. The whole process is thus transformed from a process of production by the many for the few steeped in impoverishment and stress to one where the many redefine the meaning of what is necessary and valuable, while inventing new ways of achieving it. This corresponds in a way to the notion of ‘commonfare’ as recently elaborated by Andrea Fumagalli and Carlo Vercellone, implying, in the latter’s words, ‘the socialization of investment and money and the question of the modes of management and organisation which allow for an authentic democratic reappropriation of the institutions of Welfare…and the ecologic re-structuring of our systems of production’13. We need to ask then not only how algorithmic automation works today (mainly in terms of control and monetization, feeding the debt economy) but also what kind of time and energy it subsumes and how it might be made to work once taken up by different social and political assemblages—autonomous ones not subsumed by or subjected to the capitalist drive to accumulation and exploitation.

The Red Stack: Virtual Money, Social Networks, Bio-Hypermedia

In a recent intervention, digital media and political theorist Benjamin H. Bratton has argued that we are witnessing the emergence of a new nomos of the earth, where older geopolitical divisions linked to territorial sovereign powers are intersecting the new nomos of the Internet and new forms of sovereignty extending in electronic space14. This new heterogeneous nomos involves the overlapping of national governments (China, United States, European Union, Brazil, Egypt and the like), transnational bodies (the IMF, the WTO, the European banks and NGOs of various types), and corporations such as Google, Facebook, Apple, Amazon, etc., producing differentiated patterns of mutual accommodation marked by moments of conflict. Drawing on the organizational structure of computer networks or ‘the OSI network model, upon which the TCP/IP stack and the global internet itself is indirectly based’, Bratton has developed the concept and/or prototype of the ‘stack’ to define the features of ‘a possible new nomos of the earth linking technology, nature and the human.’15 The stack supports and modulates a kind of ‘social cybernetics’ able to compose ‘both equilibrium and emergence’. As a ‘megastructure’, the stack implies a ‘confluence of interoperable standards-based complex material-information systems of systems, organized according to a vertical section, topographic model of layers and protocols…composed equally of social, human and “analog” layers (chthonic energy sources, gestures, affects, user-actants, interfaces, cities and streets, rooms and buildings, organic and inorganic envelopes) and informational, non-human computational and “digital” layers (multiplexed fiber optic cables, datacenters, databases, data standards and protocols, urban-scale networks, embedded systems, universal addressing tables)’16.

In this section, drawing on Bratton’s political prototype, I would like to propose the concept of the ‘Red Stack’—that is, a new nomos for the post-capitalist common. Materializing the ‘red stack’ involves engaging with (at least) three levels of socio-technical innovation: virtual money, social networks, and bio-hypermedia. These three levels, although ‘stacked’, that is, layered, are to be understood at the same time as interacting transversally and nonlinearly. They constitute a possible way to think about an infrastructure of autonomization linking together technology and subjectivation.

Virtual money

The contemporary economy, as Christian Marazzi and others have argued, is founded on a form of money which has been turned into a series of signs, with no fixed referent (such as gold) to anchor them, explicitly dependent on the computational automation of simulational models, screen media with automated displays of data (indexes, graphics etc) and algo-trading (bot-to-bot transactions) as its emerging mode of automation17. As Toni Negri also puts it, ‘money today—as abstract machine—has taken on the peculiar function of supreme measure of the values extracted out of society in the real subsumption of the latter under capital’18.

Since ownership and control of capital-money (different, as Maurizio Lazzarato reminds us, from wage-money, in its capacity to be used not only as a means of exchange, but as a means of investment empowering certain futures over others) is crucial to maintaining populations bonded to the current power relation, how can we turn financial money into the money of the common? An experiment such as Bitcoin demonstrates that in a way ‘the taboo on money has been broken’19 and that beyond the limits of this experience, forkings are already developing in different directions. What kind of relationship can be established between the algorithms of money-creation and ‘a constituent practice which affirms other criteria for the measurement of wealth, valorizing new and old collective needs outside the logic of finance’?20
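
For readers unfamiliar with how a Bitcoin-style algorithm of money-creation works at the technical level, the following toy sketch shows the core of proof-of-work mining: searching for a nonce whose hash of the block data meets a difficulty target, so that issuing new units requires demonstrable computational expenditure. The difficulty level and block contents are arbitrary illustrations, not Bitcoin’s actual consensus parameters.

    import hashlib

    # Toy proof-of-work sketch in the spirit of Bitcoin mining: find a nonce such
    # that the SHA-256 hash of (block data + nonce) starts with a given number of
    # zero hex digits. Difficulty and block contents are illustrative assumptions,
    # not Bitcoin's real protocol.
    def mine(block_data, difficulty=4):
        target_prefix = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(target_prefix):
                return nonce, digest   # evidence that computational work was expended
            nonce += 1

    print(mine("alice pays bob 1 coin"))

Even this toy version makes one of the politics alluded to below visible: whoever commands more hashing hardware and cheaper electricity finds valid nonces faster, so fully ‘automated’ money-creation still rewards concentrated material resources.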

Current attempts to develop new kinds of cryptocurrencies must be judged, valued and rethought on the basis of this simple question as posed by Andrea Fumagalli: Is the currency created not limited solely to being a means of exchange, but can it also affect the entire cycle of money creation – from finance to exchange?21.

Does it allow speculation and hoarding, or does it promote investment in post-capitalist projects and facilitate freedom from exploitation, autonomy of organization etc.? What is becoming increasingly clear is that algorithms are an essential part of the process of creation of the money of the common, but that algorithms also have politics (what are the gendered politics of individual ‘mining’, for example, and of the complex technical knowledge and machinery implied in mining bitcoins?). Furthermore, the drive to completely automate money production in order to escape the fallacies of subjective factors and social relations might cause such relations to come back in the form of speculative trading. In the same way as financial capital is intrinsically linked to a certain kind of subjectivity (the financial predator narrated by Hollywood cinema), so an autonomous form of money needs to be both jacked into and productive of a new kind of subjectivity not limited to the hacking milieu as such, but at the same time oriented not towards monetization and accumulation but towards the empowering of social cooperation. Other questions that the design of the money of the common might involve are: Is it possible to draw on the current financialization of the Internet by corporations such as Google (with its AdSense/AdWords programme) to subtract money from the circuit of capitalist accumulation and turn it into a money able to finance new forms of commonfare (education, research, health, environment etc)? What are the lessons to be learned from crowdfunding models and their limits in thinking about new forms of financing autonomous projects of social cooperation? How can we perfect and extend experiments such as that carried out by the Inter-Occupy movement during Hurricane Katrina in turning social networks into crowdfunding networks which can then be used as logistical infrastructure able to move not only information, but also physical goods?22

Social Networks

Over the past ten years, digital media have undergone a process of becoming social that has introduced genuine innovation in relation to previous forms of social software (mailing lists, forums, multi-user domains, etc). If mailing lists, for example, drew on the communicational language of sending and receiving, social network sites and the diffusion of (proprietary) social plug-ins have turned the social relation itself into the content of new computational procedures. When sending and receiving a message, we can say that algorithms operate outside the social relation as such, in the space of the transmission and distribution of messages; but social network software intervenes directly in the social relationship. Indeed, digital technologies and social network sites ‘cut into’ the social relation as such—that is, they turn it into a discrete object and introduce a new supplementary relation.23

If, with Gabriel Tarde and Michel Foucault, we understand the social relation as an asymmetrical relation involving at least two poles (one active and the other receptive) and characterized by a certain degree of freedom, we can think of actions such as liking and being liked, writing and reading, looking and being looked at, tagging and being tagged, and even buying and selling as the kind of conducts that transindividuate the social (they induce the passage from the pre-individual through the individual to the collective). In social network sites and social plug-ins these actions become discrete technical objects (like buttons, comment boxes, tags etc) which are then linked to underlying data structures (for example the social graph) and subjected to the power of ranking of algorithms. This produces the characteristic spatio-temporal modality of digital sociality today: the feed, an algorithmically customized flow of opinions, beliefs, statements, desires expressed in words, images, sounds etc. Much reviled in contemporary critical theory for their supposedly homogenizing effect, these new technologies of the social, however, also open the possibility of experimenting with many-to-many interaction and thus with the very processes of individuation. Political experiments (see the various internet-based parties such as the Five Star Movement, the Pirate Party and Partido X) draw on the powers of these new socio-technical structures in order to produce massive processes of participation and deliberation; but, as with Bitcoin, they also show the far from resolved processes that link political subjectivation to algorithmic automation. They can function, however, because they draw on widely socialized new knowledges and crafts (how to construct a profile, how to cultivate a public, how to share and comment, how to make and post photos, videos, notes, how to publicize events) and on ‘soft skills’ of expression and relation (humour, argumentation, sparring) which are not implicitly good or bad, but present a series of affordances or degrees of freedom of expression for political action that cannot be left to capitalist monopolies. However, it is not only a matter of using social networks to organize resistance and revolt, but also a question of constructing a social mode of self-information which can collect and reorganize existing drives towards autonomous and singular becomings. Given that algorithms, as we have said, cannot be unlinked from wider social assemblages, their materialization within the red stack involves the hijacking of social network technologies away from a mode of consumption, so that social networks can act as a distributed platform for learning about the world, fostering and nurturing new competences and skills, fostering planetary connections, and developing new ideas and values.
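
As a rough illustration of how the feed discretises and ranks these conducts, here is a toy EdgeRank-style scoring function: each item is scored from the affinity between viewer and author, a weight for the type of interaction, and a decay for age. All names, weights and the decay rate are illustrative assumptions; the actual feed-ranking algorithms are proprietary and far more elaborate.

    import math
    import time

    # Toy EdgeRank-style feed scoring: score = affinity * interaction weight * decay.
    # Weights, decay rate and field names are illustrative assumptions; real feed
    # ranking systems are proprietary and far more complex.
    INTERACTION_WEIGHTS = {"comment": 4.0, "share": 3.0, "like": 1.0}

    def score(post, viewer_affinity, now):
        age_hours = (now - post["created_at"]) / 3600.0
        decay = math.exp(-0.1 * age_hours)                 # older items count for less
        weight = INTERACTION_WEIGHTS.get(post["kind"], 1.0)
        return viewer_affinity * weight * decay

    def rank_feed(posts, affinities, now):
        return sorted(posts,
                      key=lambda p: score(p, affinities.get(p["author"], 0.1), now),
                      reverse=True)

    now = time.time()
    posts = [
        {"author": "alice", "kind": "comment", "created_at": now - 7200},
        {"author": "bob", "kind": "like", "created_at": now - 600},
    ]
    print(rank_feed(posts, {"alice": 0.9, "bob": 0.4}))

The point is not this particular formula but the general mechanism: conducts such as liking and commenting become discrete, weighted inputs to an automated ordering of what each user gets to see.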

Bio-hypermedia

The term bio-hypermedia, coined by Giorgio Griziotti, identifies the ever more intimate relation between bodies and devices which is part of the diffusion of smartphones, tablet computers and ubiquitous computation. As digital networks shift away from the centrality of the desktop or even laptop machine towards smaller, portable devices, a new social and technical landscape emerges around ‘apps’ and ‘clouds’ which directly ‘intervene in how we feel, perceive and understand the world’.24 Bratton defines the ‘apps’ for platforms such as Android and Apple as interfaces or membranes linking individual devices to large databases stored in the ‘cloud’ (massive data processing and storage centres owned by large corporations).25

This topological continuity has allowed for the diffusion of downloadable apps which increasingly modulate the relationship of bodies and space. Such technologies not only ‘stick to the skin and respond to the touch’ (as Bruce Sterling once put it), but create new ‘zones’ around bodies which now move through ‘coded spaces’ overlaid with information, able to locate other bodies and places within interactive, informational visual maps. New spatial ecosystems emerging at the crossing of the ‘natural’ and the artificial allow for the activation of a process of chaosmotic co-creation of urban life.26 Here again we can see how apps are, for capital, simply a means to ‘monetize’ and ‘accumulate’ data about the body’s movement while subsuming it ever more tightly in networks of consumption and surveillance. However, this subsumption of the mobile body under capital does not necessarily imply that this is the only possible use of these new technological affordances. Turning bio-hypermedia into components of the red stack (the mode of reappropriation of fixed capital in the age of the networked social) implies drawing together current experimentation with hardware (Shenzhen phone-hacking technologies, maker movements, etc.) able to support a new breed of ‘imaginary apps’ (think for example about the apps devised by the artist collective Electronic Disturbance Theatre, which allow migrants to bypass border controls, or apps able to track the origin of commodities, their degrees of exploitation, etc.).

Conclusions

This short essay, a synthesis of a wider research process, aims to propose another strategy for the construction of a machinic infrastructure of the common. The basic idea is that information technologies, which comprise algorithms as a central component, do not simply constitute a tool of capital, but are simultaneously constructing new potentialities for postneoliberal modes of government and postcapitalist modes of production. It is a matter here of opening possible lines of contamination with the large movements of programmers, hackers and makers involved in a process of re-coding of network architectures and information technologies based on values other than exchange and speculation, but also of acknowledging the wide process of technosocial literacy that has recently affected large swathes of the world population. It is a matter, then, of producing a convergence able to extend the problem of the reprogramming of the Internet away from recent trends towards corporatisation and monetisation at the expense of users’ freedom and control. Linking bio-informational communication to issues such as the production of a money of the commons able to socialize wealth, against current trends towards privatisation, accumulation and concentration, and saying that social networks and diffused communicational competences can also function as means to organize cooperation and produce new knowledges and values, means seeking a new political synthesis which moves us away from the neoliberal paradigm of debt, austerity and accumulation. This is not a utopia, but a program for the invention of constituent social algorithms of the common.

In addition to the sources cited above, and the texts contained in this volume, we offer the following expandable bibliographical toolkit or open desiring biblio-machine. (Instructions: pick, choose and subtract/add to form your own assemblage of self-formation for the purposes of materialization of the red stack):

— L. Baroniant and C. Vercellone, Moneta Del Comune e Reddito Sociale Garantito (2013), Uninomade.

— M. Bauwens, The Social Web and Its Social Contracts: Some Notes on Social Antagonism in Netarchical Capitalism (2008), Re-Public Re-Imaging Democracy.

— F. Berardi and G. Lovink, A call to the army of love and to the army of software (2011), Nettime.

— R. Braidotti, The posthuman (Cambridge: Polity Press, 2013).

— G. E. Coleman, Coding Freedom: The Ethics and Aesthetics of Hacking (Princeton and Oxford: Princeton University Press, 2012).

— A. Fumagalli, Trasformazione del lavoro e trasformazioni del welfare: precarietà e welfare del comune (commonfare) in Europa, in P. Leon and R. Realfonso (eds), L’Economia della precarietà (Rome: Manifestolibri, 2008), 159–74.

— G. Giannelli and A. Fumagalli, Il fenomeno Bitcoin: moneta alternativa o moneta speculativa? (2013), I Quaderni di San Precario.

— G. Griziotti, D. Lovaglio and T. Terranova, Netwar 2.0: Verso una convergenza della “calle” e della rete (2012), Uninomade 2.0.

— E. Grosz, Chaos, Territory, Art (New York: Columbia University Press, 2012).

— F. Guattari, Chaosmosis: An Ethico-Aesthetic Paradigm (Indianapolis, IN: Indiana University Press, 1995).

— S. Jourdan, Game-over Bitcoin: Where Is the Next Human-Based Digital Currency? (2014).

— M. Lazzarato, Les puissances de l’invention (Paris: Les Empêcheurs de penser en rond, 2004).

— M. Lazzarato, The Making of the Indebted Man (Los Angeles: Semiotext(e), 2013).

— G. Lovink and M. Rasch (eds), Unlike Us Reader: Social Media Monopolies and their Alternatives (Amsterdam: Institute of Network Culture, 2013).

— A. Mackenzie, Programming Subjects in the Regime of Anticipation: Software Studies and Subjectivity, Subjectivity 6 (2013), 391–405.

— L. Manovich, The Poetics of Augmented Space, Visual Communication 5:2 (2006), 219–40.

— S. Mezzadra and B. Neilson, Border as Method or the Multiplication of Labor (Durham, NC: Duke University Press, 2013).

— P. D. Miller aka DJ Spooky and S. Matviyenko, The Imaginary App (Cambridge, MA: MIT Press, forthcoming).

— A. Negri, Acting in common and the limits of capital (2014), in Euronomade.

— A. Negri and M. Hardt, Commonwealth (Cambridge, MA: Belknap Press, 2009).

— M. Pasquinelli, Google’s PageRank Algorithm: A Diagram of Cognitive Capitalism and the Rentier of the Common Intellect (2009).

— B. Scott, Heretic’s Guide to Global Finance: Hacking the Future of Money (London: Pluto Press, 2013).

— G. Simondon, On the Mode of Existence of Technical Objects (1958), University of Western Ontario

— R. Stallman, Free Software: Free Society. Selected Essays of Richard M. Stallman (Free Software Foundation, 2002).

— A. Toscano, Gaming the Plumbing: High-Frequency Trading and the Spaces of Capital (2013), in Mute.

— I. Wilkins and B. Dragos, Destructive Distraction? An Ecological Study of High Frequency Trading, in Mute.



  1. In the words of the programme of the workshop from which this essay originated: http://quaderni.sanprecario.info/2014/01/workshop-algorithms/ ↩
  2. M. Fisher, Capitalist Realism: Is There No Alternative? (London: Zer0 Books, 2009); A. Williams and N. Srnicek, ‘#Accelerate: Manifesto for an Accelerationist Politics’, this volume XXX-XXX. ↩
  3. K. Marx, ‘Fragment on Machines’, this volume, XXX–XXX. ↩
  4. Ibid., XXX. ↩
  5. Ibid., XXX. ↩
  6. M. Fuller, Software Studies: A Lexicon (Cambridge, MA: The MIT Press, 2008); F. Berardi, The Soul at Work: From Alienation to Autonomy (Cambridge, MA: MIT Press, 2009) ↩
  7. A. Goffey, ‘Algorithm’, in Fuller (ed), Software Studies, 15–17: 15. ↩
  8. Ibid. ↩
  9. Fuller, Introduction to Fuller (ed), Software Studies, 5 ↩
  10. L. Parisi, Contagious Architecture: Computation, Aesthetics, Space (Cambridge, Mass. and Sidney: MIT Press, 2013), x. ↩
  11. Ibid., ix. ↩
  12. Marx, XXX. ↩
  13. C. Vercellone, ‘From the crisis to the “commonfare” as new mode of production’, in special section on Eurocrisis (ed. G. Amendola, S. Mezzadra and T. Terranova), Theory, Culture and Society, forthcoming; also A. Fumagalli, ‘Digital (Crypto) Money and Alternative Financial Circuits: Lead the Attack to the Heart of the State, sorry, of Financial Market’ ↩
  14. B. Bratton, On the Nomos of the Cloud (2012). ↩
  15. Ibid. ↩
  16. Ibid. ↩
  17. C. Marazzi, Money in the World Crisis: The New Basis of Capitalist Power ↩
  18. T. Negri, Reflections on the Manifesto for an Accelerationist Politics (2014), Euronomade ↩
  19. D. Roio (Jaromil), Bitcoin, la fine del tabù della moneta (2014), in I Quaderni di San Precario. ↩
  20. S. Lucarelli, Il principio della liquidità e la sua corruzione. Un contributo alla discussione su algoritmi e capitale (2014), in I Quaderni di san Precario ↩
  21. A. Fumagalli, Commonfare: Per la riappropriazione del libero accesso ai beni comuni (2014), in Doppio Zero ↩
  22. Common Ground Collective, Food Not Bombs and Occupy Movement form Coalition to help Isaac & Katrina Victims (2012), Interoccupy.net ↩
  23. B. Stiegler, The Most Precious Good in the Era of Social Technologies, in G. Lovink and M. Rasch (eds), Unlike Us Reader: Social Media Monopolies and Their Alternatives (Amsterdam: Institute of Network Culture, 2013), 16–30. ↩
  24. G. Griziotti, Biorank: algorithms and transformations in the bios of cognitive capitalism (2014), in I Quaderni di San Precario; also S. Portanova, Moving without a Body (Boston, MA: MIT Press, 2013) ↩
  25. B. Bratton, On Apps and Elementary Forms of Interfacial Life: Object, Image, Superimposition  ↩
  26. S. Iaconesi and O. Persico, The Co-Creation of the City: Re-programming Cities using Real-Time User-Generated Content ↩

Photo by ahisgett

How nonprofits are organizing tech workers for social change (29 September 2018)


Cross-posted from Shareable.

Nithin Coca: As tensions between tech companies and their surrounding communities in cities like San Francisco, Seattle, and Austin continue to escalate, there’s an effort underway to find meaningful, collaborative solutions. From driving up the costs of housing to increasing traffic congestion, employees of large-scale tech corporations have been blamed for intensifying socio-economic inequalities. But some workers are taking matters into their own hands. Recently, Google dropped its Project Maven collaboration with the Pentagon after employee pressure.

Coworker.org, a nonprofit based in the U.S. that enables workers to start campaigns to change their workplaces, received more inquiries from employees at tech firms about using the platform following the election in 2016. Yana Calou, the group’s engagement and training manager, said: “They were really concerned about their jobs being used towards things that they were not really comfortable with.”

Another organization leading this effort in the San Francisco Bay Area, home to several of the world’s largest technology companies, is the TechEquity Collaborative, which is taking more of a grassroots approach.

“No one was looking at the rank and file tech worker as a constituent group to be organized in a political way,” says Catherine Bracy, executive director of the TechEquity Collaborative. “There is a critical mass of tech workers who feel a huge sense of shame and guilt about the role that the industry is playing in creating these inequitable conditions, and want to do something different about it. They are hungry for opportunities to learn and be out there and contributing to solutions.”

TechEquity’s model — as its names states — is a collaborative one. Instead of dictating solutions, the organization works on connecting tech workers with affected communities to foster a shared approach to reaching potential solutions.

“It’s not just a political strategy, it’s an end in of itself,” Bracy says. “We need to develop stronger relationships based on trust if we’re going to live in a world where tech can be a value-add for everybody, not just the people who are getting rich from it.”

This connects with the challenges facing another key group — gig workers. Many gig workers have seen their livelihoods directly impacted by the growth of platforms like Uber, Taskrabbit, and Amazon Mechanical Turk. Coworker.org is also helping gig and contract workers organize campaigns. One of those campaigns, started by the App-Based Drivers Association, a group for drivers working for various app-based companies, targeted Uber, which refused to make in-app tipping available to all of its drivers based in the U.S. Organizers believe this campaign played a role in the ride-hailing giant adding tipping in June 2017.

Coworker.org’s platform allows for a similar function — workers can build networks within the platform to stay connected after the completion of a campaign. For gig workers who work in isolation, this can be a powerful organizing tool. There are currently approximately 6,300 Uber drivers on Coworker.org. Calou sees potential for these networks to increase the power of gig or contract workers who are often at the periphery of the tech industry.

“One of things that we’re doing is thinking about is how can workers at these companies join employee networks where anyone has ever signed a petition on Uber then has a platform where they can connect with each other and have a more sustained, long-term view of things they want to get together and work on,” says Calou.

For Bracy, building worker power within the industry and partnerships with communities everywhere are key steps towards restoring the promise of the internet and digital technology to connect people.

“I still think the internet is the most powerful for democratizing communication in human history, and we’ve seen a lot of bad, but there is a lot of potential for good, but we have to do the work to pull the industry in that direction to make sure that promise of the internet is kept,” Bracy says.

Header image by Raquel Torres, courtesy of TechEquity Collaborative

What does Google know about me? (26 September 2018)

This post by Gabriel Weinberg, CEO & Founder of DuckDuckGo (2008-present), is republished from Quora.

Did you know that unlike searching on DuckDuckGo, when you search on Google, they keep your search history forever? That means they know every search you’ve ever done on Google. That alone is pretty scary, but it’s just the shallow end of the very deep pool of data that they try to collect on people.

What most people don’t realize is that even if you don’t use any Google products directly, they’re still trying to track as much as they can about you. Google trackers have been found on 75% of the top million websites. This means they’re also trying to track most everywhere you go on the internet, trying to slurp up your browsing history!

Most people also don’t know that Google runs most of the ads you see across the internet and in apps – you know those ones that follow you around everywhere? Yup, that’s Google, too. They aren’t really a search company anymore – they’re a tracking company. They are tracking as much as they can for these annoying and intrusive ads, including recording every time you see them, where you saw them, if you clicked on them, etc.

But even that’s not all…

If You Use Google Products

If you do use Google products, they try to track even more. In addition to tracking everything you’ve ever searched for on Google (e.g. “weird rash”), Google also tracks every video you’ve ever watched on YouTube. Many people actually don’t know that Google owns YouTube; now you know.

And if you use Android (yeah, Google owns that too), then Google is also usually tracking even more.

If you use Gmail, they of course also have all your e-mail messages. If you use Google Calendar, they know all your schedule. There’s a pattern here: For all Google products (Hangouts, Music, Drive, etc.), you can expect the same level of tracking: that is, pretty much anything they can track, they will.

Oh, and if you use Google Home, they also store a live recording of every command you (or anyone else) have ever said to your device! Yes, you heard that right (err… they heard it) – you can check out all the recordings on your Google activity page.

Essentially, if you allow them to, they’ll track pretty close to, well, everything you do on the Internet. In fact, even if you tell them to stop tracking you, Google has been known to not really listen, for example with location history.

You Become the Product

Why does Google want all of your information anyway? Simple: as stated, Google isn’t a search company anymore, they’re a tracking company. All of these data points allow Google to build a pretty robust profile about you. By keeping such close tabs on everything you do, they may, in some ways, know you better than you know yourself.

And Google uses your personal profile to sell ads, not only on their search engine, but also on over three million other websites and apps. Every time you visit one of these sites or apps, Google is following you around with hyper-targeted ads.

It’s exploitative. By allowing Google to collect all this info, you are allowing hundreds of thousands of advertisers to bid on serving you ads based on your sensitive personal data. Everyone involved is profiting from your information, except you. You are the product.

It doesn’t have to be this way. It is entirely possible for a web-based business to be profitable without making you the product – since 2014, DuckDuckGo has been profitable without storing or sharing any personal information on people at all. You can read more about our business model here.

The Myth of “Nothing to Hide”

Some may argue that they have “nothing to hide,” so they are not concerned with the amount of information Google has collected and stored on them, but that argument is fundamentally flawed for many reasons.

Everyone has information they want to keep private: Do you close the door when you go to the bathroom? Privacy is about control over your personal information. You don’t want it in the hands of everyone, and certainly don’t want people profiting on it without your consent or participation.

In addition, privacy is essential to democratic institutions like voting and everyday situations such as getting medical care and performing financial transactions. Without it, there can be significant harms.

On an individual level, lack of privacy leads to putting you into a filter bubble, getting manipulated by ads, discrimination, fraud, and identity theft. On a societal level, it can lead to deepened polarization and societal manipulation like we’ve unfortunately been seeing multiply in recent years.

You Can Live Google Free

Basically, Google tries to track too much. It’s creepy and simply more information than one company should have on anyone.

Thankfully, there are many good ways to reduce your Google footprint, even close to zero! If you are ready to live without Google, we have recommendations for services to replace their suite of products, as well as instructions for clearing your Google search history. It might feel like you are trapped in the Google-verse, but it is possible to break free.

For starters, just switching the search engine for all your searches goes a long way. After all, you share your most intimate questions with your search engine; at the very least, shouldn’t those be kept private? If you switch to the DuckDuckGo app and extension you will not only make your searches anonymous, but also block Google’s most widespread and invasive trackers as you navigate the web.

If you’re unfamiliar with DuckDuckGo, we are an Internet privacy company that empowers you to seamlessly take control of your personal information online, without any tradeoffs. We operate a search engine alternative to Google at http://duckduckgo.com, and offer a mobile app and desktop browser extension to protect you from Google, Facebook and other trackers, no matter where you go on the Internet.

We’re also trying to educate users through our blog, social media, and a privacy “crash course” newsletter.


Photo by stockcatalog www.thoughtcatalog.com

Matt Stoller on Modern Monopolies (10 September 2018)

Republished from Econtalk

Matt Stoller of the Open Market Institute talks with EconTalk host Russ Roberts about the growing influence of Google, Facebook, and Amazon on commercial and political life. Stoller argues that these large firms have too much power over our options as consumers and creators as well as having a large impact on our access to information.

About Matt Stoller

Matt Stoller is a Fellow at the Open Markets Institute. He is writing a book on monopoly power in the 20th century for Simon and Schuster. Previously, he was a Senior Policy Advisor and Budget Analyst to the Senate Budget Committee. He also worked in the U.S. House of Representatives on financial services policy, including Dodd-Frank, the Federal Reserve, and the foreclosure crisis. He has written for the New York Times, the Washington Post, The New Republic, Vice, and Salon. He was a producer for MSNBC’s The Dylan Ratigan Show, and served as a writer and actor on the short-lived FX television series Brand X with Russell Brand. You can follow him on Twitter at @matthewstoller.

 

Header photo by GrungeTextures

Out of the Frying Pan and Into the Fire (4 August 2018)

Republished from Aral Balkan 

Mariana Mazzucato1 has an article in MIT Technology Review titled Let’s make private data into a public good.

Let’s not.

While Mariana’s criticisms of surveillance capitalism are spot on, her proposed remedy is as far from the mark as it possibly could be.

Yes, surveillance capitalism is bad

Mariana starts off by making the case, and rightly so, that surveillance capitalists2 like Google or Facebook “are making huge profits from technologies originally created with taxpayer money.”

Google’s algorithm was developed with funding from the National Science Foundation, and the internet came from DARPA funding. The same is true for touch-screen displays, GPS, and Siri. From this the tech giants have created de facto monopolies while evading the type of regulation that would rein in monopolies in any other industry. And their business model is built on taking advantage of the habits and private information of the taxpayers who funded the technologies in the first place.

There’s nothing to argue with here. It’s a succinct summary of the tragedy of the commons that lies at the heart of surveillance capitalism and, indeed, that of neoliberalism itself.

Mariana also accurately describes the business model of these companies, albeit without focusing on the actual mechanism by which the data is gathered to begin with3:

Facebook’s and Google’s business models are built on the commodification of personal data, transforming our friendships, interests, beliefs, and preferences into sellable propositions. … The so-called sharing economy is based on the same idea.

So far, so good.

But then, things quickly take a very wrong turn:

There is indeed no reason why the public’s data should not be owned by a public repository that sells the data to the tech giants, rather than vice versa.

There is every reason why we shouldn’t do this.

Mariana’s analysis is fundamentally flawed in two respects: First, it ignores a core injustice in surveillance capitalism – violation of privacy – that her proposed recommendation would have the effect of normalising. Second, it perpetuates a fundamental false dichotomy – that there is no other way to design technology than the way Silicon Valley and surveillance capitalists design technology – which then means that there is no mention of the true alternatives: free and open, decentralised, interoperable ethical technologies.

No, we must not normalise violation of privacy

The core injustice that Mariana’s piece ignores is that the business model of surveillance capitalists like Google and Facebook is based on the violation of a fundamental human right. When she says “let’s not forget that a large part of the technology and necessary data was created by all of us” it sounds like we voluntarily got together to create a dataset for the common good by revealing the most intimate details of our lives through having our behaviour tracked and aggregated. In truth, we did no such thing.

We were farmed.

We might have resigned ourselves to being farmed by the likes of Google and Facebook because we have no other choice but that’s not a healthy definition of consent by any standard. If 99.99999% of all investment goes into funding surveillance-based technology (and it does), then people have neither a true choice nor can they be expected to give any meaningful consent to being tracked and profiled. Surveillance capitalism is the norm today. It is mainstream technology. It’s what we funded and what we built.

It is also fundamentally unjust.

There is a very important reason why the public’s data should not be owned by a public repository that sells the data to the tech giants because it’s not the public’s data, it is personal data and it should never have been collected by a third party to begin with. You might hear the same argument from people who say that we must nationalise Google or Facebook.

No, no, no, no, no, no, no! The answer to the violation of personhood by corporations isn’t violation of personhood by government, it’s not violating personhood to begin with.

That’s not to say that we cannot have a data commons. In fact, we must. But we must learn to make a core distinction between data about people and data about the world around us.

Data about people ≠ data about rocks

Our fundamental error when talking about data is that we use a single term when referring to both information about people as well as information about things. And yet, there is a world of difference between data about a rock and data about a human being. I cannot deprive a rock of its freedom or its life, I cannot emotionally or physically hurt a rock, and yet I can do all those things to people. When we posit what is permissible to do with data, if we are not specific in whether we are talking about rocks or people, one of those two groups is going to get the short end of the stick and it’s not going to be the rocks.

Here is a simple rule of thumb:

Data about individuals must belong to the individuals themselves. Data about the commons must belong to the commons.

I implore anyone working in this area – especially professors writing books and looking to shape public policy – to understand and learn this core distinction.

There is an alternative

I mentioned above that the second fundamental flaw in Mariana’s article is that it perpetuates a false dichotomy. That false dichotomy is that the Silicon Valley/surveillance capitalist model of building modern/digital/networked technology is the only possible way to build modern/digital/networked technology and that we must accept it as a given.

This is patently false.

It’s true that all modern technology works by gathering data. That’s not the problem. The core question is “who owns and controls that data and the technology by which it is gathered?” The answer to that question today is “corporations do.” Corporations like Google and Facebook own and control our data not because of some inevitable characteristic of modern technology but because of how they designed their technology in line with the needs of their business model.

Specifically, surveillance capitalists like Google and Facebook design proprietary and centralised technologies to addict people and lock them in. In such systems, your data originates in a place you do not own. On “other people’s computers,” as the Free Software Foundation calls it. Or on “the cloud”, as we colloquially call it.

The crucial point here, however, is that this toxic way of building modern technology is not the only way to design and build modern technology.

We know how to build free and open, decentralised, and interoperable systems where your data originates in a place that you – as an individual – own and control.

In other words, we know how to build technology where the algorithms remain on your own devices and where you are not farmed for personal information to begin with.
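
To make that claim concrete, here is a minimal, purely illustrative sketch of the “local-first” pattern: the data and the algorithm that acts on it both live on the person’s own device, and nothing is sent to a third party. The file path and function names below are hypothetical and do not describe any particular project’s implementation.

    # Illustrative only: data and personalisation logic stay on the user's device.
    import json
    from collections import Counter
    from pathlib import Path

    PROFILE_PATH = Path.home() / ".my_app" / "interests.json"  # hypothetical local file

    def record_interest(topic: str) -> None:
        """Update the locally stored interest profile; no network call is made."""
        PROFILE_PATH.parent.mkdir(parents=True, exist_ok=True)
        counts = Counter(json.loads(PROFILE_PATH.read_text())) if PROFILE_PATH.exists() else Counter()
        counts[topic] += 1
        PROFILE_PATH.write_text(json.dumps(counts))

    def rank_feed(articles: list[dict]) -> list[dict]:
        """Personalise a feed on the device itself, using only the local profile."""
        counts = Counter(json.loads(PROFILE_PATH.read_text())) if PROFILE_PATH.exists() else Counter()
        return sorted(articles, key=lambda a: counts.get(a["topic"], 0), reverse=True)

The point is architectural: the same personalisation a cloud service performs on “other people’s computers” can be performed where the data already is.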

To say that we must take as given that some third party will gather our personal data is to capitulate to surveillance capitalism. It is to accept the false dichotomy that either we have surveillance-based technology or we forego modern technology.

This is neither true, nor necessary, nor acceptable.

We can and we must build ethical technology instead.

Regulate and replace

As I’m increasingly hearing these defeatist arguments that inherently accept surveillance as a foregone conclusion of modern technology, I want to reiterate what a true solution looks like.

There are two things we must do to create an ethical alternative to surveillance capitalism:

    1. Regulate the shit out of surveillance capitalists. The goal here is to limit their abuses and harm. This includes limiting their ability to gather, process, and retain data, as well as fining them meaningful amounts and even breaking them up.4
    2. Fund and build ethical alternatives. In other words, replace them with ethical alternatives. Ethical alternatives do exist today, but they do so mainly thanks to the extraordinary personal efforts of disjointed bands of so-called DIY rebels.

Whether they are the punk rockers of the tech world or its ragamuffins – and perhaps a little bit of both – what is certain is that they lead a precarious existence on the fringes of mainstream technology. They rely on anything from personal finances to selling the things they make, to crowdfunding and donations – and usually combinations thereof – to eke out an existence that both challenges and hopes to alter the shape of mainstream technology (and thus society) to make it fairer, kinder, and more just.

While they build everything from computers and phones (Puri.sm) to federated social networks (Mastodon) and decentralised alternatives to the centralised Web (DAT), they do so usually with little or no funding whatsoever. And many are a single personal tragedy away from not existing at all.

Meanwhile, we use taxpayer money in the EU to fund surveillance-based startups. Startups which, if they succeed, will most likely be bought by larger US-based surveillance capitalists like Google and Facebook. If they fail, on the other hand, the European taxpayer foots the bill. Europe, bamboozled by and living under the digital imperialism of Silicon Valley, has become its unpaid research and development department.

This must change.

Ethical technology does not grow on trees. Venture capitalists will not fund it. Silicon Valley will not build it.

A meaningful counterpoint to surveillance capitalism that protects human rights and democracy will not come from China. If we fail to create one in Europe then I’m afraid that humankind is destined for centuries of feudal strife. If it survives the unsustainable trajectory that this social system has set it upon, that is.

If we want ethical technological infrastructure – and we should, because the future of our human rights, democracy, and quite possibly that of the species depends on it – then we must fund and build it.

The answer to surveillance capitalism isn’t to better distribute the rewards of its injustices or to normalise its practices at the state level.

The answer to surveillance capitalism is a socio-techno-economic system that is just at its core. To create the technological infrastructure for such a system, we must fund independent organisations from the common purse to work for the common good to build ethical technology to protect individual sovereignty and nurture a healthy commons.


  1. According to the bio in the article: “Mariana Mazzucato is a professor in the economics of innovation and public value at University College London, where she directs the Institute for Innovation and Public Purpose.” The article I’m referencing is an edited excerpt from her new book The Value of Everything: Making and Taking in the Global Economy.
  2. Although she never explicitly uses that term in the article.
  3. Centralised architectures based on surveillance.
  4. Break them up, by all means. But don’t do anything silly like nationalising them (for all the reasons I mention in this post). Nationalising a surveillance-based corporation would simply shift the surveillance to the state. We must embrace the third alternative: funding and building technology that isn’t based on surveillance to begin with. In other words, free and open, decentralised, interoperable technology.

Photo by JForth

The post Out of the Frying Pan and Into the Fire appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/out-of-the-frying-pan-and-into-the-fire/2018/08/04/feed 0 72084
Are the Digital Commons condemned to become “Capital Commons”? https://blog.p2pfoundation.net/are-the-digital-commons-condemned-to-become-capital-commons/2018/08/03 https://blog.p2pfoundation.net/are-the-digital-commons-condemned-to-become-capital-commons/2018/08/03#respond Fri, 03 Aug 2018 08:00:00 +0000 https://blog.p2pfoundation.net/?p=72035 By Calimaq; original article in French translated by Maïa Dereva (with DeepL) and edited by Ann Marie Utratel Last week, Katherine Maher, the executive director of the Wikimedia Foundation, published a rather surprising article on the Wired site entitled: “Facebook and Google must do more to support Wikipedia”. The starting point of her reasoning was... Continue reading

The post Are the Digital Commons condemned to become “Capital Commons”? appeared first on P2P Foundation.

]]>
By Calimaq; original article in French translated by Maïa Dereva (with DeepL) and edited by Ann Marie Utratel


Last week, Katherine Maher, the executive director of the Wikimedia Foundation, published a rather surprising article on the Wired site entitled: “Facebook and Google must do more to support Wikipedia”. The starting point of her reasoning was to point out that Wikipedia content is increasingly being used by digital giants, such as Facebook or Google:

You may not realise how ubiquitous Wikipedia is in your everyday life, but its open, collaboratively-curated data is used across semantic, search and structured data platforms on the web. Voice assistants such as Siri, Alexa and Google Home source Wikipedia articles for general knowledge questions; Google’s knowledge panel features Wikipedia content for snippets and essential facts; Quora contributes to and utilises the Wikidata open data project to connect topics and improve user recommendations.

More recently, YouTube and Facebook have turned to Wikipedia for a new reason: to address their issues around fake news and conspiracy theories. YouTube said that they would begin linking to Wikipedia articles from conspiracy videos, in order to give users additional – often corrective – information about the topic of the video. And Facebook rolled out a feature using Wikipedia’s content to give users more information about the publication source of articles appearing in their feeds.

With Wikipedia being solicited more and more by these big players, Katherine Maher believes that they should contribute in return to help the project to guarantee its sustainability:

But this work isn’t free. If Wikipedia is being asked to help hold back the ugliest parts of the internet, from conspiracy theories to propaganda, then the commons needs sustained, long-term support – and that support should come from those with the biggest monetary stake in the health of our shared digital networks.

The companies which rely on the standards we develop, the libraries we maintain, and the knowledge we curate should invest back. And they should do so with significant, long-term commitments that are commensurate with the value we create. After all, it’s good business: the long-term stability of the commons means we’ll be around for continued use for many years to come.

As the non-profits that make the internet possible, we already know how to advocate for our values. We shouldn’t be afraid to stand up for our value.

An image that makes fun of a famous quote by Bill Gates, who had described the Linux project as “communist”. But today, it is Capital that produces or recovers digital Commons – starting with Linux – and maybe that shouldn’t make us laugh.

Digital commons: the problem of sustainability

There is something strange about the director of the Wikimedia Foundation saying this kind of thing. Wikipedia is in fact a project anchored in the philosophy of Free Software and placed under a license (CC-BY-SA) that allows commercial reuse, without discriminating between small and large players. The “SA”, for Share Alike, implies that derivative works made from Wikipedia content are licensed under the same license, but it does not prohibit commercial reuse. For Wikidata data, things go even further, since this project is licensed under CC0 and does not impose any conditions on reuse, not even a mention of the source.

So, in strictly legal terms, players like Facebook or Google are entitled to draw from the content and data of Wikimedia projects to reuse them for their own purposes, without having to contribute financially in return. If they do, it can only be on a purely voluntary basis, and that is the only thing Katherine Maher can hope for with her op-ed: that these companies become patrons by donating money to the Wikimedia Foundation. Google has already done so in the past, with a donation of $2 million in 2010 and another $1 million last year. Facebook, Apple, Microsoft and Google have also put in place a policy whereby these companies pledge to pay the Wikimedia Foundation the same amount as their individual employees donate.

Should digital giants do more and significantly address the long-term sustainability of the Digital Commons that Wikipedia represents? This question refers to reciprocity for the Commons, which is both absolutely essential and very ambivalent. If we broaden the perspective to free software, it is clear that these Commons have become an essential infrastructure without which the Internet could no longer function today (90% of the world’s servers run on Linux, 25% of websites use WordPress, etc.). But many of these projects suffer from maintenance and financing problems, because their development depends on communities whose means bear no relation to the size of the resources they make available to the whole world. This is shown very well in the book “What are our digital infrastructures based on? The invisible work of web makers” by Nadia Eghbal:

Today, almost all commonly used software depends on open source code, created and maintained by communities of developers and other talents. This code can be taken up, modified and used by anyone, company or individual, to create their own software. Shared, this code thus constitutes the digital infrastructure of today’s society… whose foundations, however, threaten to give way under the demand placed on them!

Indeed, in a world governed by technology – whether for Fortune 500 companies, governments, large software companies or startups – we keep increasing the burden on those who produce and maintain this shared infrastructure. However, as these communities are quite discreet, it has taken a long time for users to become aware of this.

Like physical infrastructure, however, digital infrastructure requires regular maintenance and servicing. Faced with unprecedented demand, if we do not support this infrastructure, the consequences will be many.

This situation corresponds to a form of tragedy of the Commons, but of a different nature from that which can strike material resources. Indeed, intangible resources, such as software or data, cannot by definition be over-exploited and they even increase in value as they are used more and more. But tragedy can strike the communities that participate in the development and maintenance of these digital commons. When the core of individual contributors shrinks and their energies are exhausted, information resources lose quality and can eventually wither away.

The progression of the “Capital Commons”

Market players are well aware of this problem, and when their activity depends on a Digital Commons, they usually end up contributing to its maintenance in return. The best known example of this is Linux, often rightly cited as one of the finest achievements of FOSS. As the cornerstone of the digital environment, the Linux operating system was eventually integrated into the strategies of large companies such as IBM, Samsung, Intel, RedHat, Oracle and many others (including today Microsoft, Google, Amazon and Facebook). Originally developed as a community project based on contributions from volunteer developers, Linux has profoundly changed in nature over time. Today, more than 90% of the contributions to the software are made by professional developers, paid by companies. The Tragedy of the Commons “by exhaustion” that threatens many Open Source projects has therefore been averted with regard to Linux, but only by “re-internalizing” contributors in the form of employees (a movement that is symmetrically opposite to that of uberization).

Main contributors to Linux in 2017. Individual volunteer contributors (affiliation “none”) now represent only 7.7% of project participants…

However, this situation is sometimes denounced as a degeneration of contributing projects that, over time, would become “Commons of capital” or “pseudo-Commons of capital”. As Christian Laval explained, for example, in an op-ed:

Large companies create communities of users or consumers to obtain opinions, suggestions and technical improvements. This is what we call the “pseudo-commons of capital”. Capital is capable of organizing forms of cooperation and sharing for its benefit. In a way, this is indirect and paradoxical proof of the fertility of the common, of its creative and productive capacity. It is a bit the same thing that allowed industrial take-off in the 19th century, when capitalism organised workers’ cooperation in factories and exploited it to its advantage.

While this criticism can quite legitimately be levelled at actors like Uber or AirBnB, who divert and capture collaborative dynamics for their own interests, it is more difficult to formulate against a project like Linux. Because the large companies that contribute to software development via their employees have not changed the license (GNU-GPL) under which the resource is placed, they can never claim exclusivity. To do so would call into question the shared usage rights allowing any actor, commercial or not, to use Linux. Thus, there is literally no appropriation of the Common or return to enclosure, even if the use of the software by these companies participates in the accumulation of Capital.

On the other hand, it is obvious that a project which depends for more than 90% of its contributions on salaried developers working for large companies is no longer “self-governed” as understood in Commons theory. Admittedly, project governance still formally belongs to the community of developers relying on the Linux Foundation, but you can imagine that the weight of the corporations’ interests must be felt, if only through the ties of subordination weighing on salaried developers. This structural state of economic dependence on these firms does make Linux a “Commons of capital”, although one not completely captured and retaining a certain relative autonomy.

How to guarantee the independence of digital Commons?

For a project like Wikipedia, things would probably be different if firms like Google or Facebook answered the call launched by Katherine Maher. The Wikipedia community has strict rules in place regarding paid contributions, which means that you would probably never see 90% of the content produced by employees. Company contributions would likely take the form of cash payments to the Wikimedia Foundation. However, the economic dependence would be no less strong; until now, Wikipedia has ensured its independence basically by relying on individual donations to cover the costs associated with maintaining the project’s infrastructure. This economic dependence would no doubt quickly become a political dependence – something the Wikimedia Foundation has, by the way, already been criticised for: a large number of personalities with direct or indirect links to Google have been included on its board, to the point of generating strong tensions with the community. The Mozilla Foundation, behind the Firefox browser, has sometimes received similar criticism: its dependence on Google funding has attracted rather virulent reproach and doubts about some of its strategic choices.

In the end, this question of the digital Commons’ state of economic dependence is relatively widespread. There are, in reality, very few free projects having reached a significant scale that have not become more or less “Capital Commons”. This progressive satellite-isation is likely to be further exacerbated by the fact that free software communities have placed themselves in a fragile situation by coordinating their work through infrastructures that can easily be captured by Capital. This is precisely what just happened with Microsoft’s $7.5 billion acquisition of GitHub. Some may have welcomed this acquisition as reflecting a real evolution of Microsoft’s strategy towards Open Source, or even as a sign that “free software has won”, as we sometimes hear.

Microsoft was already the firm devoting the most salaried jobs to Open Source software development (ahead of Facebook…)

But we can seriously doubt it. Although free software has acquired an infrastructural dimension today – to the point that even a landmark player in proprietary software like Microsoft can no longer ignore it – the developer communities still lack the means to secure their independence, whether individually (developers employed by large companies are in the majority) or collectively (a lot of free software depends on centralized platforms like GitHub for its development). Paradoxically, Microsoft has taken seriously Platform Cooperativism’s watchwords, which emphasize the importance of becoming the owner of the means of production in the digital environment in order to be able to create real alternatives. Over time, Microsoft has become one of the main users of GitHub for developing its own code; logically, it bought the platform to become its master. Meanwhile – and this is something of a grating irony – Trebor Scholz – one of the initiators, along with Nathan Schneider, of the Platform Cooperativism movement – has accepted one million dollars in funding from Google to develop his projects. This amounts to immediately making oneself dependent on one of the main actors of surveillance capitalism, seriously compromising any hope of building real alternatives.

One may wonder whether Microsoft has not understood the principles of Platform Cooperativism better than Trebor Scholz himself, one of its creators!

For now, Wikipedia’s infrastructure is solidly resilient, because the Wikimedia Foundation only manages the servers that host the collaborative encyclopedia’s contents. It holds no title to those contents, because of the free license under which they are placed. GitHub could be bought because it was a classic commercial enterprise, whereas the Wikimedia Foundation would not be able to sell itself, even if players like Google or Apple made an offer. The fact remains that Katherine Maher’s appeal for Google or Facebook funding risks weakening Wikipedia more than anything else, and I find it difficult to see anything positive in it for the Commons. In a way, I would even say that this kind of discourse contributes to the gradual dilution of the notion of Commons that we sometimes see today. We saw it recently with the “Tech For Good” summit organized in Paris by Emmanuel Macron, where actors like Facebook and Uber were invited to discuss their contribution “to the common good”. In the end, this approach is not so different from Katherine Maher’s, who asks that Facebook or Google participate in financing the Wikipedia project, while in no way being able to impose it on them. In both cases, what is very disturbing is that we are regressing to the era of industrial paternalism, as it was at the end of the 19th century, when the big capitalists launched “good works” on a purely voluntary basis, using philanthropy to compensate for the human and social damage caused by an unbridled market economy.

Making it possible to impose reciprocity for the Commons on Capital

The Commons are doomed to become nothing more than “Commons of Capital” if they do not give themselves the means to reproduce autonomously, without depending on the calculated generosity of large companies, who will always find a way to instrumentalize them and void them of their capacity to constitute a real alternative. An association like Framasoft has clearly understood this: after its program “Dégooglisons Internet”, aimed at creating tools to enable Internet users to break their dependence on GAFAMs, it has continued with the Contributopia campaign. This aims to raise public awareness of the need to create a contribution ecosystem that guarantees conditions of long-term sustainability for both individual contributors and collective projects. This is visible now, for example, with the participatory fundraising campaign organized to boost the development of PeerTube, software implementing a distributed architecture for video distribution that could eventually constitute a credible alternative to YouTube.

But with all due respect to Framasoft, it seems to me that the classic “libriste” (free culture activist) approach remains mired in serious contradictions, of which Katherine Maher’s article is also a manifestation. How can we launch a programme such as “Internet Negotiations” that thrashes the model of Surveillance Capitalism, and at the same time continue to defend licences that do not discriminate according to the nature of the actors who reuse resources developed by communities as common goods? There is a schizophrenia here, due to a certain form of blindness that has always marked the philosophy of the Libre in its grasp of economic issues. This in turn explains Katherine Maher’s – partly understandable – uneasiness at seeing Wikipedia’s content and data reused by players like Facebook or Google, who are at the origin of the centralization and commodification of the Internet.

To escape these increasingly problematic contradictions, we must give ourselves the means to defend the digital Commons sphere on a firmer basis than free licenses allow today. This is what the actors promoting “enhanced reciprocity licensing” are trying to achieve: licenses that would prohibit for-profit commercial entities from reusing common resources, or impose funding on them in return. We see this type of proposal in a project like CoopCycle, for example, an alternative to Deliveroo or Uber Eats, which refuses to allow its software to be reused by commercial entities that do not respect the social values it stands for. The aim of this new approach, defended in particular by Michel Bauwens, is to protect an “Economy of the Commons” by enabling it to defend its economic independence and prevent it from gradually being colonised and recovered into “Commons of Capital”.


With a project like CHATONS, an actor like Framasoft is no longer so far from embracing such an approach, because to develop its network of alternative hosts, a charter has been drawn up including conditions relating to the social purpose of the companies participating in the operation. It is a first step in the reconciliation between the Libre movement and the SSE (social and solidarity economy), also taking shape through a project like “Plateformes en Communs”, which aims to create a coalition of actors that recognize themselves in both Platform Cooperativism and the Commons. There has to be a way to make these reconciliations stronger and lead to a clarification of the contradictions still affecting Free Software.

Make no mistake: I am not saying that players like Facebook or Google should not pay to participate in the development of free projects. But unlike Katherine Maher, I think that this should not be done on a voluntary basis, because such donations will only reinforce the power of the large centralized platforms by hastening the transformation of the digital Commons into “Capital Commons”. If Google and Facebook are to pay, they must be obliged to do so, just as industrial capitalists came to be obliged to contribute to the financing of the social state through compulsory contributions. This model must be reinvented today, and we could imagine states – or better still the European Union – subjecting the major platforms to taxation in order to finance a social right to contribution, open to individuals. It would be a step towards the “society of contribution” Framasoft calls for, by giving ourselves the means to create it beyond surveillance capitalism, which otherwise knows full well how to subject the Commons to its own logic and neutralize their emancipatory potential.

Photo by Elf-8

The post Are the Digital Commons condemned to become “Capital Commons”? appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/are-the-digital-commons-condemned-to-become-capital-commons/2018/08/03/feed 0 72035
Gabriel Weinberg on why you should use DuckDuckGo instead of Google https://blog.p2pfoundation.net/gabriel-weinberg-use-duckduckgo-instead-google/2018/02/23 https://blog.p2pfoundation.net/gabriel-weinberg-use-duckduckgo-instead-google/2018/02/23#comments Fri, 23 Feb 2018 08:00:00 +0000 https://blog.p2pfoundation.net/?p=69691 Gabriel Weinberg, the CEO and Founder at DuckDuckGo explains the benefits of the project. Extracted from Quora. Gabriel Wienberg: #1 — Google tracks you. We don’t. You share your most intimate secrets with your search engine without even thinking: medical, financial and personal issues, along with all the day to day things that make you,... Continue reading

The post Gabriel Weinberg on why you should use DuckDuckGo instead of Google appeared first on P2P Foundation.

]]>
Gabriel Weinberg, the CEO and Founder of DuckDuckGo, explains the benefits of the project. Extracted from Quora.

Gabriel Weinberg:

#1 — Google tracks you. We don’t.

You share your most intimate secrets with your search engine without even thinking: medical, financial and personal issues, along with all the day to day things that make you, well, you. All of that personal information should be private, but on Google it’s not. On Google, your searches are tracked, mined, and packaged up into a data profile for advertisers to follow you around the Internet through those intrusive and annoying ever-present banner ads, using Google’s massive ad networks, embedded across millions of sites and apps.

So-called incognito mode won’t protect you either. That’s a myth. “Incognito” mode isn’t really incognito at all. It’s an extremely misleading name and in my opinion should be changed. All it does is delete your local browsing history from your device after your session; it does nothing to stop any website you visit, including Google, from tracking you via your IP address and other tracking mechanisms like browser fingerprinting.
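
As a hedged illustration of what “browser fingerprinting” means in practice – not Google’s or anyone else’s actual code – the general idea is that enough ordinary, individually harmless attributes, hashed together, come close to a unique identifier, with no cookies required. The attribute names below are hypothetical examples.

    # Illustrative sketch of fingerprinting: many mundane attributes combine into
    # a quasi-unique, fairly stable identifier.
    import hashlib

    def fingerprint(attributes: dict) -> str:
        """Hash a set of browser/device attributes into a short identifier."""
        canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
        return hashlib.sha256(canonical.encode()).hexdigest()[:16]

    visitor = {
        "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
        "screen": "1920x1080",
        "timezone": "Europe/Amsterdam",
        "language": "en-US",
        "fonts": "Arial,DejaVu Sans,Noto Sans",
    }
    print(fingerprint(visitor))  # the same browser tends to produce the same value on every site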

To keep your searches private and out of data profiles, the government, and other legal requests, you need to use DuckDuckGo. We don’t track you at all, regardless what browsing mode you are in.

Each time you search on DuckDuckGo, it’s as if you’ve never been there before. We simply don’t store anything that can tie your searches to you personally, or even tie them together into a search history that could later be tied back to you. For more details, check out our privacy policy.

#2 — Block Google trackers lurking everywhere.

Google tracks you on more than just their search engine. You may realize they also track you on YouTube, Gmail, Chrome, Android, Gmaps, and all the other services they run. For those, we recommend using private alternatives like DuckDuckGo for search. Yes, you can live Google-free. I’ve been doing it for many years.

What you may not realize, though, is that Google trackers are actually lurking behind the scenes on 75% of the top million websites. To give you a sense of how large that is, Facebook is the next closest with 25%. It’s a good bet that any random site you land on will have a Google tracker hiding on it. Between the two of them, they are truly dominating online advertising, by some measures literally making up 74%+ of all its growth. A key component of how they have managed to do that is through all these hidden trackers.

Google Analytics is installed on most sites, tracking you behind the scenes, letting website owners know who is visiting their sites, but also feeding that information back to Google. Same for the ads themselves, with Google running three of the largest non-search ad networks installed on millions of sites and apps: Adsense, Admob, and DoubleClick.

At DuckDuckGo, we’ve expanded beyond our roots in search, to protect you no matter where you go on the Internet. Our DuckDuckGo browser extension and mobile app is available for all major browsers and devices, and blocks these Google trackers, along with the ones from Facebook and countless other data brokers. It does even more to protect you as well like providing smarter encryption.
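
For readers curious about the mechanics, tracker blocking of this kind generally works by checking each outgoing third-party request against a list of known tracker domains. The sketch below is a simplified, hypothetical illustration – a tiny sample list and a crude third-party test – not the actual logic of the DuckDuckGo extension.

    # Simplified illustration of blocklist-based tracker blocking.
    from urllib.parse import urlparse

    TRACKER_DOMAINS = {"google-analytics.com", "doubleclick.net", "connect.facebook.net"}  # tiny sample

    def should_block(request_url: str, page_url: str) -> bool:
        """Block requests to known tracker domains that are third-party to the page."""
        req_host = urlparse(request_url).hostname or ""
        page_host = urlparse(page_url).hostname or ""
        third_party = not req_host.endswith(page_host)  # crude first-party check, for illustration
        on_blocklist = any(req_host == d or req_host.endswith("." + d) for d in TRACKER_DOMAINS)
        return third_party and on_blocklist

    print(should_block("https://www.google-analytics.com/collect", "https://example.com/article"))  # True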

#3 — Get unbiased results, outside the Filter Bubble.

When you search, you expect unbiased results, but that’s not what you get on Google. On Google, you get results tailored to what they think you’re likely to click on, based on the data profile they’ve built on you over time from all that tracking I described above.

That may appear at first blush to be a good thing, but when most people say they want personalization in a search context they actually want localization. They want local weather and restaurants, which can actually be provided without tracking, like we do at DuckDuckGo. That’s because approximate location info is automatically embedded by your computer in the search request, which we can use to serve you local results and immediately throw away without tracking you.
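
A conceptual sketch of that claim (not DuckDuckGo’s actual code): the approximate location arrives with the request, is used to localise the answer, and then simply goes out of scope instead of being written to a log or a profile. The helper functions are hypothetical stand-ins.

    # Illustrative only: per-request localisation without retention.
    def search_index(query: str) -> list[str]:
        """Stand-in for a real search backend."""
        return [f"result for {query!r}"]

    def local_answers(approx_location: str) -> str:
        """Stand-in for a weather/places lookup keyed on a coarse location."""
        return f"weather and restaurants near {approx_location}"

    def handle_search(query: str, approx_location: str | None = None) -> dict:
        response = {"query": query, "results": search_index(query)}
        if approx_location:
            response["local"] = local_answers(approx_location)  # used only inside this request
        # nothing below stores the query or the location against any user identifier;
        # the location is discarded when the function returns
        return response

    print(handle_search("pizza", approx_location="Amsterdam"))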

Beyond localization, personalized results are dangerous because to show you results they think you’ll click on, they must filter results they think you’ll skip. That’s why it’s called the Filter Bubble.

So if you have political leanings one way or another, you’re more likely to get results you already agree with, and less likely to ever see opposing viewpoints. In the aggregate this leads to increased echo chambers that are significantly contributing to our increasingly polarized society.

This Filter Bubble is especially pernicious in a search context because you have the expectation that you’re seeing what others are seeing, that you’re seeing the “results.” We’ve done studies over the years where we had people search for the same topics on Google at the same time and in “Incognito” mode, and found that the results they got were significantly tailored.

On DuckDuckGo, we are committed to not putting you in the Filter Bubble. We don’t even force people into a local country index unless they explicitly opt-in.

#4 — We listen.

Google is notoriously hard to get a hold of. Locked out of your Gmail account? Sorry, we can’t help you. The Knowledge Graph says you’re dead? That’s unfortunate. Unless you’re a journalist or influencer of some kind, good luck getting anyone at Google to listen.

Meanwhile at DuckDuckGo we read every piece of feedback we get. We respond on social media. In short, we listen. My DMs are open and I read all the email sent to me personally. Feel free to reach out.

#5 — We don’t try to trap you in our “ecosystem.”

It used to be that you search on Google and then you click off to the top result. Over time, Google bought more and more companies and launched more and more of their own competing services, favoring them over others in their search results. Google Places instead of Yelp, TripAdvisor, etc. Google Products instead of Amazon, Target, etc. They’re in travel, health, and soon jobs. Anywhere there is money to be made, you can expect them to get into it eventually.

Even when you do click off, Google AMP tries to still trap you in Google. And these tactics are not just on the search engine.

On many Android implementations there is an immovable Google search widget, and you can’t even change its search engine if you want to. Installed by default like this, it is a direct analogue to Microsoft bundling IE with Windows in the 1990s, but worse, since it takes up more of a smaller screen. The same is true for other Google services on Android as well, forcing carriers to bundle and promote them. We personally have similar issues with Chrome search engine integration.

At DuckDuckGo, we aren’t trying to take over the world. We don’t have an “ecosystem” to trap you in. We just want to help you get to where you want to go as fast as possible, and protect you as much as we can in that process.

#6 — We have !bangs.

To further this point, we have a built-in feature called bangs that enables you to search other sites directly, completely skipping DuckDuckGo if you like. Here’s how it works. Let’s say you know you want to go to the Wikipedia article for ducks. You can just search for “!w duck” and we will take you right there.

The ! tells DuckDuckGo you want to use a bang shortcut, and the w is an abbreviation for Wikipedia. You can use the full name, though we have a lot of shortcuts such as !a for Amazon, !r for Reddit, etc. There are literally thousands of sites that this feature works with, and so most sites you think of will probably work. It also works with our autocomplete so you can see what’s there easily.

If you routinely search a particular site, like Stack Overflow for coding answers or Baseball Reference for stats or All Recipes for something to make, you can just go right there.

If DuckDuckGo is your default search engine, you can just type this right into your browser’s address bar, and skip loading our search engine altogether. We will just route you to the right place, without tracking you of course!
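
For the curious, the routing behind a !bang can be sketched in a few lines. This is an illustrative reconstruction, not DuckDuckGo’s implementation, and the mapping shows only a hypothetical handful of the thousands of bangs mentioned above.

    # Illustrative sketch of !bang parsing and routing.
    from urllib.parse import quote_plus

    BANGS = {
        "w": "https://en.wikipedia.org/wiki/Special:Search?search={q}",
        "a": "https://www.amazon.com/s?k={q}",
        "r": "https://www.reddit.com/search/?q={q}",
    }

    def route(query: str) -> str:
        """Return the URL a query should go to, honouring a leading !bang shortcut."""
        if query.startswith("!"):
            bang, _, rest = query[1:].partition(" ")
            if bang.lower() in BANGS and rest:
                return BANGS[bang.lower()].format(q=quote_plus(rest))
        # no bang (or an unknown one): fall back to an ordinary search
        return "https://duckduckgo.com/?q=" + quote_plus(query)

    print(route("!w duck"))  # sends the query straight to Wikipedia's search
    print(route("duck"))     # ordinary DuckDuckGo search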

#7 — We strive for a world where you have control over your personal information.

Our vision is to raise the standard of trust online. If you share this vision, supporting DuckDuckGo helps us make progress towards it. For the past seven years, we’ve been donating a substantial portion of our profits to organizations that also work towards the Internet we want — an open Internet where you can take control of your personal information.

We believe that privacy policies shouldn’t be default “collect it all,” but instead offer a clear and compelling case as to what benefits you get by giving up your personal information. If you share this view for the future of data privacy, you can vote with your feet.

#8 — Our search results aren’t loaded up with ads.

For many Google searches, the entire first page is ads. On mobile it can be even worse, multiple pages of ads. Not so on DuckDuckGo. We keep ads to a minimum, and naturally they’re non-tracking ads, based only on search keywords and not on a personal profile or search history.

#9 — Search without fear.

When people know they are being watched, they change their behavior. It’s a well-documented behavior called the chilling effect, and it happens on Google. For example, an MIT study showed that people started doing fewer health searches on Google after the Snowden revelations, fearing that their personal ailments might get out.

“Suppressing health information searches potentially harms the health of search engine users and… In general, our results suggest that there is a chilling effect on search behavior from government surveillance on the Internet.”

Your searches are your business, and you should feel free to search whatever you want, whenever you want. You can easily escape this chilling effect on DuckDuckGo where you are anonymous.

#10 — Google is simply too big, and too powerful.

Google is GIANT, the epitome of Silicon Valley big tech, with a market cap of around 750 Billion dollars (at the time of writing), 75,000 employees, dominating search, browsing, online advertising, and more, with tentacles in everything tech, online and offline. Last year they outspent every other company on lobbying Washington.

By comparison, DuckDuckGo is tiny. We’re currently a team of about 45 people, scattered across the globe; I’m in Pennsylvania. We have a very narrow focus: helping you take control of your personal information online.

The world could use more competition, less focus on ad tracking, fewer eggs in one basket.

Join the Duck Side!

 

Photo by Parin S.

The post Gabriel Weinberg on why you should use DuckDuckGo instead of Google appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/gabriel-weinberg-use-duckduckgo-instead-google/2018/02/23/feed 1 69691
Is manufacturing of the future OPEN SOURCE? https://blog.p2pfoundation.net/is-manufacturing-of-the-future-open-source/2018/02/21 https://blog.p2pfoundation.net/is-manufacturing-of-the-future-open-source/2018/02/21#respond Wed, 21 Feb 2018 08:00:00 +0000 https://blog.p2pfoundation.net/?p=69758 In the spring of 2016, Elon Musk and his company Tesla stopped enforcing their patents, and Google, Facebook, Microsoft and IBM are all going open source with various robotics, artificial intelligence and phone projects. A trend is emerging: Is future manufacturing open source? Christian Villum: Giants such as Google and IBM have lately been followed by... Continue reading

The post Is manufacturing of the future OPEN SOURCE? appeared first on P2P Foundation.

]]>
In the spring of 2016, Elon Musk and his company Tesla stopped enforcing their patents, and Google, Facebook, Microsoft and IBM are all going open source with various robotics, artificial intelligence and phone projects. A trend is emerging: Is future manufacturing open source?

Christian Villum: Giants such as Google and IBM have lately been followed by Canadian D-Wave, the leading developer of quantum computers, which opened up parts of their platform in January. But it’s not just the large, financially strong American technology companies who are painting the picture of open source as a global megatrend. Start-ups and small to medium-sized companies all over the world, and not just within the tech industry, are creating new and exciting open source-based physical products. 3D Robotics, Arduino and the British furniture company Open Desk, which is creating open design furniture in collaboration with 600 furniture creators all over the world, are just a few examples of how open source has become the foundation of some of the most innovative and interesting business models of our time.

Danish Design Centre has spent the past year diving into this trend, which is part of a large wave of technological disruption and digitization and is currently top of mind for many companies. How do you get started with digitizing your business model, and how do you know if open source manufacturing is the future of your company? These questions aren’t easy to answer.

Growth programme for curious Danish production companies

This is why we, in collaboration with a range of partners, have initiated REMODEL, a growth programme for Danish manufacturing companies that wish to explore and develop new business models based on open-source principles, tailor-made to fit their industry and their specific situation. REMODEL demystifies a complex concept and helps each company develop economically sustainable business models which can open up new markets and new economies.

We do this by using strategic design tools, which make up the foundation of the programme and are based on strong design virtues such as iterative experimentation, the development of rapid prototypes and, most importantly, a focus on the needs of the end-user. On top of this, REMODEL also involves a global panel of experts, CEOs and researchers within the field of open source, which allows the programme to draw on expertise from some of the world’s most visionary innovators.

Timeline for the programme

REMODEL consists of a series of design-driven stages. Last year the programme was launched in a testing phase in which the Danish Design Centre collaborated with a handful of Danish manufacturing companies, including the renowned hifi manufacturer Bang & Olufsen, who went through early modules of the programme over the course of spring 2017. These modules were revised along the way based on the feedback from those tests.

The key learnings from these tests, as well as workshops with members of the expert panel, then became the foundation for the official REMODEL programme, which launched on February 5, 2018. Ten pioneering companies are currently working their way through the programme, which has been set up as an 8-week design sprint. The outcome is for them to have gained a thorough strategic understanding of the concept of open source hardware as it relates to their industry, and furthermore a draft strategy for opening up one of the existing products in their portfolio.

Radical sharing of knowledge

Learnings, tools and methods from both the test runs and the main programme will be collected and shared in a REMODEL open source hardware business model toolkit, which will be freely available after the program.

On top of this we will be organising a REMODEL knowledge sharing summit in October 2018, where participating companies, the international expert panel, prominent speakers and anyone else who is interested are invited to Denmark to share their experiences and think about the next steps for open source-based business models and strategies for manufacturing companies.

Discussing REMODEL internationally

In March 2018, Danish Design Centre is yet again participating in the world’s largest technology event, SXSW Interactive, in Austin, Texas. We have been invited to host a panel debate as part of the official schedule under the title ‘Open Source Innovation: The Internet on Your Team‘, where speakers from Bang & Olufsen, Thürmer Tools and Wikifactory will discuss the topic in general as well as tell stories from the REMODEL program.

Learn more

Curious to follow the REMODEL program in more depth? Read more here or sign up for the newsletter. Eager to discuss? Join the conversation on Twitter under the #remodelDK hashtag or contact Danish Design Centre Programme Director Christian Villum on [email protected]


Originally published in danskdesigncenter.dk

Lead image: Open Desk builds furniture as open design. (c) Rory Gardiner

Text image: CC-BY-NC Agnete Schlichtkrull

The post Is manufacturing of the future OPEN SOURCE? appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/is-manufacturing-of-the-future-open-source/2018/02/21/feed 0 69758
Frank Pasquale on the Shift from Territorial to Functional Sovereignty https://blog.p2pfoundation.net/frank-pasquale-on-the-shift-from-territorial-to-functional-sovereignty/2018/01/16 https://blog.p2pfoundation.net/frank-pasquale-on-the-shift-from-territorial-to-functional-sovereignty/2018/01/16#respond Tue, 16 Jan 2018 09:00:00 +0000 https://blog.p2pfoundation.net/?p=69274 It is very clear that power in our societies is changing. After the financialization of our economies under neoliberal globalization, we have a new layer of corporate power emerging from the platform economy. This process is very well described by Frank Pascuale in the recommended text we excerpt below, under the concept of Functional Governance.... Continue reading

The post Frank Pasquale on the Shift from Territorial to Functional Sovereignty appeared first on P2P Foundation.

]]>
It is very clear that power in our societies is changing. After the financialization of our economies under neoliberal globalization, we have a new layer of corporate power emerging from the platform economy. This process is very well described by Frank Pasquale in the recommended text we excerpt below, under the concept of Functional Sovereignty. Please read the full text carefully, as well as the videotaped presentation. As Pasquale explains, these netarchical platforms – privately owned platforms that extract value from our own peer to peer exchanges through their ownership of our data, their ability to nudge our behaviours, and their capacity to take over a number of formerly public sector functions – are also threatening democratic accountability and the possibilities of commons-based co-production, co-governance and co-ownership of value creation.

However, this doesn’t mean that we are powerless, and in a coming installment we will propose a strategy that also learns from the innovations of platform capitalism. The following extracts have been sourced from Open Democracy:

Frank Pasquale: As digital firms move to displace more government roles over time, from room-letting to transportation to commerce, citizens will be increasingly subject to corporate, rather than democratic, control.

Economists tend to characterize the scope of regulation as a simple matter of expanding or contracting state power. But a political economy perspective emphasizes that social relations abhor a power vacuum. When state authority contracts, private parties fill the gap. That power can feel just as oppressive, and have effects just as pervasive, as garden variety administrative agency enforcement of civil law. As Robert Lee Hale stated, “There is government whenever one person or group can tell others what they must do and when those others have to obey or suffer a penalty.”

We are familiar with that power in employer-employee relationships, or when a massive firm extracts concessions from suppliers. But what about when a firm presumes to exercise juridical power, not as a party to a conflict, but the authority deciding it? I worry that such scenarios will become all the more common as massive digital platforms exercise more power over our commercial lives.


Focusing on the identity and aspirations of major digital firms. They are no longer market participants. Rather, in their fields, they are market makers, able to exert regulatory control over the terms on which others can sell goods and services. Moreover, they aspire to displace more government roles over time, replacing the logic of territorial sovereignty with functional sovereignty. In functional arenas from room-letting to transportation to commerce, persons will be increasingly subject to corporate, rather than democratic, control.

For example: Who needs city housing regulators when AirBnB can use data-driven methods to effectively regulate room-letting, then house-letting, and eventually urban planning generally? Why not let Amazon have its own jurisdiction or charter city, or establish special judicial procedures for Foxconn? Some vanguardists of functional sovereignty believe online rating systems could replace state occupational licensure—so rather than having government boards credential workers, a platform like LinkedIn could collect star ratings on them.


This shift from territorial to functional sovereignty is creating a new digital political economy.


Forward-thinking legal thinkers are helping us grasp these dynamics. For example, Rory van Loo has described the status of the “corporation as courthouse”—that is, when platforms like Amazon run dispute resolution schemes to settle conflicts between buyers and sellers. Van Loo describes both the efficiency gains that an Amazon settlement process might have over small claims court, and the potential pitfalls for consumers (such as opaque standards for deciding cases). I believe that, on top of such economic considerations, we may want to consider the political economic origins of e-commerce feudalism. For example, as consumer rights shrivel, it’s rational for buyers to turn to Amazon (rather than overwhelmed small claims courts) to press their case. The evisceration of class actions, the rise of arbitration, boilerplate contracts—all these make the judicial system an increasingly vestigial organ in consumer disputes. Individuals rationally turn to online giants for powers to impose order that libertarian legal doctrine stripped from the state. And in so doing, they reinforce the very dynamics that led to the state’s etiolation in the first place.

This weakness has become something of a joke with Amazon’s recent decision to incite a bidding war for its second headquarters. Mayors have abjectly begged Amazon to locate jobs in their jurisdictions. As readers of Richard Thaler’s “The Winner’s Curse” might have predicted, the competitive dynamics have tempted far too many to offer far too much in the way of incentives. As journalist Danny Westneat recently confirmed,

  • Chicago has offered to let Amazon pocket $1.32 billion in income taxes paid by its own workers.
  • Fresno has a novel plan to give Amazon special authority over how the company’s taxes are spent.
  • Boston has offered to set up an “Amazon Task Force” of city employees working on the company’s behalf.

Stonecrest, Georgia even offered to cannibalize itself, to give Bezos the chance to become mayor of a 345-acre annex that would be known as “Amazon, Georgia.”

The example of Amazon

Amazon’s rise is instructive. As Lina Khan explains, “the company has positioned itself at the center of e-commerce and now serves as essential infrastructure for a host of other businesses that depend upon it.” The “everything store” may seem like just another service in the economy—a virtual mall. But when a firm combines tens of millions of customers with a “marketing platform, a delivery and logistics network, a payment service, a credit lender, an auction house…a hardware manufacturer, and a leading host of cloud server space,” as Khan observes, it’s not just another shopping option.

Digital political economy helps us understand how platforms accumulate power. With online platforms, it’s not a simple narrative of “best service wins.” Network effects have been on the cyberlaw (and digital economics) agenda for over twenty years. Amazon’s dominance has exhibited how network effects can be self-reinforcing. The more merchants there are selling on (or to) Amazon, the better shoppers can be assured that they are searching all possible vendors. The more shoppers there are, the more vendors consider Amazon a “must-have” venue. As crowds build on either side of the platform, the middleman becomes ever more indispensable. Oh, sure, a new platform can enter the market—but until it gets access to the 480 million items Amazon sells (often at deep discounts), why should the median consumer defect to it? If I want garbage bags, do I really want to go over to Target.com to re-enter all my credit card details, create a new log-in, read the small print about shipping, and hope that this retailer can negotiate a better deal with Glad? Or do I, ala Sunstein, want a predictive shopping purveyor that intimately knows my past purchase habits, with satisfaction just a click away?

As artificial intelligence improves, the tracking of shopping into the Amazon groove will tend to become ever more rational for both buyers and sellers. Like a path through a forest trod ever clearer of debris, it becomes the natural default. To examine just one of many centripetal forces sucking money, data, and commerce into online behemoths, play out game theoretically how the possibility of online conflict redounds in Amazon’s favor. If you have a problem with a merchant online, do you want to pursue it as a one-off buyer? Or as someone whose reputation has been established over dozens or hundreds of transactions—and someone who can credibly threaten to deny Amazon hundreds or thousands of dollars of revenue each year? The same goes for merchants: The more tribute they can pay to Amazon, the more likely they are to achieve visibility in search results and attention (and perhaps even favor) when disputes come up. What Bruce Schneier said about security is increasingly true of commerce online: You want to be in the good graces of one of the neo-feudal giants who bring order to a lawless realm. Yet few hesitate to think about exactly how the digital lords might use their data advantages against those they ostensibly protect.

Photo by thisisbossi

The post Frank Pasquale on the Shift from Territorial to Functional Sovereignty appeared first on P2P Foundation.

]]>
https://blog.p2pfoundation.net/frank-pasquale-on-the-shift-from-territorial-to-functional-sovereignty/2018/01/16/feed 0 69274
“Developing dissident knowledges”: Geert Lovink on the Social Media Abyss https://blog.p2pfoundation.net/developing-dissident-knowledges-geert-lovink-social-media-abyss/2017/07/12 https://blog.p2pfoundation.net/developing-dissident-knowledges-geert-lovink-social-media-abyss/2017/07/12#comments Wed, 12 Jul 2017 07:00:00 +0000 https://blog.p2pfoundation.net/?p=66547 This post by Jorge San Vicente Feduchi was originally published on La Grieta The hypnotic documentary Hypernormalization, by British director Adam Curtis, takes its name from a concept developed by Soviet writer Alexei Yurchak. In his book Everything was Forever, Until it was No More, Yurchak describes the tense social and cultural atmosphere during the... Continue reading

The post “Developing dissident knowledges”: Geert Lovink on the Social Media Abyss appeared first on P2P Foundation.

]]>
This post by Jorge San Vicente Feduchi was originally published on La Grieta

The hypnotic documentary Hypernormalization, by British director Adam Curtis, takes its name from a concept developed by Soviet writer Alexei Yurchak. In his book Everything was Forever, Until it was No More, Yurchak describes the tense social and cultural atmosphere during the years prior to the collapse of the Soviet Union. As Curtis describes, after decades of attempting to plan and manage a new kind of socialist society, the technocrats at the top of the post-Stalinist USSR realized that their goal of controlling and predicting everything was unreachable. Unwilling to admit their failure, they “began to pretend that everything was still going according to plan”. The official narrative created a parallel version of the Soviet society, a fake reality (like in the home videos of Good Bye Lenin) that everyone would eventually unveil. But even though they saw that the economy was trembling and the regime’s discourse was fictitious, the population had to play along and pretend it was real… “because no one could imagine any alternative. (…) You were so much a part of the system that it was impossible to see beyond it”.

Nowadays, our society is driven by very different forces. We don’t need technocrats to predict our actions; the latest advances in information technology, combined with our constant disposition to share everything that happens to us, are enough for an invisible — and, apparently, non-human — power to define and limit our behaviour. In his book Social Media Abyss, the Dutch theorist Geert Lovink — founding director of the Institute of Network Cultures in Amsterdam — speaks about the dark side of these new technologies and the consequences of our blind trust in the digital industry.

The closest comparison we have today to the New Soviet Man is perhaps the cult of the cyberlibertarian entrepreneur of Silicon Valley. We are now used to thirty-somethings in sweaters telling us, from the ping-pong tables in their offices, that the only road to success, both personal and collective, lies in technology. To oppose them is no easy task: who is going to question a discourse that has innovation and "the common good" at its core? But the internet today hardly resembles the technology that, in its origins, seemed to promise decentralization, democratization and citizen empowerment. Nowadays, the giants of Silicon Valley, led by Facebook and Google, have mutated towards a monopolistic economic model and flirt with intelligence agencies over the exchange of their precious data.

Our relationship with the internet seems to be on its way to becoming something very similar to the later years of the Soviet Union. The Spanish sociologist César Rendueles formulates this concern when questioning the capacity of technology to guarantee a plural and open space: "the network ideology has generated a diminished social reality", he claims in his essay Sociophobia: Political Change in the Digital Utopia. Lovink shares Rendueles' "healthy scepticism" in elaborating what we could call an "internet critical theory". In Social Media Abyss, he marks the post-Snowden era ("the secular version of God is Dead") as the beginning of a general disillusionment with the development of the internet: now we can say that the internet "has become almost everything no one wanted it to be". But even though we know that everything we do online may be used against us, we still click, share and rate whatever appears on our screen. Can we look at the future with optimism? Or are we too alienated, too precarized, too desocialized (despite being constantly "connected") to design alternatives? In the words of Lovink, "what is citizen empowerment in the age of driverless cars?"

The year did not start all that well. The big political changes of 2017 have been, as Amador Fernández-Savater has described, "a kind of walking paradox: anti-establishment establishment, anti-elitist elite, antiliberal neoliberalism, etc.". But fortunately, politics does not consist only of electoral processes. Lovink has spent decades studying the "organized networks" that operate outside the like economy: "The trick is to achieve a form of collective invisibility without having to reconstitute authority". We spoke with him not only about the degradation of the democratic possibilities of the internet (and the chances of coming up with an equitable revenue model for the internet) but also about how to design the alternative.

We may opt for hypernormalizing everything: "nothing to see here, let's keep browsing". Any other option involves theorizing as we advance on our objectives. The answer lies in creating "dissident knowledges".

“Radical disillusionment”

Your latest book starts with the idea that the internet, initially portrayed as a democratizing and decentralising force, “has become precisely everything no one wanted it to be”. The once uncontested Californian ideology is now being challenged for the first time, after the Snowden revelations showed us that we have lost any controlled, pragmatic rule over internet governance. What is our next move?

Geert Lovink

I don't want to make it too schematic, in terms of chronology. But because the internet is still growing so fast, it is really important to ask ourselves: "where are we?" This was really the beginners' question, but for a while the discussion turned to what it could become. The Snowden revelations, together with the 2008 crisis, should make us go back to the original question: where is the internet now?

I like to see the internet as a facilitating ideology. This is a notion that comes from Arthur Kroker, a Canadian philosopher working in the tradition of Marshall McLuhan. It is obviously not repressive, let alone aggressive, as it does not inflict any physical violence on you. But what it does is facilitate.

Since the 2000s and the so-called Web 2.0, the internet has been primarily focused on its participatory aspect. Everywhere you go you are asked not simply to create a profile, but to contribute, to say something, to click here, to like… The internet these days is a huge machine that seduces the average user without people necessarily understanding that what they do creates an awful lot of data.

The fact that we are not aware of what the data we produce is used for seems to be the problematic aspect. Precisely one of the defining phrases of the book is that "tomorrow's challenge will not be the internet's omnipresence but its very invisibility. That's why Big Brother is the wrong framing". On the internet, power operates in the collective unconscious, more subtly than a repressive force. In fact, "the Silicon Valley tech elite refuses to govern", you say; "its aim is to achieve the right for corporations to be left alone to pursue their own interests". So how would you better describe this form of power?

Yes, you can see that even after Trump's win. They take the classic position of not governing. This is in a way a new form of power, because it's not quite Foucauldian. Even though we would love to believe that it is all about surveillance (and the NSA of course invites us to go back to this idea), the internet is in a way post-Foucauldian. If you read Foucault's last works, he invites us to that next stage, to see it as the Technology of the Self. That would be the starting point for understanding what kind of power structure is at stake, because it is facilitated from the subject position of the user. And this is really important to understand. All the Silicon Valley propositions or network architectures have that as the starting point.

Nowadays, surveillance is really for the masses and privacy is for the upper class

In a way, this invites us (the activists, the computer programmers, the geeks) to provoke the internet into showing its other face. But for the ordinary user this other face is not there. And when I say ordinary I mean very ordinary. If you look at the general strategy, especially of Facebook, the target is this last billion, made up of people living far below the poverty line. When we're talking about the average internet user, we are not talking about affluent, middle-class people anymore. This is really something to keep in mind, because we need to shed this old idea that the internet is an elitist technology, that the computers were once in the hands of the few, that the smartphone is a status symbol, etc. We are really talking about an average user who is basically under the new regime of the one percent, really struggling to stay afloat, to stay alive.

So when I say invisibility, I mean that this growing group of people (and we're talking about billions across continents) are forced to integrate the internet into their everyday struggles. This is what makes it very, very serious. We're not talking about luxury problems anymore. This is a problem of people who have to fight for their economic survival, but who also have to worry about their privacy.

That is what I call facilitating. When we are talking about facilitating, it also means that we are dealing with technologies that are vital for survival. This is the context in which we are operating now when we hear that the internet has been democratized. It doesn't mean that there is no digital divide anymore, but the digital divide works out in a different way: it's no longer about who has access and who doesn't. It's probably more about services, convenience, speed… and surveillance. Nowadays, surveillance is really for the masses and privacy is for the upper class. And then the offline is for the ones who can really afford it. The ones who are offline are absolutely at the top. It didn't use to be that way; it used to be the reverse. These are really big concerns for civil society activists and pro-privacy advocates.

The social in social media

This brings us to the issue of "the social in social media". You call it an 'empty container', affected by the "shift from the HTML-based linking practices of the open web to liking and recommendations that happen inside closed systems", and call for a redefinition of the 'social' away from Facebook and Twitter. Could you develop this idea?

It is really difficult these days to even imagine how we can contact people outside of social media. In theory it's still possible. But even if you look at the centralized email services, like Hotmail, Yahoo and Gmail, they are now completely integrated into the social media model and they are, in fact, its forerunners. However, the problem really starts with the monopolistic part of the platform: the invisible aggregation that happens in the background, which most users have no idea about. Even experts find it very hard to really understand how these algorithms operate.

In this field, where there are a lot of academics but no critics, there is an enormous overproduction of real life experience and practice

Why has there not been any attempt from political science or sociology, at least that I know of, to theorize the Social in Social Media? Obviously this is because the 'social' in scientific terms has really been reduced to the question of classes. But the idea that you can construct the social… sociology has a hard time understanding this. Historically it would understand that the social consists of the tribe, the political party, the Church, the neighborhood, etc. We know all the classic categories. Maybe, for the slightly newer categories, it would talk about subcultures or gender issues. These are the "new" configurations of the Social.

But the idea that communication technology can construct and really configure the social as such, despite all the good efforts of science and technology scholars, has caught them by surprise. I think this is especially due to the speed and the scale; the speed at which the industry established itself and the scale of something like Facebook, which now connects almost two billion people. If you had told that to someone 20 or 30 years ago, it would have been very difficult to imagine how a single company could do that.

Something that is clear in your work is the need to take technology seriously. Rather than falling into the trap of "offline romanticism" (or its counterpart, "solutionism"), you are interested in the "organized networks" that are configured in this day and age, because technology is going to stay whether we want it or not. At the same time, you appeal to the importance of theory. "What is lacking is a collective imagination (…). We need to develop dissident knowledges", you say. What is the role of theory in all this? Isn't there a sense of urgency to act right now?

The urgency is felt by young people. I can only point to the numerous experiments going on at the moment, which could tell us something about the models that could work. What is important now is to write down the stories of those who are trying to create alternative models and to really try to understand what went wrong, in order to somehow make those experiences available for everyone who enters this discussion.

In this field, where there are a lot of academics but no critics, there is an enormous overproduction of real life experience and practice. However, there is almost no reflection happening. This is in part because the people who build the technologies are quite entrepreneurial or geeky and they don't necessarily see the bigger picture. So that is our task; that is what projects like the MoneyLab network aim for.

The entire industry is not changing fast enough to accommodate the rising group of precarious workers

Internet revenue models

One of the big problems with this lack of theorization, as you point out in the book, is that the internet was not built with a revenue model in mind. We pay for access, hardware and software but not for content, so there are fewer and fewer opportunities to make a living from producing it. You call it "anticipatory capitalism": "if you build it, business will come", they tell us. What is even more striking is that your own experience from decades ago seems to show that little progress has been made. This lack of direction has given rise to a number of contradictions; for instance, freelance work, "simultaneously denounced as neoliberal exploitation and praised as the freedom of the individual creative worker".

In a way, the internet today has a very traditional financial model. It is essentially based on targeted advertising, which already existed in the past but was not focused on the individual. This caught me by surprise as well, because I thought, especially in the early 2000s, that advertising in an internet context was more or less dead, that beyond the web banner there wasn't really much else. Of course, there was e-commerce, but that's something different, because then you are purchasing something; there is a real money transaction.

What really remains unsolved (and not much has changed since the 1980s) is the problem of how to pay the people who produce the content. The entire industry is not changing fast enough to accommodate the rising group of precarious workers. We can see some solutions on the horizon, going in different directions, but again we have to fight against the free services of Facebook, Google and all these other companies based on advertising and data resale, who will always try to sabotage or frustrate their implementation, because, obviously, it is not in their interest that these new models start to work.

The only thing we can say is that, luckily, since 2008 something has been happening in different directions. And the more we try, the more certain we can be that, at some point, something will work out. To just wait until the industry solves it is not going to work because, again, we know the main players will frustrate these developments, since that would mean the end of their revenue model.

These strategies will only work if they become ubiquitous, if they are somehow integrated into the plan of becoming invisible

What happens with some of these advancements, like crowdfunding, is that while they are portrayed as alternative models, they still don't solve the question of how people get paid for the content they produce.

The thing with crowdfunding, for instance, is that while it can work (and I know it has worked for many friends of mine), it usually only works once. It is very difficult to repeat. I find the Patreon model more interesting, in which people subscribe to you as an artist, or a writer, or a magazine, and have the possibility to fund you over time. That goes back to my previous idea that the internet should have developed around the subscription model, but it didn't, and I think that's a lost opportunity. Even if it gains momentum again in 10 or 20 years, it already means that numerous generations, including my own, have been written off. At the moment, we are still supposed to contribute to the internet, to bring our content online, discuss, organize and so on, without anything coming back to us.

Some of these models, however, can easily be mistaken for an act of charity.

At the moment, while we are still on the defensive, every attempt to put the revenue model question on the table and bring money back to the content producers is a good thing. Kim Dotcom, for instance, is planning to launch a kind of revenue model system connected to bitcoin. He is of course speaking to a really broad, mainstream culture. On the more obscure side we have this cryptocurrency experiment called Steemit, which also works with the idea that if you read something and you like it, you pay for it.

First, we have to understand that these strategies will only work if they become ubiquitous, if they are somehow integrated into the plan of becoming invisible. Because if they aren't, if time and again you have to make the payment a conscious act, it is not going to work. These payments, or this redistribution of wealth and attention, need, in the end, to be part of an automated system. And we have to fully utilize the qualities and the potential that the computer offers us in order for it not to remain a one-off gift. Because it's not a gift. We are not talking about charity.

Designing alternatives

So you have a precarious youth, with high levels of disenchantment and short attention spans, living within a system that seems to absorb whatever is thrown against it and come out even stronger after crises.

It feels like social media and the entrepreneurial industry are designed for non-revolt. Because "we are Facebook": you are the user all the time. Some would say that for us to move forward all we have to do is stop using these platforms. But is that really the move?

I find it difficult to make any moral claims because of how it has all turned out. The exodus from Facebook, for instance, is a movement which already has a whole track record in itself. I myself left in 2010, six years after it was launched. And I was already feeling mainstream then because I left with 15,000 other people! So already by then it felt as if I was the last to leave. This discussion has been with us for quite some time now and it feels like, especially here in the Netherlands, it never proved very productive to call for this mass exodus.

The one approach I am particularly in favor of is that of the smaller groups, the "organized networks", which do not necessarily operate out in the open on the big platforms. I say that because, if you start operating there, you'll see that the network itself invites you to enter its logic of very fast growth, if not hyper-growth. For social movements, this is something very appealing.

Yes, it feels like now it’s all measured by followers, even social movements.

Exactly, we cannot distinguish the social movement from the followers anymore. This is the trap we are in at the moment, so in a way we have to go back to a new understanding of smaller networks, or cells, or groups. It is no surprise that many people are now talking about moving towards a new localism, because the easiest way to build these smaller groups is to focus on the local environment. But that's not necessarily what I have in mind: I can also imagine smaller, trans-local networks.

The point is to really focus on what you want to achieve without getting caught in this very seductive network and platform logic. You must be very strong, because it is something like a siren: you are bound to the ship and seduced by her; but this type of network logic will not work in your favor, in the short term or in the long term.

Can you build an autonomous structure that maintains its momentum, that can exist over time?

In a recent article on open!, ‘Before Building the Avant-Garde of the Commons’, you defined the commons as an “aesthetic meta-structure”, or a collection of dozens of initiatives and groups that come together but are also in tension. Is there no place, or no need, for a sort of collective plan?

That's when we enter the debate about organizing. Some people say 'yes', and the obvious answer to that is the political party. The political party is not a network, it is not a platform. Of course, there are many ways to do this, and in different countries there are many traditions of how to operate a political party, but this is not necessarily what I have in mind. I am still trying to understand ways of organizing the social that might have a political-party component but are not reduced to or overdetermined by it.

We are not talking anymore about the old division between socialists and anarchists, or the street and the institution. What is interesting now is: can you build an autonomous structure that maintains its momentum, that can exist over time? This is the big issue for both social networks and social movements these days. Social movements come and go very fast. On the one hand, the speed is exciting if you are into it; it has a seductive side, and this is of course related to the network effect. But the frustration is also very big, because you come back one week later and it's gone. You cannot find any trace of it.

The problem, of course, is when the effect stays within social media and doesn't translate into other realms. "When do we stop searching and start making?", you ask in your book.

Those other realms are very diverse, even in terms of social relations between people, organizational capacities, or even policy, for that matter. The key debate here remains durability. Try something that might last for a year; go ahead. That would really transform something. I am talking about that type of commitment, that kind of expression of the Social.

In Spain we had the indignados movement back in 2011. I think one of the successes of that movement was that it showed a lot of people what else was out there. And, while at some point it might have seemed as if it was vanishing, it actually created all these little networks that we are today seeing translated into a bunch of different initiatives, not all of them exclusively political (although the discussion has been heavily monopolized by the institution-street dichotomy). Is there something to learn from these experiences?

Again, what I am interested in is reading what has been going on, and in having people outside, but also inside, Spain find out about it. What has worked and what has not worked? Tell the story and share it with others. This is the way forward. One of the problems is to find a trigger, to see where things can accelerate, where new forms of organization can take shape. But again, I think that this only happens if you start to try. If we don't try and just wait, nothing is ever going to happen. This is the same issue as with the internet revenue models: "try something, do it", because it will not resolve itself, even more so with the more political, social forms.

I still strongly believe in more local experiences, because even the 2011 movement, where there was a very interesting dynamic at play, wasn't necessarily local. And that experience is still ahead of us. At the moment it feels like things are more defined by lifestyle, by generation or by some kind of general discontent, a very diffuse feeling that "it can't go on like this anymore". Usually this means that people start to become active when they know they have very little to lose and they are thinking "the current situation is not going to bring me anything in the foreseeable future". This is the moment when you can share that discontent with others and start to become active, "get the ball rolling". And it is possible that these days technology will play a less important role, and we can forget the whole naive idea that there were Facebook or Twitter revolutions, which we of course learned afterwards wasn't quite the case.

What if we take those social media very seriously, so seriously that they become part of the public utilities?

Last year, I heard Pierre Lévy say at Medialab Prado that it may be a better strategy to use the existing social networks and apps instead of trying to constantly make the public change their platform. Is that too optimistic?

Well, first of all, when the moment is there and people need to do something, it is going to happen regardless. Regardless, also, of what I think or what Pierre Lévy thinks. If you think in terms of necessities, and of the making of history that grows out of them, then the question may not be very important.

The things that I'm talking about are much more on a conceptual level. It means that you need to have a longer-term view on which all these things are based, and then think about how they can develop further in alternative directions. In technology we know that these concepts are very important. That's why I emphasize that we need to do a lot of experiments and report on them. Because maybe in the larger scheme, when we are talking about really big events or changes, all these concepts may not be very relevant; but if you take one step down and think in a more evolutionary mode about how these technologies developed further, it is indeed very relevant. Just think of what may have happened if, 20 or 30 years ago, people had thought more carefully about the revenue model situation, for instance. That may have made the difference for millions of people.

There is another consideration we can make. I understand that Pierre Lévy says we should use the existing technologies more efficiently. But obviously other people say we can only use the social media that exist now in a more emancipatory way if these platforms are socialized, if we really take over their ownership. That is a very interesting and radical proposition that other people have started to work on. What if we take those social media very seriously, so seriously that they become part of the public utilities? This is an interesting development in which the emphasis is not so much on alternatives or on the conceptual level.

But then again I would say that even if it is socialized, it would be in dire need of radical reform from the inside. I have theorized a lot about that. I think where social media really fails is that it doesn't offer any tools, and this is a real pity. Google is a bit more interesting in that respect, because it comes from an engineering background… but precisely because of that, Google has failed in the social media realm, even though they have tried a lot of things. So it is interesting to investigate further how this utility and this invisible nature relate to a more conscious use of the tools they provide.

These are two directions that are quite contradictory at the moment. On the one hand there is the whole technological development, which is definitely moving into that realm of invisibility; just look at the Internet of Things. On the other hand there is the aspect of democratization and politicization of the tool. These two strategies don't necessarily have to be opposed, but at the moment it seems quite difficult to bring them together.

Photo by basair

The post “Developing dissident knowledges”: Geert Lovink on the Social Media Abyss appeared first on P2P Foundation.
