P2P Foundation

Researching, documenting and promoting peer to peer practices

Archive for 'Networks'

Dmytri Kleiner on how to set up a publicly accessible Tor-based forum

Stacco Troncoso
11th August 2014

My criticism of Facebook and other sites is not that they are not useful; it is rather that they are private, centralized, proprietary platforms. Also, simply abstaining from Facebook in the name of my own media purity is not something that I’m interested in: I don’t see capitalism as a consumer choice, and I’m more interested in the condition of the masses than in my own consumer correctness. In the end, it’s clear that criticizing platforms like Facebook today means using those platforms. Thus, I became a user and set up the Telekommunisten page. Unsurprisingly, it’s been quite successful for us, and it reaches a lot more people than our other channels, such as our websites and mailing lists. Hopefully it will also help us promote new decentralized channels as they become viable.

Dmytri Kleiner

I couldn’t agree more; using the master’s tools to help bring down a rapidly collapsing house is better than leaving those tools untouched. This certainly doesn’t stop us from finding or creating new tools to repurpose what remains. The shortcomings and profit-driven design imperatives of these platforms should be well understood by their users, and that is what we strive to do at the Foundation: educating users on the full spectrum of ideas related to Social Media.

Personal reflections aside, it’s good to see Kleiner and the gang at ThoughtWorks Werkstatt Berlin still leading the way in combining the use of proprietary platforms with surveillance-conscious tools. Their latest creation in this burgeoning space is Werkstatt Groups, a web forum running on a Tor hidden service!

How have they achieved this? Kleiner explains it in the article below (originally published in his blog).



ThoughtWorks Werkstatt Berlin hosts many different working groups, including several Cryptoparties, The Kids’ Hacker Club, and the Marx-Engels Werkshau group. In order for the groups to plan and stay in touch with each other in between their meetings at Werkstatt, we have implemented Werkstatt Groups, an online discussion forum based on NodeBB.

Creating a discussion channel for Werkstatt is tricky, since working group participants range from Tor project contributors, who are very knowledgeable and concerned about technology and privacy issues, to kids, to political activists, who have other interests and areas of focus and may still be learning about technology and privacy issues. So the Werkstatt Groups platform needs to be usable across that spectrum: a place where privacy experts and privacy novices can interact online.

Looking at the options available, a simple web forum emerged as the most reasonable choice. With the many working groups at Werkstatt, managing dozens of mailing lists seems unworkable. Usenet, alas, has become entombed behind paywalls and is inaccessible to most people, except through untrusted interfaces like Google Groups. Platforms that offer groups functionality, like Facebook, obviously have privacy issues, among many others, and old favourites like IRC and Jabber are not particularly suitable for asynchronous group discussion.

So how to set up a web forum that respects privacy? Run it on a Tor hidden service!
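In Tor terms, that means publishing a hidden service that points at the port the forum listens on. As a minimal sketch (the directory path here is illustrative, not the actual Werkstatt configuration; 4567 is NodeBB’s default port), the torrc entries look like this:

```
# torrc — publish a hidden service for a forum listening locally
HiddenServiceDir /var/lib/tor/forum_hidden_service/
# Map port 80 of the .onion address to the forum's local port
HiddenServicePort 80 127.0.0.1:4567
```

After restarting Tor, the `hostname` file generated inside HiddenServiceDir contains the .onion address to hand out to users.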

Before I explain how this was done, I need to start with a disclaimer: Werkstatt Groups makes no guarantees of privacy or anonymity. Tor is designed to provide anonymity, but identifying all the possible ways in which the software running the forum may leak information is not easy, so use caution and report any issues or potential issues to us.

There are two ways to access this site; the recommended way is Tor Browser. Downloading and installing the Tor Browser Bundle takes seconds and ensures that all your browser traffic goes over Tor, that your browser doesn’t leak any information, and that it is difficult to fingerprint.

Using Tor Browser, you can access Werkstatt Groups at this URL: http://vgnx2fk2co55genc.onion. Note that HTTPS is not used; the connection is already encrypted by Tor.

The other way of accessing it is by way of the public URL, http://groups.werkstatt.tw, which redirects to HTTPS when you access the forum. This is a reverse proxy running on a different server than the one that hosts the hidden service; it reaches the hidden service over the Tor network. The site is thus publicly accessible outside the Tor network by way of a public URL, while the location of the hidden service is never revealed.
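One way to build such a proxy — a sketch under stated assumptions, not necessarily the exact Werkstatt setup — is to forward a local port through Tor’s SOCKS interface (here with socat, assuming Tor’s SOCKS listener on its default 127.0.0.1:9050) and point nginx at that tunnel:

```shell
# Expose the hidden service on a local port via Tor's SOCKS proxy
socat TCP4-LISTEN:8080,fork,reuseaddr,bind=127.0.0.1 \
  SOCKS4A:127.0.0.1:vgnx2fk2co55genc.onion:80,socksport=9050
```

```
# nginx — terminate HTTPS publicly and relay to the Tor tunnel above;
# the Upgrade/Connection headers keep NodeBB's websockets working
server {
    listen 443 ssl;
    server_name groups.werkstatt.tw;
    # ssl_certificate / ssl_certificate_key directives omitted here

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

Since this proxy server only ever talks to the .onion address over Tor, it never learns where the hidden service is actually hosted.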

The NodeBB platform itself is very dynamic and responsive, and makes heavy use of websockets by way of socket.io. This is very advantageous over Tor: a request to a hidden service needs to traverse six different servers, making full page loads very expensive, and minimizing page loads in favour of websocket requests compensates for this.

However, NodeBB also has some drawbacks: the platform uses Gravatar and Google Fonts, and socket.io includes a Flash fallback option, so a small Flash object is loaded in the site. All these issues are fixable and are on our issues list; however, the best way to defend against these kinds of issues is to use Tor Browser. That way, even requests to Gravatar and Google Fonts go over Tor, and potentially dangerous plugins like Flash are blocked. Even so, JavaScript running in the browser is always a security concern, as exploits are possible. Also, NodeBB is beta software in very active development, and we are running the bleeding-edge head of the branch, so expect glitches and some downtime.

OK, OK, so with all that out of the way, here is how the setup works. If all you want to do is use the forum, just get started here: http://groups.werkstatt.tw. If you want to know how the setup works, keep reading. This assumes a relatively expert knowledge of server setup, including node, tor, nginx and iptables.
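On the iptables side, one common hardening step — an illustrative sketch, not the published Werkstatt rules — is to lock the forum’s port down so that it is reachable only over the loopback interface, i.e. only by the local Tor daemon:

```shell
# Allow connections to the forum port (NodeBB defaults to 4567)
# only over loopback; drop everything arriving on real interfaces
iptables -A INPUT -i lo -p tcp --dport 4567 -j ACCEPT
iptables -A INPUT -p tcp --dport 4567 -j DROP
```

This prevents the forum from ever being contacted directly at the server’s public IP, which would both bypass Tor and reveal the hidden service’s location.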

Please visit Dmytri Kleiner’s blog for the full technical details.


Posted in Collective Intelligence, Culture & Ideas, Networks, Open Innovation, P2P Action Items | No Comments »

The P2P Foundation joins the Post-Growth Alliance

Stacco Troncoso
2nd August 2014


Image Credit: Films for Action

The P2P Foundation is glad to announce its participation in the Post Growth Alliance. The Alliance, created by our friends at the Post Growth Institute, has initiated dialogues among a growing number of collectives spanning 14 countries and with a commonly agreed-on set of values. The groups involved work tirelessly on behalf of ecological and social justice, post-growth strategies, stewardship of the Commons and, of course, P2P. The Alliance aims to mutualise our social media reach by cross-posting curated content on subjects that matter.

Given our informed choice to use netarchical platforms to spread our message (and, incidentally, criticise netarchical platforms and their biases), the Alliance’s combined social media reach of over 2 million followers is not a bad thing. To be precise, as of July 7th there were 2,269,659 followers (Facebook: 2,021,404; Twitter: 248,255). The combined figure is increasing daily by about 4,000. Over time, we would like to see the incorporation of more decentralised social media channels into that equation.

To give an example of its current reach, Share the World’s Resources’ recent FB post on worldwide economic sharing, republished by the Alliance, has received over 20,000 views on the Post Growth Institute’s page alone. If you want to follow all the groups involved in the PGA on Facebook, you can add them quickly through this list. If you want to follow them on Twitter, click here.

Amongst other things, the PGA organized a record-breaking Silent Skype Meeting with representatives from most of the collectives. We’ve benefited in other ways aside from the social media reach the Alliance offers us, including very constructive dialogues with the Sustainable Economies Law Center on the subject of Open Coops and with Share the World’s Resources on strategies for a Partner State. We really want to thank Donnie Maclurcan, Becky Hollender and the rest of the team for organising such a great initiative. Please read more about the Post Growth Alliance in the material below.

Post Growth Alliance

Together let’s shape the more beautiful world our hearts know is possible!

In a groundbreaking development, 50 organizations have come together to form the Post Growth Alliance. These groups will selectively cross-post the very best content that is helping to motivate systemic change and reshape our world(views).

As of July 7th, the PGA’s combined social media reach is a staggering 2,269,659! (Facebook: 2,021,404; Twitter: 248,255). The combined figure is increasing daily by 3,856.

Post Growth Alliance – Overview

What is it?

A low-key alliance of like-minded groups, using a simple strategy to harness collective reach in order to enhance the individual impacts of each group and grow the broader Post Growth movement.

How does it work?

The group does two simple things:

1. Updating – at an annual, 1-3 hour typed Skype meeting, we keep each other updated on each organization’s plans, offers and needs.

2. Sharing – via a ‘blind’ group email, we receive a limited amount of high-quality content from alliance group members, which we agree to consider reposting via our social media and other marketing channels.

How does the updating process work?

Once a year we hold a fast-paced, online, silent (typed) meeting via Skype (for more about how such a process works, read here). Every group has a representative present who shares pre-prepared information with the group (i.e., copying and pasting information into the chat). Groups are encouraged to submit their content through an existing form in advance of the meeting, in case their representative is unable to attend for technical, scheduling or other reasons. The meeting’s format is as follows:

  1. Welcome by the chair and explanation of process (10 min.)
  2. Groups then present dot-point, pre-prepared organizational updates (organizations present in alphabetical order, at the chair’s prompting) (100 min.)

The chair will then open the floor for all groups to comment on or ask questions of the presenting group. In order to keep the meeting fast, members are encouraged to follow up any matching of offers/needs outside the meeting.

  3. Nomination of new members (any alliance member may nominate another group; acceptance will be by group majority, with the Post Growth Institute holding the right to veto) (10 min.)
  4. Reflections on process/appreciations (10 min.)
  5. Any other business (5 min.)
  6. Close

How does the sharing process work?

Each alliance member organization has a representative who agrees to be on the alliance email list. Each member organization may send a maximum of four items per year (links to articles, campaigns, videos or images – they don’t even have to be your own, or original, work) to the group, using a set template (currently under design, possibly a Google Form) for Facebook, Twitter and newsletter messages. Each alliance member is encouraged to repost (either as a copy/paste function or a retweet/share) all content from other members, but the decision to repost is always voluntary, with each group able to use discretion. The Post Growth Institute will moderate/curate the content.

How is the alliance intellectually/philosophically like-minded?

Alliance members have been selected on the expectation of agreement with the Post Growth Institute’s Starting Positions, as well as their anticipated interest in cross-promoting content from other alliance members. These positions are:

1. All people can live one-planet lifestyles in ways that bring increased peace and prosperity from the personal to the global scale

There are a myriad of inspiring and empowering initiatives occurring worldwide that serve as examples of what our world can look like if we move beyond current trends that focus on personal gain, private profit, materialism and economic growth. By highlighting, connecting and supporting these initiatives we can help accelerate our global transition towards sustainable and resilient prosperity.

2. One-planet lifestyles acknowledge physical limits to economic growth on a planet with finite resources

Economies exist within the physical environment. Their existence relies upon the continued use of natural resources like water, forests and agricultural land. These natural resources are either non-renewable (limited in total amount) or are produced at a rate that is limited by the environment’s ability to regenerate them. The other side of this is nature’s ability to absorb the wastes that we produce.  If economies produce waste faster than nature can absorb that waste, we undermine the planet’s ability to sustain human existence.

We are already using natural resources at a rate higher than that at which they are naturally renewed and creating wastes faster than nature can absorb them (known as ecological overshoot).  Continued economic growth will only worsen this predicament. One-planet living acknowledges that we can, and must, mould our economies to fit within the limits imposed by our physical environment.

3. One-planet lifestyles acknowledge the pressures a growing human population, with highly inequitable patterns of production and consumption, place on a planet with finite physical resources.

Every human on Earth must consume natural resources to live.  If we are to survive and thrive into the future, we must together consume within natural boundaries and produce less waste than nature can absorb. Some of us are consuming far more than our fair share of resources and producing excessive waste, while the total population is growing. We need to address inequalities and find ways to maintain a better balance.

4. One-planet lifestyles also acknowledge that advances in technology do not mean we can keep growing indefinitely

Technology cannot create something from nothing. For example, technology can’t change the fact that there is a limited amount of oil; it can only squeeze a little more use from existing reserves. In a world with more people and higher rates of consumption, increases in technological efficiency can, at best, buy us more time before such gains are cancelled out by further growth.

Globally, improvements in the efficiency of technologies, or even leaps to other substitutes, have not been able to offset overall increases in resource consumption and waste.  In fact, these improvements in efficiency have, in many cases, driven more wasteful attitudes and increased overall consumption (see “Jevons Paradox”).  Rather than relying on technology alone, we must challenge the obsession with infinite growth on a finite planet.




Posted in Activism, Campaigns, Collective Intelligence, Commons, Crowdsourcing, Culture & Ideas, Featured Content, Networks, Original Content, P2P Collaboration, P2P Foundation, Sharing | No Comments »

A new start for the Free Knowledge Institute and the Free Technology Academy

Marco Fioretti
21st July 2014

The Free Knowledge Institute is a hub connecting networks and communities across multiple domains, facilitating and enabling the study, sharing and collaborative development of free knowledge and free technologies for a socially just, free-knowledge society (disclaimer: I have been on the FKI board since last September). The FKI is happy to announce a reboot of its online infrastructure, as well as that of the Free Technology Academy (original announcement here).


As you may have already noticed, the several wikis and other websites of the Free Knowledge Institute and the Free Technology Academy have published very little “good” content lately. This was due to a combination of two factors:

  • the software infrastructure had become too complex to maintain with the available resources. This both opened it to spammers and made it too difficult to adapt to the current resources and needs of FKI/FTA
  • the new FKI Board, established last autumn, needed time to define a new structure and a migration strategy compatible with the available resources, but also able to save as much as possible of the old websites

Today, the FKI Board is happy to announce that this analysis is complete and that we have finally started restructuring the websites. For the reasons explained above, all the content hosted until today at:

  • freeknowledge.eu/wiki/
  • campus.ftacademy.org/wiki/
  • campus.ftacademy.org/community/
  • campus.ftacademy.org (where the online courses take place)

will be taken offline within this week. Immediately afterwards, we will migrate and update all and only the content that is still relevant (even if only from a historical point of view) to brand-new wikis.

All the accounts of the wikis and portals listed above will be deactivated. Everybody wishing to be part of the FKI/FTA community is kindly requested to register (again) in the new wikis that will be announced. For the time being, the community portal will be replaced by the FTA discussion mailing list, which you are all welcome to join.

This is unfortunate, but it is also the only way to restore an adequate Web presence for FKI/FTA, and let us restart with new
activities to fulfil our mission, including but not limited to:

  • new courses next September, in a brand new Moodle environment.
  • launch of a crowdfunding project to update all the official courseware of the Free Technology Academy

Details of all the actions described in this announcement will be posted soon. Stay tuned, and if you have any question, please
contact us or join our mailing list.


Posted in Free Software, Networks, P2P Collaboration, P2P Education | No Comments »

Essay of the Day: Why the Soviet Internet Failed

Stacco Troncoso
20th July 2014

Ben Peters, Assistant Professor of Communication at the University of Tulsa, presents preliminary findings of a dissertation chapter examining why the Soviets did not succeed in building an ARPANET equivalent. In particular, he examines Soviet bureaucratic and social structures as decentralized networks, compares them to conventional critiques of centralized power, and speculates on the chapter’s relevance for modern-day practices of power distribution.


“Why wasn’t there a Soviet equivalent to the US ARPA NET? Building on fresh archival evidence, this paper examines several surprising leads: one, that the first person anywhere to conceive of and propose a national computer network for civilian use appears to have been the Soviet cyberneticist and Engineer Colonel Anatolii Kitov; two, that Soviet economic cybernetics tried repeatedly but did not succeed in building such a network; three, that the collective failure comes in part due to unregulated bureaucratic competition and infighting over resources within the Soviet state and academy (while the US ARPANET and French MINITEL networks initially benefited from centralized state subsidy) and in part due to the untenably comprehensive and hierarchically decentralized design in vogue among Soviet cybernetists in the 1960s. The fact that cybernetics was a discursive vehicle for reform-oriented science in the early 1960s makes its failed contributions that much more culturally poignant. These and other ironies are explored.”

Read the full text here


Posted in Empire, Featured Content, Featured Essay, Networks, Politics | No Comments »

Podcast of the Day: Simona Levi on the Anti-Party Aspects of Partido X

Stacco Troncoso
18th July 2014

This interview predates last May’s EU elections, but it still makes for very interesting listening.


Posted in Activism, Collective Intelligence, Featured Content, Featured Podcast, Media, Networks, Open Government, Open Innovation, Open Models, P2P Public Policy, Podcasts, Politics | No Comments »

Wirearchy 5: Hacking As Purposeful Organizational Change

Stacco Troncoso
27th June 2014

Jon Husband

Welcome to the fifth and final essay in our series exploring Wirearchy, “The power and effectiveness of people working together through connection and collaboration…taking responsibility individually and collectively rather than relying on traditional hierarchical status.”

Today’s chapter is short and sweet: Jon Husband, the creator of Wirearchy, talks about the possibility of scaling up grassroots and collaborative organization.

For the past several years we’ve heard lots about BarCamps, WordCamps, BookCamps, GovJams, Unconferences, Hackathons and various forms of collaborative spaces, etc. All of these represent forms of organization in which people come together and group around a purpose with the objective of carrying out some practical experiments. Typically today such groupings are invited, planned and often facilitated by people connected online to other people because of affinities of purpose, interest, values or skills.

The aim is to see what can get done when a bunch of people with passion, similar interests and diverse skills come together and get started at seeing what the results of focused collaboration might be.

Why can’t that be done by larger organizations, and come to be seen as a ‘strategic business process’, a form of crowd-solving? Why not hack onerous and outdated HR processes and policies? Or ask people to tackle other problematic areas of an organization’s operations?

I believe there are some early examples in, for example, IBM’s large-scale and sometimes global jams. But it seems evident to me that grouping people around issues and problems that they care about will make useful things happen much more quickly and efficiently than might otherwise be the case.

Wirearchy in action?


Posted in Collective Intelligence, Culture & Ideas, Economy and Business, Featured Essay, Guest Post, Networks, Open Models, P2P Business Models, P2P Collaboration, P2P Foundation | No Comments »

Wirearchy 3: Knowledge, power, and an historic shift in work and organizational design

Stacco Troncoso
13th June 2014

Welcome to the third in a series of essays exploring Wirearchy, “The power and effectiveness of people working together through connection and collaboration…taking responsibility individually and collectively rather than relying on traditional hierarchical status.”

In today’s essay Jon Husband, the creator of Wirearchy, talks about knowledge, power and the dissonance that is generated. Check back next Friday for the fourth installment.

“Social business is not dead. I’m learning that the most advanced organizations see social not as a technology movement but instead one of culture and philosophy

But the challenge is that social media strategists may actually be hampering its potential by not helping executives see the bigger picture beyond the technology.”

(Brian Solis)


Horizontal networking often creates dissonance in the vertical enterprise

The hierarchical arrangement of knowledge (and the related assignment of power and authority to roles expressed on organizational charts) did not foresee the arrival of social computing tools and the horizontal networking now shaping today’s workplace.

(Jon Husband)


“Knowledge is power,” the saying goes.

Setting aside issues such as what exactly knowledge is, and the many forms of manipulating information and knowledge in order to affect behaviour, voting outcomes, investment decisions and such, the foregoing phrase has been conventional wisdom ever since Sir Francis Bacon first noted “scientia potentia est” (knowledge is power) in 1651.

This realization and statement came into being a little more than 200 years after Johannes Gutenberg, upon having an idea visit him like “a ray of light”, came up with the invention known as movable type, which subsequently took form in the Gutenberg printing press.

The invention of the Gutenberg printing press is widely hailed as a critical turning point in the history of the world. It brought into being a new medium for creating, distributing and using information and knowledge. Due to its effects on how information and knowledge were recorded and published it created fundamental change in the way(s) people communicated ideas, information, knowledge and meaning to and amongst each other.

Given the speed at which many things operate, unfold and evolve these days, we tend to forget that the massive changes in the distribution of information and knowledge afforded by the printing press took several hundred years to have really major impacts. But clearly books, magazines and pamphlets were the ‘radical transparency’ of that era; previously, information and knowledge were jealously held, guarded and hoarded by, and circulated amongst, those who ruled over others.

Those days are long over. Since then we’ve lived through two phases of a massive world-transforming Industrial Revolution and the first phase of what has been called the Information Age and/or the Knowledge Age. We now seem to be entering the second phase of the Information / Knowledge Age, in which we really get hooked up … in this phase, interconnectedness and continuous flows of digitized (and thus indexable, searchable and easily manipulated) information characterize the environment. Flows of power are increasingly both top-down and bottom-up. A new, fourth source of power beyond monarchs, clergy and institutions (the traditional sources throughout human history) is rapidly coming into being .. public opinion built from the circulation of information in networks (as depicted in the following info-graphic by Michel Cartier).

[Info-graphic by Michel Cartier]

This new set of conditions is also beginning to impose a powerful new sociology onto the core assumptions that defined the use of information and knowledge in the second phase of the Industrial Revolution, wherein F.W. Taylor’s notions of efficiency and effectiveness grew into widespread dominance and provided exactly the right logic for organizing and optimizing many aspects of western societies that were expanding and growing rapidly.

Rapidly-growing knowledge about how to create and deliver many goods and services met equally rapidly expanding and growing needs. Mass production and mass assembly demanded mass and standardized efficiency in order to meet these needs. In addition, these conditions were met by the culmination of a remarkable spurt of innovation and development of scientific and management knowledge.

Along came more modern inventions .. easy credit, mass over-consumption, financial engineering, advertising, marketing, national and international travel and greatly-speeded-up trade .. as key examples of the significant developments of the age.

And most recently, another transformational medium, or set of media, has arrived, signalling as much impact or more upon human society as the Gutenberg printing press delivered: the Internet and then the Web. In the western world and large parts of the eastern world we now live and work amongst nearly ubiquitous hyperlinks, social networks, DIY publishing, ripping and re-mixing, content piracy, really radical transparency, and so on.

So …

Now, today, there’s a lot of chatter about the power of (people using) social media, the power of the kinds of possibilities that social media enables, bottom-up versus top-down dynamics, the collective wisdom of the organizational crowd, and various other related themes.

One of the last places to begin feeling the impact of the digital hyper-linked environment has been the workplace. Yes, access to knowledge through education and training has brought about huge improvements in productivity over the past 75 or so years. The evolution of progress doesn’t stop .. whether it involves materials, designs or ways of doing things. And much of the progress of the past 75 years has come from deeply embedding the tenets of Taylorism – efficiency, predictability, replicable quality, stability and control – into the means of building and delivering goods and services. Taylorism has been refined and then further refined, and then distilled into the essence of the ways things are. Today for most people it’s barely a conscious afterthought; it’s just the way things are done.

It is the dominant and still-firmly-in-place paradigm.

Networks initially create turbulence and dissonance

However, there’s ongoing dissonance between the Taylorism-derived methods .. the ones behind structured, highly-defined organizational forms and activities .. and the growing demands imposed by the world of hyper-linked flows, in which knowledge and meaning are built layer by layer, exchange by exchange, resulting in the ‘scaffolding’ of knowledge that feeds continuous improvement and innovation. These are the results that networked social computing increasingly enables.

A key reason why turbulence and dissonance are created is the way knowledge work has been (and still is) designed and the organizational structure that contains this work.  A primary tool in designing work and its organizational structures is called job evaluation (which is often accompanied by derivatives like accountability mapping and redundancy analysis).  The methods used today were created in the mid-1950s and haven’t changed much since then.  Their core assumptions are directly derived from, and have helped embed, Taylorism at the core of the modern organization.

The term job evaluation as used here does not mean assessing a person’s performance on the job – rather, it means the function carried out by HR departments (or consultants) that ‘measures’ or ‘weighs’ jobs, and assigns them to levels and pay grades based on job “weight” with respect to skill, effort, responsibility and working conditions (the legal criteria for assessing pay equity).

Taylorism-derived job analysis, evaluation and measurement are the tools (along with their underlying assumptions) that are used to create the skeletal architecture of hierarchical organizations, the pyramid we all know.

Dissonance in job requirements

The methodology of job evaluation is a very useful place to look at some of the key critical reasons for the ongoing dissonance and resistance to change we are seeing and will continue to experience.  The methodology of job evaluation situates jobs in the organizational hierarchy and creates pay grades, pay practices, thresholds for entry into bonus schemes and often is the main criterion for distinguishing between management and non-management jobs.

Fundamentally, job evaluation (work measurement, as noted above) relies on the core assumption that knowledge is structured and used hierarchically.  Thus the job requiring more cumulative and/or seniority-based knowledge (and the job requirements that demonstrate this) is—on paper—the job that deserves to be “higher up” in the organization, and accordingly is placed there on the organizational chart.

Redesigning work requirements

There are four or five major, well-known methodologies for measuring work.  They all use very similar factors (sometimes described a bit differently, with a couple more or fewer factors or sub-factors) and they all essentially measure the same thing.

These fundamental principles of work design need to be examined and re-conceived if the significant power of social computing is ever to be realized. As an example I will use the measurement factors used by the Hay Guide Chart Method, as I know them the best.  I have also worked in the past with the other major methodologies – the Aiken Plan, and the Towers Perrin and Watson Wyatt job evaluation methodologies (now Towers Watson) – and they are essentially all the same.

The Hay Method describes work as having three phases—input, throughput and output—and it employs three core factors to measure that input/throughput/output:

1. Know-how (input) – the knowledge and skills acquired through education and experience.

2. Problem-solving (throughput) – the application of that knowledge to problems encountered in the process of doing the work.

3. Accountability (output) – the level and type of responsibility a given job has for coordinating, managing or otherwise having an impact on an organization’s objectives.

There is a fourth factor called working conditions, but in many cases this is treated almost as a throwaway factor, especially when it comes to knowledge work.  It typically relates to physical factors such as lighting, air-conditioning, the presence of fumes or chemicals, outdoor exposure, dangerous physical conditions, unusual exogenous stress, etc.

As noted above, the core assumptions of these methods are derived from the philosophy of Taylorism (aka scientific management) and the divisions of labour and packaging of tasks that have underpinned the search for efficiency and scale since the beginning of the 20th century. On the face of it they seem eminently reasonable, and the Hay Method (and the related ones cited above) have since the mid-1950s largely served organizations quite well for segmenting and dividing labour, identifying necessary expertise and specialization and, in effect, designing one or another particular hierarchical pyramid. Today these methods are put into practice along with other key assumptions from that industrial era in which organizations grew and prospered – roughly the mid-1950s to 2000.

Changing assumptions about knowledge

These methods set out a fundamental, foundational assumption about the nature of knowledge. They assume that the acquisition, development and use of knowledge is relatively stable, that it evolves slowly and carefully, and that knowledge is based on an official, accepted taxonomy – a vertical arrangement of information and skills derived from the official institutions of our society. (Jane Jacobs has a fair bit to say about this in her last book, Dark Age Ahead, in Chapter 3, “Credentialing vs. Educating”, as do others like John Taylor Gatto and Alfie Kohn, and as does David Weinberger in Everything Is Miscellaneous: The Power of Digital Disorder.)

Above I have offered an example (paraphrasing the Hay Guide Chart Method’s semantic scales for measuring a job’s knowledge).  It describes a vertical arrangement of Know-How (knowledge) and the method creates, supports and sustains vertical reporting relationships.  The other two factors (problem-solving and accountability) derive from, and reinforce, the know-how factor. For example, the rules of job evaluation are such that you cannot have a problem-solving or accountability factor assessment that is of a higher order than the know-how slotting.

The definitions of the know-how (knowledge and skills) factor levels are paraphrased from the semantic definitions on the actual Hay Guide Chart.

A – Unschooled and unskilled
B – Some school, some skill
C – Basic high school, routine work
D – Vocational school, community college, trades, senior administrative
E – University graduation, senior trades, managerial (reads the books)
F – University plus 10 years experience, grad school (puts the books to use)
G – Deep knowledge and expertise (writes the books)
H – Ultimate expertise (has others write the books)
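The vertical logic of the A–H scale and the slotting rule described above (problem-solving and accountability can never be rated higher than know-how) can be captured in a minimal sketch. This is an illustration only: the letter levels paraphrase the text, while the numeric mapping and function names are assumptions, not the actual (proprietary) Hay Guide Chart point values.

```python
# A minimal sketch of the job-evaluation constraint described above:
# problem-solving and accountability may not be rated at a higher
# order than the know-how slotting. The numeric mapping is
# illustrative, not the real Hay Guide Chart point values.

KNOW_HOW_LEVELS = "ABCDEFGH"  # A = unschooled ... H = ultimate expertise

def level_index(level: str) -> int:
    """Map a letter level to its rank on the vertical scale."""
    return KNOW_HOW_LEVELS.index(level)

def evaluate_job(know_how: str, problem_solving: str, accountability: str) -> dict:
    """Slot a job, enforcing the know-how ceiling on the other two factors."""
    kh = level_index(know_how)
    ps = min(level_index(problem_solving), kh)  # capped at know-how
    ac = min(level_index(accountability), kh)   # capped at know-how
    return {"know_how": kh, "problem_solving": ps,
            "accountability": ac, "grade": kh}  # grade driven by know-how

job = evaluate_job(know_how="E", problem_solving="G", accountability="F")
# problem-solving and accountability are pulled down to the know-how level
```

The point of the sketch is that the other factors derive from, and are bounded by, the know-how slotting – which is exactly why the whole scheme encodes a vertical arrangement of knowledge.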

These methods did not envision or foresee the Web, hyperlinks and the exchange of information that has spawned and carries the bit-by-bit layering and assembly of knowledge, and the peer-to-peer negotiation of results and responsibilities, that we are seeing emerge with greater frequency in this new networked world.

Multiple ways to structure knowledge

We are beginning to understand that the main way we have structured knowledge is only one way, and that this way is captive to core assumptions about the ordering and classification of information as created by some of the great thinkers, organizers and classifiers of information and knowledge who helped build up our growing understanding of the world around us (Linnaeus, Darwin, Dewey, etc.).

What we have developed into solid and maybe seemingly unassailable beliefs about knowledge are built upon the principles we have inherited from a time when human progress benefited greatly from regular and related discoveries about the world around us, both natural and man-made.

For example, it’s clear that there was a proliferation of written and printed material from the 1600s through the 1900s, containing amongst other things much codification of the knowledge we use today in a wide range of domains and disciplines. More and more (too much?) of this knowledge is accessible very rapidly on today’s Web in ‘fragments of one’ (a nod to Dave Snowden’s assertion that the brain works most effectively with fragments of information), connected by search engines, hyperlinks and a range of easily used publishing platforms.

So … now let’s look at how information is shared and exchanged in order to build and use knowledge amongst networked individuals or groups. The use of knowledge in a networked context is very often horizontal and sideways, based on accessibility and collaboration – much more so than the (official) use of knowledge in formally structured hierarchies.

Linked knowledge

What we know today is that people with vastly different types and forms of knowledge can be or are linked together for a wide (and potentially limitless) range of purposes (though clearly we are learning quickly about the limits to cognitive attention as lessons in social cognition surplus are offered up to us almost every day).

In networks-of-purpose, addressing Purpose A connects individuals with Skills and Knowledge Set B, Interests and Knowledge Set C, and Connections and Knowledge Set D (and of course the second-order concentric ring of connections each of them brings to any given network in which any of them participates). Each of them subscribes to different sets of feeds and has access to different sources and flows of information than the others, but can forward to everyone in the on-purpose network anything that comes across their attention that may be pertinent to the purpose at hand.
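The fan-out dynamic just described – members following different feeds but forwarding anything pertinent to everyone in the purpose network – can be modelled as a toy publish-and-forward structure. All class and member names here are illustrative assumptions, not anything from an existing system.

```python
# A toy model of the "network-of-purpose" dynamic described above:
# each member follows different feeds, and forwards anything pertinent
# to every other member of the purpose network.

from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    feeds: set                           # sources only this member follows
    inbox: list = field(default_factory=list)

class PurposeNetwork:
    def __init__(self, purpose: str):
        self.purpose = purpose
        self.members = []

    def join(self, member: Member) -> None:
        self.members.append(member)

    def forward(self, sender: Member, item: str) -> None:
        """Sender routes a pertinent item to every other member."""
        for m in self.members:
            if m is not sender:
                m.inbox.append(item)

net = PurposeNetwork("Purpose A")
b = Member("B", {"skills-feed"})
c = Member("C", {"interest-feed"})
net.join(b)
net.join(c)
net.forward(b, "pertinent article from skills-feed")
# c now holds an item from a feed only b subscribes to
```

The sketch makes the contrast with the vertical model concrete: no member’s access to knowledge depends on their position in a reporting chain, only on membership in the purpose network.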

The dynamics of attention, flow and circulation of pertinent and relevant information are now clearly feeling the impact of the power unleashed by the integration of social computing tools, services and capabilities. These are rapidly becoming firmly ensconced in the activities of knowledge work, in the guise of platforms for collaboration – the domain increasingly called Enterprise 2.0 and/or Social Business. And, as many of us know, these monikers are increasingly being called into question as insufficient, or as addressing only part of the overall story.

As the use of these tools and capabilities spreads, in a networked environment it’s safe to say that problem-solving or accountability is very often dealt with based on negotiation of ‘who knows what’ or ‘how to get something done’. Usually, or often, a call (Tweet, blog post, Skype chat, email) is put out to find and access some additional skill or knowledge that is required, and accountability is negotiated based on the constraints of the purposeful activity at hand.

Any of us familiar with medium to large sized organizations can begin to see, I believe, that the fundamental Taylorist assumption that knowledge is structured vertically and put to use in silo’d pyramidal structures and cascaded down to the execution level must be straining at the seams in the increasingly highly-connected social networks in which many people work today.

Social computing – first dissonance, then participative flow ?

Thus, it seems clear that the introduction of wikis, blogs and RSS feeds (and now micro-blogging a la Twitter) for project work, for analysis and planning, for research and development and for other knowledge-intensive work is likely to introduce some reasonable levels of dissonance into the common and accepted organizational dynamics (or “organizational sociology”) of formal, traditionally structured organizations.

This is an area where David Weinberger’s phrase from the Cluetrain Manifesto, “hyperlinks subvert hierarchy” (or expose it, which may be better), is likely to have real impact.

Take Weinberger’s additional concept of first-, second- and third-order organization of emergent knowledge (outlined in his Everything Is Miscellaneous), combine it with hyperlinks and spaces designed for interaction based on core usability principles, and you have a potent recipe for looking at the design of socially-networked work groups.

In some senses, we’ve been here before … social interaction with other knowledge workers is the foundation of (for example) Fred & Merrilyn Emery’s theory and method of Participative Work Design and is at the heart of socio-technical systems (STS) methodologies for organizational development and change.  These theories and methods by and large reflect “getting the whole system into the room”.

Of course, with the arrival of the Internet and the advent of the interactive participative environment that is generally called Web 2.0, “the room” is larger and “the whole system” increasingly does indeed mean everyone, or at least the whole of the organizational crowd that makes up that organization.

Reams have been written about the Internet’s potential to democratize the access to and use of information. It does seem clear that the use of the Web, collaboration platforms, software-as-a-service, and cloud-based social computing by organizations that see information, knowledge and responsive innovation as mission-critical are core factors enabling the growth of network-based ways of creating pertinent and useful just-in-time knowledge and putting it to work.

Knowledge arranged and used top-down is disrupted

This causes dissonance and ambiguity because performance objectives, job assignments, compensation arrangements and bonus schemes are almost always predicated on causality derived from the vertical arrangement of knowledge and its use in planned and structured initiatives.

As more and more knowledge work is carried out by people communicating and exchanging information using hyperlinks in social networks (where knowledge lives) and routing it to where it is needed at any point in time, vertical arrangements of knowledge are disrupted, if not subverted.

Call for organizational (re)development

Based on the notions explored above and in a wide range of previous writings about organizations, it seems there is a rapidly-growing need for the return to prominence of the domain of Organisational Development (OD). With greater fanfare and less emphasis on the core principles, key parts of the framework known as organisational development have been resuscitated, dressed up and called “Social Business”. As social business initiatives continue to proliferate, I cannot see how the latent dissonance described earlier in this essay will be avoided.

The turbulence and discipline that characterize the power shifts going on in today’s interconnected knowledge workplace will have to be addressed by using new design principles for knowledge work.

Many parts of knowledge work have been routinized and standardized with the ongoing marriages of business processes and integrated enterprise information systems. What has not changed much yet is the adaptation of structures and culture to permit easily building flows of information into pertinent, useful and just-in-time knowledge, or fanning out problem-solving and accountability into networks of connected workers.

I think many executives and senior managers sense massive challenges to the power and status relationships (the core of yet-to-change organizational structure) that exist in most of today’s larger organizations.  This sense of a growing challenge is behind many senior managers’ and executives’ struggles to understand or become enthusiastic about the possibilities of Enterprise 2.0.

There is no Guide Chart yet about networked know-how, problem-solving or accountability.

Never mind that there is much rhetoric about the need for leadership at all levels, or about the empowerment and democratization of workers in organization X or Y.  Performance management, grade levels and compensation have yet to recognize how work gets done in networked environments and in a networked world.

I’ll close with a dare: if any of you have experience with performance management programs, with assigning someone to a new and different grade level, or with making changes to levels of pay or bonus schemes, you know that what I’ve been setting out above can easily become a real and potentially very explosive minefield.

And yet, the way(s) we go about these core issues of work design are almost certainly going to have to undergo significant revision, if not complete re-invention.


Posted in Collective Intelligence, Culture & Ideas, Economy and Business, Featured Content, Featured Essay, Guest Post, Networks, Open Content, Open Innovation, Open Models, P2P Business Models, P2P Education, P2P Foundation | No Comments »

Chattanooga gigabit internet enables local economy, shames internet behemoths

photo of Sepp Hasslberger

Sepp Hasslberger
11th June 2014

Al Jazeera reports on the City of Chattanooga’s super-high-speed internet, which enables all kinds of new economic activity. It was the local public electricity company that laid fiber-optic cables to transmit customers’ meter readings and, at the same time, provide high-speed (one gigabit) internet connections to homes and businesses in the city.


The incumbent telecoms are trying to eliminate the competition. Comcast has sued EPB twice, and state legislators are apparently thinking along the same lines – that the municipal electricity company has no business competing with telecoms as an internet access provider.

The article in Al Jazeera America: As Internet behemoths rise, Chattanooga highlights a different path

Some excerpts:

Chattanooga’s Internet, named the Gig, has won the small, postindustrial city a host of accolades and attention from the tech industry, entrepreneurs and the press since it was started as part of a project to modernize the area’s electric grid by local power company EPB in 2009.

Politicians have credited the Gig with creating upward of 1,000 jobs in Chattanooga, and some have even wondered if Chattanooga could be the country’s next Silicon Valley.

But perhaps the biggest challenge to the future of municipally owned broadband access is getting the idea past legislators and corporations used to the status quo.

Comcast has attempted to block EPB’s expansion twice, suing the company by saying EPB illegally subsidized its Internet service with money obtained through its electricity service. Those suits have been dismissed, but EPB is still facing an uphill battle from state lawmakers in its quest to expand service to customers just outside the Chattanooga city limits.

According to Michel Bauwens, commenting on a post in the P2P Facebook group, Chattanooga’s gigabit internet shows that “public provision (i.e. all of us) is much more efficient and cheaper than private provision of essential utilities” and he adds “I see it as a good example of the partner state, in which a public investment in civic infrastructure enables the rest of us to do our autonomous work better”.


Posted in Economy and Business, Networks, P2P Infrastructures | No Comments »

Essay of the day: Snowden, the Terminator, and Us

photo of Stacco Troncoso

Stacco Troncoso
8th June 2014

A very recent, stirring essay written by Jérémie Zimmermann, co-founder of La Quadrature du Net, and originally published in Mediapart.

“Fortunately, Edward Snowden also showed us a pathway out. Governments can maybe be made accountable, and mass surveillance can surely be evaded and made much more costly. By moving away from technology that controls us, we can use, promote and develop technology that makes us more free. It is a long path, requiring effort, to break away from the habits and the blind trust we placed in the Machine, and requiring an appropriation of technology by everyone. Through the use of free software, decentralized architectures and end-to-end encryption, we can – probably – take back control of the Machine.”

One year ago, Edward Snowden’s revelations made us learn and understand how our relationship to technology has changed forever, and how the trust we place in machines shall never be the same. Edward Snowden also shows us a path for taking back control of the machines, an urgent task that no one today can ignore. By Jérémie Zimmermann, co-founder of La Quadrature du Net.

© Caption by Terminator Studies

One year ago to the day, a courageous young man named Edward Snowden sacrificed most of his life and his freedoms to show us the crude reality of the world we are living in. His ongoing revelations have made us learn and understand how our relationship to technology has changed forever, and how the trust we place in machines shall never be the same. Edward Snowden also shows us a path for taking back control of the machines, an urgent task that no one today can ignore.

We already live in the era of the Cyborg. Our humanities are practically indistinguishable from the Machine. Functions of our bodies such as communicating, remembering and recognizing each other, our personal and shared memories, and most of our works are now indivisible from the functions of the machines.

Computers, phones and servers are all interconnected through software and communication networks. This global interconnected Machine is increasingly merging with our global interconnected humanity – soon on faces, wrists and under the skin – and so far most of us have trusted it with almost everything.

Yet in the era of the Cyborg, what we see thanks to Ed Snowden is that this global Machine has been turned as a whole against us. It has been turned into a tool for global surveillance and for the control of individuals, at the cost of massive violations of our fundamental rights. With many abuses already demonstrated, the Machine bears an immense, horrendous potential for abuse and repression, from political to economic espionage. Any political movement, any revolution, any idea could potentially be crushed in a snap.

The Machine as a whole has been repurposed. From obeying us, its users, its owners, it has been reprogrammed to obey its real masters: an ill-defined alliance of some of the biggest companies in the world, such as Google, Apple, Facebook and Microsoft; of unaccountable spying agencies such as the NSA, GCHQ or the DGSE; and of thousands of their private or public partners (among them a myriad of private contractors and at least 950,000 US citizens holding Top Secret clearance).

Many of us still find it more comfortable to ignore the truth than to change our habits. Perhaps the truth is so violent and scary that it becomes too difficult to admit. Perhaps the gap between reality and comfortable illusions is too big.

Still, we have an immense responsibility to ask ourselves questions that will shape the future of our societies, our relationship to power, as well as our relationships between individuals. Where is the boundary between our humanities and the Machine? Did we consciously accept it as it is? How can we take back control of that Machine, which is now part of ourselves?

What is at stake is the very definition of our humanities. For massive surveillance implies the violation and potential annihilation of our intimacies – those spaces where we decide, in full trust, alone or with others, to be truly ourselves, to experiment with ourselves, to develop new ideas and theories, to write, sing and create. In these spaces we develop our identities, our very definitions of who we are…

Fortunately, Edward Snowden also showed us a pathway out. Governments can maybe be made accountable, and mass surveillance can surely be evaded and made much more costly. By moving away from technology that controls us, we can use, promote and develop technology that makes us more free. It is a long path, requiring effort, to break away from the habits and the blind trust we placed in the Machine, and requiring an appropriation of technology by everyone. Through the use of free software, decentralized architectures and end-to-end encryption, we can – probably – take back control of the Machine.

It is our duty as a civilisation and as individuals. We must fight this Machine of oppression by all means, before it is too late, in order to reconquer and reclaim our humanity.


Posted in Activism, Anti-P2P, Cognitive Capitalism, Copyright/IP, Culture & Ideas, Empire, Featured Content, Featured Essay, Networks, Politics | 1 Comment »

Too small to have internet? Tiny German village builds own broadband service

photo of Sepp Hasslberger

Sepp Hasslberger
7th June 2014

Löwenstedt, a German village on the border with Denmark and close to the North Sea, had internet at appalling speeds and no prospect of getting anything better. The telecom companies love the revenue from connecting whole city populations, but they hate spending money to bring decent high-speed internet to the countryside. What to do?


The villagers decided to try and build their own network but it was going to be expensive.

First of all, they worked out how much they’d need to raise and came to a figure of €2.5 million ($3.4 million). This meant at least 68 percent of Löwenstedt residents had to sign up to the scheme for it to be viable.

Each person had to pay at least €1,000 into the company: €100 to become a shareholder and another €900 as a loan. None of the villagers were forced to join the scheme, but in the end 94 percent put down cash.

The Citizens’ Broadband Network managed to raise the necessary sum and negotiated a long term bank loan, which they expect to be able to pay off in 30 years’ time.
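The arithmetic of the scheme can be sketched in a few lines. The €2.5 million target, the 68 percent viability threshold and the €100 + €900 per-person split come from the article above; the resident count used in the example is a hypothetical placeholder, and the gap between citizen contributions and the target is what the bank loan would have to cover.

```python
# Back-of-envelope sketch of the Löwenstedt financing scheme described
# above. Thresholds and per-person amounts come from the article;
# the resident count below is a hypothetical placeholder.

TARGET_EUR = 2_500_000          # total the villagers needed to raise
MIN_UPTAKE = 0.68               # minimum share of residents for viability
SHARE_EUR, LOAN_EUR = 100, 900  # each participant: €100 share + €900 loan

def is_viable(uptake: float) -> bool:
    """The scheme only goes ahead if enough residents sign up."""
    return uptake >= MIN_UPTAKE

def citizen_capital(residents: int, uptake: float) -> int:
    """Money raised directly from residents at the €1,000 minimum."""
    participants = round(residents * uptake)
    return participants * (SHARE_EUR + LOAN_EUR)

def bank_loan_needed(residents: int, uptake: float) -> int:
    """Gap between the target and citizen contributions, covered by a bank loan."""
    return max(0, TARGET_EUR - citizen_capital(residents, uptake))

# With a hypothetical 650 residents and the reported 94% uptake:
raised = citizen_capital(650, 0.94)   # 611 participants at €1,000 each
gap = bank_loan_needed(650, 0.94)     # remainder to be borrowed
```

The sketch shows why the long-term bank loan mattered: at the €1,000 minimum, citizen contributions alone only cover a fraction of the target, so the equity and loans from residents serve mainly to demonstrate commitment and secure the larger financing.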

Quoted from RT: http://rt.com/news/163412-german-village-own-broadband/

“It’s nothing new to people here to do things for themselves. This is about preserving a culture and a way of life. These villages will not survive without a broadband connection,” said Sabine Birkigt, one of five local women who run Bürger Breitband Netz (Citizens’ Broadband Network), the company the villagers set up.

Eventually, the goal is to extend the network to 59 villages in the area that have similar problems, but so far, the uptake is slow as each village has to come up with the basic finances needed to join the network.

Another article on this:

German villagers build own broadband network


Posted in Networks, P2P Development | No Comments »