Netsukuku and the construction of a p2p cloud

Netsukuku is a new p2p-based routing protocol that could – for the purpose of linking users’ computers in a p2p cloud – replace the IP-number-based addressing and routing currently used to link servers and users on the internet.

The Italian language edition of Wired Magazine featured Andrea Lo Pumo and the Sicilian hacker group that gave birth to the idea. A summary translation of the article is posted here:

Netsukuku’s fractal address system for a p2p cloud

The vision of Andrea and his friends at freaknet is a broadband wireless internet, created and controlled directly by users without the need for a telco operator. The only conditions for this to work are that the software must be up and running and that the wireless devices are close enough to each other to connect. At that point, one of those ‘bubbles’ that Andrea envisions will form automatically. A Netsukuku bubble is therefore a small, wireless and perfectly functional local version of the internet. It is sufficient for one of the nodes that form the bubble to be connected to the internet for everyone to be in communication with the larger net.

In Netsukuku there is no difference between private and public networks, because whenever the software is active, computers are automatically connected with their peers. The bubbles extend and connect with others. In theory, a network of this kind cannot be controlled or destroyed, because it is completely decentralized, anonymous and distributed. Everything is decentralized and works even with devices of moderate computing power and memory.
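As a rough illustration of this bubble behavior – a toy model of my own, not the actual Netsukuku protocol – consider nodes as points with a radio range: nodes that can hear each other link up into one bubble, and a single internet uplink serves the whole bubble:

```python
# Toy model of Netsukuku-style "bubbles" (NOT the real protocol):
# nodes within radio range link up; linked nodes form one bubble,
# and one internet uplink makes the whole bubble reachable.
import math

RADIO_RANGE = 10.0  # assumed maximum wifi link distance (arbitrary units)

class Node:
    def __init__(self, name, x, y, has_uplink=False):
        self.name, self.x, self.y = name, x, y
        self.has_uplink = has_uplink
        self.parent = self  # union-find pointer: each node starts alone

def find(n):
    while n.parent is not n:
        n.parent = n.parent.parent  # path compression
        n = n.parent
    return n

def union(a, b):
    ra, rb = find(a), find(b)
    if ra is not rb:
        rb.parent = ra

def form_bubbles(nodes):
    """Link every pair of nodes within RADIO_RANGE of each other."""
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            if math.hypot(a.x - b.x, a.y - b.y) <= RADIO_RANGE:
                union(a, b)

def bubble_has_internet(node, nodes):
    """The whole bubble is online if any member has an uplink."""
    root = find(node)
    return any(n.has_uplink and find(n) is root for n in nodes)

nodes = [
    Node("a", 0, 0), Node("b", 5, 0), Node("c", 9, 4, has_uplink=True),
    Node("d", 50, 50),  # too far away: forms its own separate bubble
]
form_bubbles(nodes)
print(bubble_has_internet(nodes[0], nodes))  # True: a reaches internet via c
print(bubble_has_internet(nodes[3], nodes))  # False: d is isolated
```

The point of the toy model is the claim in the article: connectivity is purely local and automatic, yet one uplink anywhere in a bubble serves every member.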

Stephen Downes, in response to a PEW Institute survey, said something that links in with this. The PEW survey question (actually a choice of two statements) was:

1) The hot gadgets and applications that will capture the imagination of users in 2020 are pretty evident today and will not take many of today’s savvy innovators by surprise.

2) The hot gadgets and applications that will capture the imagination of users in 2020 will often come “out of the blue” and not have been anticipated by many of today’s savviest innovators.
Stephen Downes’ reply:
“I choose to see personal web server technology (Opera Unite, Firefox POW, etc) as a breakthrough technology, so people can put their own data into the cloud without paying Flickr or whomever. It is this sort of ‘personal technology’ I believe will characterize (what we now call) web 3.0 (and not 3D, or semantic web, etc.). So my dilemma is that, while these technologies are pretty evident today, it is not clear that the people I suspect Pew counts as ‘the savviest innovators’ are looking at them. So I pick ‘out of the blue’ even though (I think) I can see them coming from a mile away.”
– Stephen Downes, National Research Council, Canada

Marco Fioretti then commented on this in a related email thread:
What Downes says above is the same thing I explained here last year in the whole “p2p email” thread which starts here:
but you should read it all. Anyway, here’s a summary:

– cloud services as offered today (including Gmail, Google Wave/Buzz, Twitter, Facebook, YouTube…) are bad (engineering-wise) and the opposite of P2P/empowerment/democracy, because they (re)introduce – and even make look trendy and cool – centralized points of technical failure and political/economical control. See Gmail outages, Google shutting down music blogs a few days ago, Berlusconi’s Mediaset demanding that YouTube remove all clips of their reality shows, Iran tracking or blocking dissidents via Twitter…

– what Downes calls “personal technology” is already here TODAY. You could either invent futuristic solutions and make the huge effort needed to realize them asap (which is where M. Fawzi and I diverged in that thread), OR get 99% of that today, i.e. solve almost completely the problem described in the paragraph above, if you have:

1) a few euros per month

2) some ICT skills, OR somebody creating easy interfaces to self-manage all the software pieces that ALREADY exist

3) interest in doing this, that is, in studying the software, or alternatively in creating enough demand for the interfaces of #2

My take on this (also from the same email discussion):

I agree with Marco that all that’s needed to bring the future closer is to find ways to implement the technologies that are already at our disposal. Some fancy footwork (keyboarding 😉) might be required to make everything work together to form the cloud that things will be sitting on, and to connect up all those user-controlled computers. There is a light (fractal-based?) address system in Netsukuku, developed by Sicilian hacker and newly graduated mathematician Andrea Lo Pumo, that would be useful for the connection part.
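To give a sense of why a fractal (hierarchical) address scheme scales, here is a hedged sketch – my own illustration of the general idea, not Netsukuku’s actual data structures. With addresses split into levels of groups (an IPv4-like a.b.c.d split is assumed here), a node only needs routes to its siblings at each level, so routing state grows with the number of levels rather than with the total number of nodes:

```python
# Sketch of hierarchical ("fractal") addressing, in the spirit of a
# grouped address space -- an illustration, not Netsukuku's real code.
GROUP_SIZE = 256  # assumed nodes/groups per level (IPv4-like split)
LEVELS = 4        # an a.b.c.d-style address

def routing_table_size(levels=LEVELS, group_size=GROUP_SIZE):
    """A node keeps one route per sibling at each level,
    instead of one route per node in the whole network."""
    return levels * (group_size - 1)

def shared_prefix(addr_a, addr_b):
    """The part of two addresses that matches: routing only has to
    care about the first level at which they diverge."""
    prefix = []
    for a, b in zip(addr_a, addr_b):
        if a != b:
            break
        prefix.append(a)
    return prefix

total_nodes = GROUP_SIZE ** LEVELS      # ~4.3 billion possible addresses
flat_table = total_nodes - 1            # naive: one route per other node
hier_table = routing_table_size()       # hierarchical: 4 * 255 = 1020
print(flat_table, hier_table)
print(shared_prefix((10, 4, 7, 1), (10, 4, 9, 2)))  # diverge at level 3
```

The contrast – roughly a thousand routes instead of billions – is what makes such a scheme plausible on the devices of moderate computing power and memory that the Wired summary mentions.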

Wired Magazine’s Italian issue reported on this one, and I might do a summary translation for either the p2p blog or the ning group. (Now on line as Netsukuku’s fractal address system for a p2p cloud)

Some work needs to be done to improve and strengthen the local pipes, the connectivity. Examples of this are the – for now – thinly distributed local wifi networks, things like the German Freifunk and the Roman Ninux. There are many others, but for now they are separate and very experimental bubbles. Netsukuku offers the possibility of virtually piping the connection through the conventional internet when a node isn’t within reach of another’s wifi connection, and presumably also of bridging the long-range connections between single wifi bubbles that may be too far apart for wifi to work.
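The bridging idea can be sketched as a routing choice between two kinds of links – ordinary wifi hops and internet “tunnel” hops that stitch distant bubbles together. This is my own illustration of the idea described above, not Netsukuku’s actual mechanism; the preference for pure-wifi paths is an assumption:

```python
# Sketch of bridging separate wifi bubbles through internet "tunnel"
# links (an illustration of the idea, not Netsukuku's mechanism).
from collections import deque

def find_path(links, src, dst, allow_tunnel):
    """Breadth-first search over the link graph; internet tunnels
    are only usable when allow_tunnel is True."""
    graph = {}
    for a, b, kind in links:
        if kind == "wifi" or allow_tunnel:
            graph.setdefault(a, []).append(b)
            graph.setdefault(b, []).append(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def route(links, src, dst):
    """Prefer a pure-wifi path; fall back to tunnelling over the net."""
    return find_path(links, src, dst, False) or find_path(links, src, dst, True)

links = [
    ("a", "b", "wifi"), ("b", "c", "wifi"),   # bubble 1
    ("x", "y", "wifi"),                       # bubble 2, far away
    ("c", "x", "tunnel"),                     # long-range internet bridge
]
print(route(links, "a", "c"))  # ['a', 'b', 'c'] -- wifi only
print(route(links, "a", "y"))  # ['a', 'b', 'c', 'x', 'y'] -- via the tunnel
```

In this sketch the conventional internet is just another link type, used only when no chain of wifi hops exists – which is the role the article assigns to the virtual pipes.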

With rudimentary publishing possibilities provided by programs such as Opera Unite and Firefox POW, as mentioned by Downes, we have a good beginning, as people can put their own content on the cloud. This may be their own creations, or material they have collected and wish to re-publish.
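The gist of such personal publishing can be shown with a minimal web server built from Python’s standard library – the same spirit as Opera Unite or Firefox POW, though of course not their actual code, and the content paths here are invented for the example:

```python
# A minimal "personal web server" sketch: serving one's own content
# directly from one's own machine, Opera-Unite-style (illustrative only).
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

MY_CONTENT = {  # content this node chooses to publish (example data)
    "/": b"Hello from my own node in the cloud",
    "/photos": b"(a re-published collection would go here)",
}

class PersonalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = MY_CONTENT.get(self.path)
        if body is None:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 asks the OS for any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), PersonalHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(f"serving on port {server.server_port}")
```

A dozen lines suffice to publish from one’s own machine; what the real projects add on top is discoverability and reachability, which is exactly where a Netsukuku-style address system would come in.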

What is not quite clear to me yet is what application can make sure that the data stays available in the cloud, i.e. what application will cover the times when the individual computer may be offline. For that purpose, published data will have to be redundantly stored on other computers. Something similar to bittorrent comes to mind, where anyone who downloads (reads) content stores that content for some time on their own computer and re-publishes it for others. They should also have the choice to make that data part of their own permanent contribution to the cloud. This way, data can propagate and become more available the more popular it is. In the case of data that isn’t very popular, there is always the (probably intermittent) availability through the original publisher.
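The replicate-on-read idea just described can be sketched in a few lines – a toy model under my own assumptions (every reader caches a full copy, and lookup simply scans all online nodes), not any existing protocol:

```python
# Sketch of the bittorrent-like redundancy idea: readers cache what they
# download and re-serve it while the original publisher is offline.
class Node:
    def __init__(self, name):
        self.name = name
        self.online = True
        self.store = {}   # key -> content (published or cached)

    def publish(self, key, content):
        self.store[key] = content

class Cloud:
    def __init__(self, nodes):
        self.nodes = nodes

    def fetch(self, reader, key):
        """Find the content on any online node; the reader keeps a copy,
        becoming one more source for subsequent readers."""
        for node in self.nodes:
            if node.online and key in node.store:
                reader.store[key] = node.store[key]  # replicate on read
                return node.store[key]
        return None  # nobody online holds it

alice, bob, carol = Node("alice"), Node("bob"), Node("carol")
cloud = Cloud([alice, bob, carol])

alice.publish("essay", "my essay text")
cloud.fetch(bob, "essay")           # bob reads it and now holds a copy
alice.online = False                # the original publisher goes offline...
print(cloud.fetch(carol, "essay"))  # ...but carol still gets it from bob
```

Popular data spreads to many caches and stays highly available; unpopular data falls back to the original publisher’s intermittent presence – exactly the trade-off described above.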

This is the idea in broad outline.
