P2P Foundation's blog

Researching, documenting and promoting peer to peer practices




Using the P2P-based cyberinfrastructure as a weapon against global warming?


Michel Bauwens
2nd December 2007

The general internet infrastructure that we love because it enables peer-to-peer dynamics is itself part of the problem of global warming. How can we preserve this important infrastructure under the pressure of its environmental cost?

This issue is causing some p2p advocates sleepless nights, so it was with great joy that I discovered the creative proposals of Canadian ‘green broadband’ activist Bill St. Arnaud. He turns the problem around: rather than a problem, the cyber-infrastructure is actually one of our greatest tools in combating global warming.

Bill first states the problem:

“There are various estimates that ICT hardware in terms of computers, routers and switches consumes upwards of 9% of the energy production in North America. The first challenge for the ICT research community should be, at least, to reduce this carbon footprint.”

He continues:

“Large, centralized and extreme high efficiency ICT equipment using renewable sources of energy such as wind and solar power may be the future physical architecture of the Internet and Cyber-infrastructure. But no one wants to go back to the bad old days of large centralized mainframes and carrier networks.”

The general answer is to go towards ‘virtualization’ and a ‘Green Grid’:

“Virtualization allows multiple independently managed networks and virtual organizations to exist on a common, very high energy efficiency network substrate and computational fabric. So all the modern advantages of intelligence and control at the edge can be maintained, and new applications and services such as P2P, Web 2.0, etc. can be deployed by users without getting permission from the owners of the underlying substrate.”

The second part of the answer is to induce the right motivation in energy and internet users, for which he proposes not negative measures such as carbon taxes, but rewards, in the form of ‘bits for carbon’ trading.

He explains:

“To date the most obvious approaches to mitigating global warming are to impose carbon taxes or to implement various forms of carbon trading, such as cap and trade or carbon offsetting.

Carbon taxes, however, even if revenue neutral, are going to meet with stiff political resistance. Rather than imposing taxes, can we instead provide carbon “rewards”, where consumers and businesses are rewarded for reducing their carbon footprint rather than being penalized if they don’t?

To date carbon trading has been associated with various government mandated cap and trade systems or unregulated carbon offset trading. In cap and trade systems large carbon emitters are allocated carbon emission targets and can only exceed these targets by purchasing carbon permits from organizations who produce far less carbon. In offset trading there are a number of independent companies that audit and trade carbon offsets of individuals and businesses for high carbon emission activities such as air travel offset against telecommuting and other energy saving practices.

However these markets are very immature and relatively small.

Instead of trading carbon emission for carbon reduction, perhaps a better scheme would be to trade bits and bandwidth which have an extremely small footprint against activities that have a heavy carbon footprint.”
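The bits-for-carbon idea can be sketched as a simple exchange rate between avoided emissions and bandwidth credit. The figures below (commute emissions, reward rate) are purely illustrative assumptions, not values from St. Arnaud's proposal:

```python
# Illustrative sketch of a "bits for carbon" reward: low-footprint
# bandwidth is credited against avoided high-footprint activity.
# All rates below are assumed for illustration only.

AVOIDED_KG_PER_COMMUTE = 9.0  # assumed: ~40 km round trip at ~0.22 kg CO2/km
REWARD_GB_PER_KG = 2.0        # assumed exchange rate: 2 GB credit per kg avoided

def bandwidth_reward_gb(telecommute_days: int) -> float:
    """Bonus bandwidth (GB) earned by replacing commutes with telework."""
    return telecommute_days * AVOIDED_KG_PER_COMMUTE * REWARD_GB_PER_KG

# Ten telework days in a month would earn 10 * 9.0 * 2.0 = 180 GB of credit.
```

Under such a scheme the reward flows through an asset (bandwidth) that the network operator can provision at near-zero marginal carbon cost, which is the core of the proposal.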

There is a lot more at the Green Broadband site, such as a general explanation and his proposal to create decentralized ‘follow the sun’ and ‘follow the wind’ computing grids.

Because of the key importance of this line of thinking, I will keep track of such proposals in our Ecology section, which already has various materials about peer-to-peer energy grids.


6 Responses to “Using the P2P-based cyberinfrastructure as a weapon against global warming?”

  1. Anna Says:

    What I would really like to know is how the various broadband options compare in terms of carbon footprint. What are the footprints of DSL (I know…the dinosaur) vs. Fiber vs. wireless towers or WISPs?

  2. Michel Bauwens Says:

    Dear Anna,

    You may want to write to Bill St. Arnaud, an expert on green computing, at
    bill.st.arnaud at canarie.ca

  3. Bill St. Arnaud Says:

    There are a lot of variables in analyzing the carbon footprint of fiber versus copper versus wireless or DSL. From an operational perspective, “home run” fiber has the smallest carbon footprint, but the carbon footprint of installing and deploying the fiber or wireless towers can be significantly greater than that of already-installed copper. The bottom line, though, is that the difference between the carbon footprints of fiber, wireless, and DSL is minuscule in the great scheme of things. It’s the applications that fiber enables, which may allow consumers to reduce their carbon footprint, that are most significant.

    See free-fiber-to-the-home.blogspot.com/ for more details


  4. Craig Hubley Says:


    How the “various broadband options compare in terms of carbon footprint” is the wrong question. The right question is how the total ecological footprints of all solutions, end to end, compare across their entire lifecycle.

    In my opinion BPL, especially in the home, is the clear winner. Why? Because such comparisons would have to consider a lot more factors than the draw on the router:

    - Impact of extracting the required natural resources on ecosystems, including the roads required to reach places where rare earths (coltan etc.) are extracted, the biodiversity impact of that extraction, etc.

    - Manufacturing processes and their waste output – some processes, like silicon fabrication, are notoriously dirty, which is a factor for chips and silicon photovoltaics – and gallium arsenide etc. are worse

    - Carbon and other impacts of deploying and maintaining the technology. Wireless may be better in these respects if only the central tower is considered, but if you have to deploy a lot of masts at receiving locations, that must also be counted. Sending trucks and humans out to fix things is very ecologically damaging, so if someone can fix something or replace a part themselves with minimal instruction, piggybacking the trip to get the part on other trips, that’s better for everyone.

    - Fibre optics take less power to run, but they can’t carry power on the same wires, unlike ethernet (802.3at) or AC wiring (P1901/G.9960/BPL). Once you get past the transformer, this is the decisive factor, as technologies that carry power and data on the same wires can regulate power more easily and radically reduce the overall draw. Fibre optics plus wall warts, in other words, will waste a lot more power than powered ethernet over copper, so if you are under 1000 Mbps, the latter will win any honest footprint test in the home. Not necessarily between the poles, though, where it may be better to use fibre universally for various reasons.

    - Fibre within the home or any need to run cat5 or cat6 or cat6e between rooms adds an impact of expert contractors visiting a home, which implies again more emissions and waste and expense. Must be added in, making G.9960 a big winner within the home.

    - E-waste disposal of the actual devices – probably a small win for fibre in the long run, but not if the devices turn over fast.

    - Since all these devices require power, an honest assessment must include any and all maintenance of the power grid, multiplied by the percentage of power that is required by consumer electronic and data devices (phones, TVs, computers, DVD players, MP3 players, stereo speakers, etc.). These expenses are minimized by real-time line monitoring and visible power metering, which are part of BPL smart metering but not part of other, non-power-integrated broadband solutions. Having a power monitoring system that can put unused devices into hibernation modes automatically and ask permission to turn them off, detect shorts and line losses in the home and report them to be dealt with, and prevent fires or shut things down during a fire (or potentially even a water leak, if plumbing pressures are also monitored) is going to save potentially 10–60% of power, radically reduce the need to send trucks and linemen anywhere, and reduce some disaster recovery costs and risks significantly.
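    The monitoring behaviour described in the last point – automatic hibernation of idle devices plus line-loss detection against the meter reading – can be sketched as a tiny control loop. Device names, the idle threshold, the standby draw, and the 5% loss margin are all hypothetical values chosen for illustration:

```python
from dataclasses import dataclass

IDLE_THRESHOLD_MIN = 30.0  # assumed: hibernate after 30 idle minutes
STANDBY_WATTS = 0.5        # assumed standby draw after hibernation
LOSS_MARGIN = 1.05         # flag a line loss if supply exceeds draw by >5%

@dataclass
class Outlet:
    name: str
    watts: float
    idle_minutes: float = 0.0
    hibernating: bool = False

def manage(outlets, metered_supply_watts):
    """Hibernate long-idle devices, then compare total device draw with the
    meter reading to flag a possible short or line loss in the home."""
    for o in outlets:
        if not o.hibernating and o.idle_minutes >= IDLE_THRESHOLD_MIN:
            o.hibernating = True
            o.watts = STANDBY_WATTS
    drawn = sum(o.watts for o in outlets)
    line_loss_alert = metered_supply_watts > drawn * LOSS_MARGIN
    return drawn, line_loss_alert

# A TV idle for 45 minutes is hibernated; the 300 W meter reading then
# exceeds the 150.5 W accounted draw, raising a line-loss alert.
outlets = [Outlet("tv", 120.0, idle_minutes=45.0), Outlet("fridge", 150.0)]
print(manage(outlets, 300.0))  # (150.5, True)
```

    In a real deployment the meter, not software polling, would be the authoritative source for the supply reading, which is exactly the integration argument made in the comments below.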

    So, all things considered, the most likely long term solution that minimizes power use and the need for transport or waste disposal will be:

    1. fibre optic lines between the poles to every transformer, powerco using it to monitor its devices (powerline data cannot pass the transformer) – possibly two or four fibre optic lines available for different services to use into the home all using existing wiring, building owner in control of links back to the transit exchange for maximum competition

    2. G.9960 over the existing power service cable into the home, with a smart meter integrated into the same device (and maybe a grid intertie/inverter connected to it, to accommodate distributed generation)

    3. existing AC wiring used in the home to carry a gigabit to each outlet; gigabit 802.3at routers replacing wall warts for smaller DC devices; every device integrated into a single home control net capable of responding to peak curtailment, visible metering, any malfunctioning/short or line loss alert

    4. wireless used only as backup for any or all of the above, possibly initial powerco or rural broadband deployment could start with the wireless, until the above is cheap enough to deploy universally

    5. guaranteed access back to neighbourhood transit exchanges since without competitors able to cut power and other footprint-related costs, replacement of inefficient or dirty services with clean ones will be slower; also telework and telehealth solutions will be slow to deploy and these also cut footprint a lot

    6. Brutally punitive regulatory regime for monopoly service providers who so much as resist the change – notice in Australia and Switzerland, telcos did not resist at all and saw major opportunities in such a network-centric environment for themselves, but that was partly because the political will to expropriate and take over, or regulate to the point of leaving no profit possible, was signalled very clearly to them. An advantage of powercos being the tariffed service provider is they have no legacy data services to protect, are used to being very regulated, accept a truly universal service provision regime, and can be told to subsidize the rollout of networks by over-provisioning of fibre and sale of bandwidth and sale of both negawatts (saved watts) and of power, resulting in zero cost to the end consumer.

    Here’s what the future probably looks like

    Notice the CEO says Canopy didn’t work in that location, DSL was expensive and slow, and that he needed the SCADA information for his power utility anyway. No brainer.

  5. Craig Hubley Says:


    I’m concerned that we see a lot of advocacy of fibre not just “to” but “into” the home, and (as you’ve pointed out often) a failure by policy-makers to understand the implications of smart grid technologies once extended into the home. To address these misunderstandings, I suggest that the terminology “fibre to the home” and even the “tails” terminology ought to be ditched in favour of “fibre from transit exchange to transformer” (for the hardware) and “gigabit guaranteed to the meter” (for the software) for the following reasons:

    1. Fibre should normally not be deployed in the home – it’s fragile, requires some expensive upgrades, and could never (unlike the power-integrated copper networking of 802.3at/PoE and G.9960/EoP) reach every DC and AC device to maximize power conservation

    2. Anyone who needs more than a gigabit is going to also want a redundant network to fail over to when there’s a problem – thus a bona fide fibre-into-the-home application also requires hot copper backup. It would need it for power control anyway even without the data need.

    Addressing 1 and 2:

    To say “fibre from transit exchange to transformer” is specifying one of several possible ways to connect transit exchanges (or “neighbourhood closets”) to the pole next to the house. It emphasizes that the transformer must be bridged around to get into the home with powerline technology and that it is where responsibility for maintenance may/should also change hands. It’s deliberately agnostic on how one gets into the home once past the transformer – it could be one fibre continuing on past to a private home, could be G.9960 over cat3 or coax, or (best) G.9960/EoP on the existing AC power lines including a meter. Or all of these set up for failover.

    Advocacy of linking the transit exchange to the transformer with fibre needs to be clearly separated from advocacy of fully integrated smart meters and home area networks (or “home grids”). They’re two different issues, though both important to achieving the lowest long term footprint.

    The in-home issues are different:

    3. Between the transit exchange and the meter (the entry point to the home, ignoring the transformer), what users really need is service-level guarantees: low latency (say, under 10 ms) to the transit exchange, high bandwidth (one gigabit), and operation for some period of time even in a power outage (say, 72 hours). Users should be trained to demand these things, rather than “high speed” or “broadband” or “bandwidth enough to do VoIP” – there is no such thing; VoIP is a latency concern, not a bandwidth concern, and G.729 runs at 8 kbps (notice the k) with low latency.

    4. The central role of the power meter in the optimal configuration is hard to over-emphasize. You need one anyway, and this will be the authoritative device for any line-loss, short, or surge monitoring done by the powerco. If it’s not fully integrated with the in-home devices, then most of the benefits of integrated power and data disappear. Letting powercos put in non-IP, or even just non-G.9960-compliant, meters that can’t participate in the home network is just another barrier to a genuine “smart grid” that can ask every AC outlet to perform a curtailment to level peaks – it leaves more dumb devices wasting power.

    Addressing 3 and 4:

    To say “gigabit guaranteed to the meter” is emphasizing the capacity G.9960 and 802.3at currently carry as a baseline, emphasizing it’s a service guarantee not a type of hardware specified, and reinforcing the central role of the meter and responsibility of powercos to make it easy, not difficult, to smarten up homes.
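    The latency-versus-bandwidth point in item 3 is easy to quantify: a G.729 voice stream carries an 8 kbps payload, and even with per-packet header overhead it stays in the tens of kilobits, so a gigabit link is never the constraint. A back-of-envelope check using the standard 40-byte IP/UDP/RTP header stack and 20 ms framing:

```python
def voip_bandwidth_kbps(codec_kbps=8.0, frame_ms=20.0, header_bytes=40):
    """On-wire IP bandwidth of one voice stream: codec payload plus
    IP/UDP/RTP header overhead (40 bytes per packet, link layer excluded)."""
    packets_per_sec = 1000.0 / frame_ms
    overhead_kbps = header_bytes * 8 * packets_per_sec / 1000.0
    return codec_kbps + overhead_kbps

# G.729 at 20 ms framing: 8 kbps payload + 16 kbps headers = 24 kbps,
# a vanishing fraction of a gigabit -- latency, not bandwidth, binds.
print(voip_bandwidth_kbps())  # 24.0
```

    Even uncompressed G.711 at 64 kbps changes nothing; the service guarantee that matters for voice is the sub-10 ms latency figure, not the gigabit.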

    Notice that the meter and the transformer are separated by the power mast and the service cable (which hangs between the pole and the mast, if you have a pole). Ownership of these devices varies somewhat across regulatory regimes, but it’s probably important for powercos to cede all control of data over the service cable and into the mast to whoever provides that gigabit guarantee. The motives for the powerco to roll out a tariffed gigabit data service itself are to enforce high authentication standards, to avoid having to trust metering from third parties, and to control the interface to emergency response agencies. However, if a responsible third party such as a local emergency management agency or insurance co-operative wants to take over both metering and service guarantees, that should not only be possible but facilitated by government. Powercos should not be in the position, for instance, of deciding which medically vulnerable persons should continue to get power to their medical devices in outages, nor what other services should be considered essential.

    Resilient buildings and communities will have to make such choices. We’re best off thinking hard about them right now, not once we actually have smart devices out there and making life-critical decisions.

    I don’t have a catch phrase to describe the position optimal to advocate on this issue, but, we might think about “citizen control of life-critical service priority” rather than utility control. This should also replace the useless “net neutrality” slogan which seems to many people to justify giving P2P file sharing the same priority as VoIP calls for help from dying people or text messages from paramedics to the hospital. Rather than “neutrality” I suggest we need “sensitivity” to the real life implications of the traffic, rather than to the priorities of geeks or of the commercially-motivated utilities or govts seeking to facilitate their own friends.

    Karl Auerbach correctly points out that type of service traffic filtering was part of IP from day one, and always will be. It’s a question of who defines the types and whether they do so with reference to bona fide functionality and life support requirements, or with some other agenda.
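    The mechanism Auerbach refers to is the Type of Service byte, now the DSCP field, in the IP header. The code points below are the standard ones (EF for expedited voice, CS5 for signalling, CS1 for lower-effort bulk traffic); which traffic lands in which class is precisely the policy question raised here, so the mapping itself is an illustrative assumption, not a standard:

```python
import socket

# Standard Differentiated Services code points (per RFC 4594 guidance).
# The assignment of traffic kinds to classes is an illustrative policy
# choice -- deciding it is the public process the comment argues for.
DSCP = {
    "emergency_voice": 46,  # EF: expedited forwarding
    "signalling": 40,       # CS5: call signalling
    "bulk_p2p": 8,          # CS1: lower-effort / scavenger
    "best_effort": 0,       # default
}

def classify(kind: str) -> int:
    """DSCP value to stamp on packets of a given (policy-defined) kind."""
    return DSCP.get(kind, DSCP["best_effort"])

def mark_socket(sock: socket.socket, kind: str) -> None:
    """Set the socket's TOS byte; DSCP occupies its upper six bits."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, classify(kind) << 2)
```

    Any unknown traffic kind falls through to best effort, which is the safe default whoever ends up defining the class map.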

    A regular public outreach such as Oregon held once to decide which medical services had highest priority might put citizens in charge of the type of service map used to decide packet throttling, failover priority or power outage coverage.

    We’re just going to lose all these fights if we haven’t made operational distinctions like the above. It’s fine to have cute terminology (like “tails”) to describe the configurations, but ultimately any such metaphor misleads – we have at least not to mislead each other on what the eventual configuration is we need.

  6. Craig Hubley Says:

    Doing this right probably starts in the home with devices like this:


    Notice “optional manual or automatic standby functionality for efficient power consumption”. Eventually you’ll see 802.3at plugs to eliminate wall warts, UPS features and full software control in a multi-port device like this one:


    When people are used to AC and DC power and data all on one cable to each device, it’ll be a no-brainer to think of fridges or water heaters just doing what the TV already does (by then). And then demanding that the power grid be at least as smart.
