AJ Fisher defines the sensor commons as “A future state whereby we have data available to us, in real time, from a multitude of sensors that are relatively similar in design and method of data acquisition and that data is freely available.”
“As a population we are deciding that governments and civic planners no longer have the ability to provide meaningful information at a local level. … the true Sensor Commons, as I see it, we need to have deep engagement with the population as a whole, regardless of technical ability or knowledge. … My definition is not just about “lots of data from lots of sensors” – there is a subtlety to it implied by the “relatively similar in design and method of data acquisition” statement. In order to be useful, we need to ensure we can compare data relatively faithfully across multiple sensors.”
Here’s how he characterizes the five requirements of the Sensor Commons.
“I believe there are five critical requirements for getting a Sensor Commons project off the ground and making it a viable endeavour. A Sensor Commons project must:
Be trusted
Be well dispersed
Be highly visible
Be entirely open
Be upgradeable
Each of these will be dealt with in the sections below. A project that has these characteristics will generate a momentum that will exist beyond a core group of technical evangelists and will find a home within the mainstream community.
Many sensor commons projects shine a light on our human behaviour. Ostensibly the goals are noble – to try to understand our environs so that we can make them better and change our behaviour – yet we must stay on the side of data and fact and not move towards blame; others can carry that torch. For example, the project that seeks to close down the local smoke stack due to its impact on air quality will have a hard time fostering trust because of its agenda. We all want to have clean air, but my kids go to school with the kids whose parents work in said smoke stack – how will I look at them when they lose their jobs?
In the section on dispersal I’ll talk about using existing community assets and infrastructure, and trust plays a part in this. If you are piggy-backing the local library’s WiFi so you can get a network connection down in your stream bed, it is imperative you don’t abuse their network by sending or requesting too much data – or harvesting anything you shouldn’t.
Trust is provided by having stated project objectives and clear policies around what data you’re going to capture, where it will go, and how it will be used and made available. Having someone responsible for dealing with these issues – the “go to person” for any questions that arise – will provide credibility and will probably open up some opportunities for partnership as well.
Note how trust requires no technology, merely an understanding of it. This is a perfect role to engage non-technical team members in, especially those who can articulate why the project is important to the community.
As an example, the Don’t Flush Me team have done an excellent job of this: they have built trust with the authorities who are granting them access to the sewerage system – there’s no blame being cast, they are simply trying another way to help with a known community problem. Similarly, they are building trust with the community by creating a valuable resource for people who care about their local environment.
One of the biggest issues facing Sensor Commons projects is that of dispersion. Projects that seem like such a good idea fall at the hurdle of widespread adoption. Understanding how you can disperse your sensors properly means that like a dandelion seed on the wind you’ll find plenty of places to put down and ensure success.
There are many factors that contribute to this which are discussed below:
Be highly visible
There are two aspects of visibility that should be considered; first the visibility of the device itself and second, the visibility of the data created.
With respect to the sensor itself, if it is in a public place then you should endeavour to make it visible and also provide information about what it is there for. Occasionally you’re going to get vandals trashing your stuff – there’s not much you can do about it. However, if you take the opportunity to explain what it is and what the project is about, then it becomes harder for someone to vandalise a community project than something put there “by the man”.
Once you have the data, look for ways not just to make it public but also to make it visible. The Neighbourhood Scoreboards project by Martin Tomitsch and team from the Sydney University Design Lab showed how visibility of data at a community level could affect behaviour.
Imagine a local council with a display on the side of its building showing the borough’s overall air quality score in real time. These sorts of Civic Displays could become quite commonplace as different projects feed data into them. There’s probably an opportunity for civic art to incorporate data from these types of projects and display it in interesting ways to the local population.
By creating visibility of the data we can raise awareness or affect behaviour which is often the goal for many of these projects.
Data should be visible online as well – not simply by making the data sets available but also by highlighting some meaning. What I found most interesting about the self-assembly of the radiation data on Pachube in the wake of the Fukushima incident was that it wasn’t “real” until it was on a Google map. Prior to that point there were dozens of data streams but it was too hard to interpret the data. Making your data visible in this instance means making it approachable, so people can gain understanding from it.
Be entirely open
Openness in this day and age is almost expected but it’s worth pointing out that the projects that open source all of their code, schematics and data will do better than those that don’t.
The other part of openness, however, is about the wider project context. This type of openness is about the transparency of the project objectives and the findings: documenting any assumptions about your data, such as its likely error rate and whether you’re doing any manipulation of the raw data to derive a metric.
Government data sets and sensor networks are steadfastly closed, but there is a lot of weight given to them because they have an implied lack of error and high precision. Ostensibly this is because they are supposed to be “well engineered”, rigorously tested and highly calibrated devices – why else would one sensor cost $50,000?
With the radiation data on Pachube as an example, much was made in April of how reliable it was, given that it wasn’t calibrated, the sensors were probably sitting on people’s windows, and they were only consumer grade. Precision was never the intent for those deploying the sensors, however, so the argument was moot – ultimately the point was to assess trend. If my sensor has an accuracy level of ±20% then it’s always going to be out – probably by a similar amount. However, if it goes up consistently over time, even though it’s out by 20% the trend is still going up – and I wouldn’t have known about that unless I was using a more deployable sensor, because the government one is probably 200km away.
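The bias-versus-trend argument can be illustrated with a small numerical sketch (the readings below are hypothetical, not real sensor data): a constant multiplicative bias shifts every value, but it cannot hide the direction of the trend.

```python
# Hypothetical illustration: a sensor with a constant +20% bias still
# reveals the same upward trend as the true signal.

def slope(ys):
    """Least-squares slope of ys against their index positions (0, 1, 2, ...)."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

true_readings = [100, 110, 120, 130, 140]           # actual level, rising steadily
biased_readings = [r * 1.2 for r in true_readings]  # same signal, +20% bias

# The absolute values disagree, but the biased slope is just 1.2x the
# true slope: the upward trend survives the bias intact.
print(slope(true_readings))    # 10.0
print(slope(biased_readings))  # ~12.0
```

The same holds for an additive offset, which leaves the slope completely unchanged – which is why uncalibrated consumer sensors can still answer “is it getting worse?” even when they cannot answer “exactly how bad is it?”.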
Having a culture of openness and transparency makes up for the error and lack of precision in the data. “Showing your workings” opens up your data and method for critique, and from there allows room for improvement. It also gives anyone who wants to use the data a way to agree or disagree with the assumptions underpinning the data set and make an informed decision.
The final requirement is to be upgradeable. One of the benefits of Moore’s Law is that not only do we get more computing power for the same price over time, but we also get the same computing power for fewer dollars over time. Consider the humble Arduino – for about $40, something more powerful than a multi-thousand-dollar 286 PC from the late 80s.
Being able to upgrade your sensor network allows you to take advantage of all the developments happening in this space. Adequately modularising your components so they can be switched out (eg switching to WiFi from cabled Ethernet), as well as abstracting your code (not doing heavy processing on your sensor, but offloading it to the acquirer and processing it there), makes upgrading easy over time.” (http://ajfisher.me/2011/12/20/towards-a-sensor-commons)
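The modularisation advice above can be sketched in a few lines. This is a minimal illustration of the idea, not Fisher’s implementation, and all the class and function names are my own: the transport sits behind a small interface so a node can move from Ethernet to WiFi without touching the sampling code, and the node ships raw readings so the heavy processing can be upgraded on the acquirer’s side independently.

```python
# Sketch of the modularisation idea (hypothetical names, not from the post):
# the network link is an interchangeable component, and the node forwards
# raw readings so processing lives with the acquirer, not on the device.

class Transport:
    """Anything that can ship a raw reading off the device."""
    def send(self, payload: dict) -> None:
        raise NotImplementedError

class EthernetTransport(Transport):
    def send(self, payload):
        print(f"ethernet -> {payload}")

class WiFiTransport(Transport):
    def send(self, payload):
        print(f"wifi -> {payload}")

class SensorNode:
    """Samples and forwards raw values; no calibration or smoothing here."""
    def __init__(self, read_sensor, transport: Transport):
        self.read_sensor = read_sensor
        self.transport = transport

    def sample(self):
        raw = self.read_sensor()          # raw value straight from the hardware
        self.transport.send({"raw": raw}) # acquirer derives metrics downstream
        return raw

# Upgrading the link is a one-line change: swap the transport object.
node = SensorNode(read_sensor=lambda: 42, transport=EthernetTransport())
node.sample()
node.transport = WiFiTransport()
node.sample()
```

Because the sampling code never mentions a specific network technology, swapping hardware generations only touches the transport class – which is exactly the property that lets a deployed network ride Moore’s Law.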
AJ Fisher also discusses the differences between Top-down Smart Cities vs. P2P Sensor Commons
“Smart Cities are all well and good, and IBM, Cisco and others are more than welcome to their ideas and products to make our urban infrastructure more clever – we need it more than ever now. For me this vision is narrow, in that the top-down view made from a very tall tower provides an architecture that doesn’t seem to solve problems at a local level. Humans, by our nature, are highly localised beings – whilst we may have to travel long distances to work, we only travel a few kilometres from where we live and work once we’re there. As such we develop profound connections to our local environments – this is why we see “friends of” groups spring up for local parks, creeks or other reserves, and why communities lobby so heavily for protection of these spaces. This type of technology enables us to interact with our environments differently.
If you think this is all naive data-driven techno-utopia think again.
Governments are starting to look at ways they can push their data into platforms like Pachube to make it accessible. Germany is in the process of doing this with its radiation data.
Individuals and project groups are already using tools like Pachube, Thingspeak and Open Sense to aggregate data from their local environment (eg: CO2 levels).
It’s becoming almost trivially easy to create the sensors and the web tools are there to hold the data and start the process of understanding it. The chart below shows the temperature in my back yard in real time for the last week.
The access we are getting to cheap, reliable, malleable technologies such as Arduino and XBee, coupled with ubiquitous networks, whether WiFi or cellular, is creating an opportunity for us to understand our local environments better. Gone are the days where we needed to petition councillors to do some water testing in our creeks and waterways or measure the quality of the air that we are breathing.
The deployment of these community-oriented technologies will create the Sensor Commons, providing us with data that becomes available and accessible to anyone with an interest. Policy creation and stewardship will pass back to the local communities – as it should – who will have the data to back up their decisions and create strong actions as a result.”
Here’s an example of a budding sensor commons project:
“Ed Borden from Pachube … on the creation of a Community Driven Air Quality Sensor Network. His passionate call to arms highlights that we have no realtime system for measuring air quality. Further, what data does exist and has been released by governments is transient due to the sampling method (ie the sensor is moved from location to location over time). Summarising a workshop on the topic, he discusses how a community-oriented sensor network can be created, funded and deployed.”