Today I read this amazing article:
It’s worth mentioning that there is no science without theory. The human mind is wired for creating theories; that is why there is a neural network built into our brain/mind system. It is wired to consider the un-manifested.
That being said, what Kevin Kelly describes is a lot like Stephen Wolfram’s “A New Kind of Science”. Not exactly the same, but closely related.
In both cases: a *search*-based science. Not that science didn’t already involve search. But now you can generate billions of possible combinations of simulations and then search through them. So it is simulation/search based.
You can compress thousands, or even millions, of years of human trial and error into this type of research. Plus, you walk away with billions of potential variations on your design, ready ways and building blocks to adapt and change your design, and data about how different variations behave under different conditions. You have a design DNA.
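To make the simulation/search idea concrete, here is a minimal sketch in Python. All the names (`simulate`, `random_design`, `search`) and the toy scoring function are my own illustrative assumptions, not any particular research system: generate many candidate designs, score each one with a simulation, and keep the best.

```python
import random

def simulate(design):
    # Stand-in for a real simulation: score how far the design's
    # values are from an arbitrary target. Lower is better. A real
    # system would run physics, economics, ecology models, etc.
    target = 0.5
    return sum((x - target) ** 2 for x in design)

def random_design(length=8):
    # A "design" here is just a list of numbers; real encodings
    # would describe geometry, materials, parameters, and so on.
    return [random.random() for _ in range(length)]

def search(n_candidates=10_000):
    # Simulate thousands of variations, then search for the best one.
    candidates = (random_design() for _ in range(n_candidates))
    return min(candidates, key=simulate)

best = search()
```

The point of the sketch is the shape of the loop, not the scoring: the same generate/simulate/select pattern scales to billions of candidates once you have the computing power, and the surviving pool of scored variations is the "design DNA" described above.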
Right now, I use databases like Google as a kind of bionic brain extension. So, I say bring on the new databases, the computer clusters, the algorithms. As long as we all have equal access to them.
The question for me is: what comes after search? When knowledge bases reach the exabyte level, or higher, and there are algorithms searching/crunching through them and finding patterns, relationships, testing all of the possible combinations, etc., how do we handle and process what the machines are outputting? How do we avoid becoming a cybernetic society?
An even more fundamental question is: what can you create with all of this data, and/or with the systems that are used to collect and analyze it? How can it be used as a medium for expression? What are the new ways to “see” and “feel” the data? How can the data systems be grounded as an ecology that is self-balancing, so that they don’t overrun our existence like a form of digital toxic pollution and cause ill effects on living systems, such as people being ruled by algorithmic output that is skewed too far in one direction?
Another consideration: when entities like Google are the only ones that can wield and harness the resources needed to hold and process petabyte databases, we still have the potential for a power imbalance, where those who can hold and process the most data are the “wealthiest” in terms of capability, adaptability, and access to knowledge.
I wonder how many people realize that existing technology possesses the building blocks to allow p2p networks to exceed the capability of any one entity like Google, etc?
I can see the possibility of something simple and elegant on a basic scale, that can scale up easily, that provides a social utility for anyone who accesses it, using the combined resources of millions, or possibly even billions, of people for storage and processing; that cannot be controlled for any specific exclusive purpose by any one person; and that could be governed democratically, with people opting out of participation should they not like the direction things are going in. We could have this today, and some people have already done it on a limited basis with things like SETI@Home.

What we need is more evolution in this area: more ways to use swarm super-computers, ways that are accessible to many people. A way to turn swarm super-computers into an open social utility. This is not out of our reach right now. We don’t have to wait until networks are totally decentralized to build this into our social systems. We all have computers and operating systems, free CPU cycles, internet connections with extra bandwidth, and likely ideas about what we could do with those resources. There are already clients like boinc.berkeley.edu/, and systems like ceph.newdream.net/ or even www.bittorrent.com/, as building blocks upon which to improve. www.bittorrent.com/ could even work if enough people participated.
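The core mechanic of a swarm super-computer is simple enough to sketch: split one big job into small work units and farm them out to many machines. In this toy Python version (names like `work_unit` and `run_swarm` are mine, and the “volunteer nodes” are just local threads), a real system such as BOINC would add internet-wide scheduling, retries, and result validation:

```python
from concurrent.futures import ThreadPoolExecutor

def work_unit(chunk):
    # Stand-in for real work: sum the squares in one chunk of data.
    # In practice this would be a simulation step, a render frame,
    # a datamining pass, etc.
    return sum(x * x for x in chunk)

def run_swarm(data, chunk_size=1000, workers=8):
    # Split the job into independent work units...
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # ...hand them to a pool of "volunteer nodes" (here, threads),
    # then combine the partial results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(work_unit, chunks))

total = run_swarm(range(10_000))
```

Because the work units are independent, adding more volunteers just means cutting more chunks: the same pattern that runs on eight threads here runs on millions of donated PCs in a BOINC-style network.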
The point is, a p2p social computing/data utility could exist today even with just BOINC and bittorrent. I am now in discussion with communities like socialsynergyweb.org/oardc/startpage about how they could apply evolutionary computing, simulation, datamining, and other modelling and search techniques to local food systems. (See socialsynergyweb.org/oardc/local-food-systems-computer-modeling-g…)
A BOINC/bittorrent system could be used with applications like www.urbansim.org/, code.google.com/p/optimaes/, and countless other open source simulation systems, not to mention datamining, GIS analysis, etc. This could give local communities access to powerful research and development facilities. It could also be used to render, and to crunch numbers on design, FEA (finite element analysis), and so on.
The question is, why isn’t this already happening? Probably primarily because we get the minimum of what we need from free/ad-based systems like Google. But we could have a lot more, even right now. There is a huge amount of inherent wealth and untapped commons available right now. Another reason could be that people are not seeing some kind of reciprocation of value from their participation in these systems. It is true that some people are seeing rewards in terms of points for teams, etc. But what if those “points” could be some kind of credit system? Or, even more, what if people could see tangible results in better crop yields, access to new technologies, or other improvements in their lives? What if you could, in part, purchase access to new or developing technology in the future by giving access to your free computer cycles now? BOINC makes this possible *right now*. BOINC has a built-in credit system, created to deal with people cheating the system to get more credits in the early days of the SETI@home project, and it could be improved if needed through further development.
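The anti-cheating idea behind a volunteer-computing credit system can be sketched in a few lines. This is a toy illustration of the general approach, not BOINC’s actual implementation: issue the same work unit to several volunteers, accept a result only when enough copies agree, and grant credit only for results matching the agreed answer, so returning fake results without doing the work earns nothing.

```python
from collections import Counter, defaultdict

def validate_and_credit(results, credits, quorum=2):
    """results: {host_id: result_value} for copies of one work unit.

    Returns the agreed ("canonical") result if a quorum of hosts
    match, crediting only the hosts that returned it; returns None
    (work unit must be reissued) if no quorum is reached.
    """
    counts = Counter(results.values())
    canonical, n = counts.most_common(1)[0]
    if n >= quorum:
        for host, value in results.items():
            if value == canonical:
                credits[host] += 1  # one credit per validated result
        return canonical
    return None

credits = defaultdict(int)
# Three hosts return results for the same work unit; one returns junk.
outcome = validate_and_credit({"alice": 42, "bob": 42, "mallory": 7}, credits)
```

Here `outcome` is 42, alice and bob each earn a credit, and mallory earns none. Once credits are only granted for validated work, they become a trustworthy unit of account that could back the kinds of reciprocation suggested above.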
This could be one way to support the development of open design/open license projects and products: a distributed infrastructure for datamining, simulation, modelling, and even rendering of open license entertainment projects like extinctionlevelevent.com/ through BOINC-based projects like burp.boinc.dk/ (BURP).
Please let us know what you think about this. If you have a project that is already engaged in research and development in some form, could you benefit from access to huge amounts of processing power? It is my theory that access to this resource could also help some open source or open content projects secure funding, because it could drive down the up-front cost of research and development for technologies, and could offer a platform for testing theories of many types, from modelling technology to modelling human economic systems.