Ideas on a new research infrastructure for emerging collaborative networks

Today I read this amazing article:

It’s worth mentioning that there is no science without theory. The human mind is wired for creating theories; that is why there is a neural network built into our brain/mind system. It is wired to consider the un-manifested.

That being said, what Kevin Kelly describes is a lot like Stephen Wolfram’s “A New Kind of Science”. Not exactly the same, but closely related.

In both cases: a *search* based science. Not that science didn’t already have “search”. But now you can start to create billions of possible combinations of simulations and then search through them. So it is simulation/search based.

You can compress thousands, or even millions, of years of human trial and error into this type of research. Plus, you walk away with billions of potential variations on your design, ready ways and building blocks to adapt and change your design, and data about how different variations work under different conditions. You have a design DNA.
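To make the simulation/search idea concrete, here is a toy sketch in Python. The beam parameters and the scoring function are entirely invented for illustration; a real project would plug in an actual simulation in place of `simulate`.

```python
# Toy sketch of simulation/search-based design: enumerate parameter
# combinations, simulate each, and keep the best performers.
# The parameters and scoring function are invented for illustration.
from itertools import product

def simulate(span, thickness, material):
    """Stand-in for a real simulation: score a hypothetical beam design."""
    strength = {"steel": 3.0, "aluminum": 2.0, "bamboo": 1.5}[material]
    return strength * thickness / span  # higher is "better"

# Generate every combination -- the "design DNA" of the search space.
designs = list(product([2, 4, 8],          # span (m)
                       [0.1, 0.2, 0.4],    # thickness (m)
                       ["steel", "aluminum", "bamboo"]))

# Search: rank all variants by simulated performance.
ranked = sorted(designs, key=lambda d: simulate(*d), reverse=True)
best = ranked[0]
print(best)  # the strongest variant under this toy model
```

Swap in more parameters and the combination count explodes combinatorially, which is exactly why this style of science needs clusters and search algorithms rather than hand inspection.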

Right now, I use databases like Google as a kind of bionic brain extension. So, I say bring on the new databases, the computer clusters, the algorithms. As long as we all have equal access to them.

The question for me is: what comes after search? When knowledge bases reach the exabyte level, or higher, and there are algorithms searching/crunching through them and finding patterns, relationships, testing all of the possible combinations, etc., how do we handle and process what the machines are outputting? How do we avoid becoming a cybernetic society?

An even more fundamental question is: what can you create with all of this data, and/or with the systems that are used to collect and analyze it? How can it be used as a medium for expression? What are the new ways to “see” and “feel” the data? How can the data systems be grounded as a self-balancing ecology, so that they don’t overrun our existence like a form of digital toxic pollution and cause ill effects on living systems, like people being ruled by algorithmic output that skews too much in one direction?

Another consideration: when entities like Google are the only ones that can wield and harness the resources needed to hold and process petabyte databases, we still have the potential for a power imbalance, where those who can hold and process the most data are the “wealthiest” in terms of capability, adaptability, and access to knowledge.

I wonder how many people realize that existing technology possesses the building blocks to allow p2p networks to exceed the capability of any one entity like Google, etc?

I can see the possibility of something simple and elegant on a basic scale, that can scale up easily, that provides a social utility for anyone who accesses it, using the combined resources of millions of people, or possibly even more, for storage and processing; that cannot be controlled for any specific exclusive purpose by any one person; and that could be controlled democratically, by people opting out of participation should they not like the direction things are going in.

We could have this today, and some people have already done it on a limited basis with things like SETI@home. What we need is more evolution in this area: more ways to use swarm supercomputers, ways that are accessible to many people. A way to turn swarm supercomputers into an open social utility. This is not out of our reach right now. We don’t have to wait until networks are totally decentralized to build this into our social systems. We all have computers and operating systems, free CPU cycles, internet connections with extra bandwidth, and likely ideas about what we could do with those resources. There are already clients and systems to use as building blocks upon which to improve, and such a system could work if enough people participated.

The point is, a p2p social computing/data utility could exist today even with just BOINC and BitTorrent. I am now in discussion with communities about how they could apply evolutionary computing, simulation, datamining, and other modelling and search techniques to local food systems. (see…)

A BOINC/BitTorrent system could be used with countless open source simulation applications, not to mention datamining, GIS analysis, etc. This could give local communities access to powerful research and development facilities. It could also be used to render and crunch numbers on design, FEA (finite element analysis), etc.

The question is, why isn’t this already happening? Probably primarily because we get the minimum of what we need from free/ad-based systems like Google. But we could have a lot more, even right now. There is a huge amount of inherent wealth and untapped commons available right now. Another reason could be that people are not seeing some kind of reciprocation of value from their participation in these systems. It is true that some people are seeing rewards in terms of points for teams, etc. But what if those “points” could be some kind of credit system? Or, even more, what if people could see tangible results in better crop yields, access to new technologies, or other improvements in their lives? What if you could in part purchase access to new/developing technology in the future by giving access to your free computer cycles now? BOINC makes this possible *right now*. BOINC has a built-in credit system, which could be improved if needed through further development; it was created to deal with the problem of people cheating the system to get more credits in the early days of the SETI@home project.
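A toy version of the credit idea might look like the sketch below. BOINC’s real credit system is more elaborate (it grants credit only for validated work), so treat this purely as an illustration of the “contribute cycles now, redeem value later” notion; the class, rate, and costs are all invented.

```python
# Illustrative toy, not BOINC's actual credit system: a ledger that
# grants credits for contributed CPU-hours and lets participants
# redeem them later (e.g. toward access to a developed technology).

class CreditLedger:
    def __init__(self):
        self.balances = {}

    def grant(self, user, cpu_hours, rate=10):
        """Credit a participant for donated compute time."""
        self.balances[user] = self.balances.get(user, 0) + cpu_hours * rate

    def redeem(self, user, cost):
        """Spend credits; returns True only if the balance covers the cost."""
        if self.balances.get(user, 0) >= cost:
            self.balances[user] -= cost
            return True
        return False

ledger = CreditLedger()
ledger.grant("alice", cpu_hours=12)   # 120 credits earned
print(ledger.redeem("alice", 100))    # True: balance covers it
print(ledger.balances["alice"])       # 20 credits remain
```

The hard part in practice is not the bookkeeping but preventing cheating, which is exactly why BOINC ties credit to redundantly validated results rather than self-reported hours.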

This could be one way to support the development of open design/open license projects and products: a distributed infrastructure for datamining, simulation, modelling, even rendering of open license entertainment projects through BOINC-based projects like BURP (the Big and Ugly Rendering Project).

Please let us know what you think about this. If you have a project that is already engaged in research and development in some form, could you benefit from access to huge amounts of processing power? It is my theory that access to this resource could also help some open source or open content projects secure funding, because it could drive down the up-front cost of research and development for technologies, and could offer a platform for testing theories of many types, from modelling technology to modelling human economic systems.

7 Comments

  1. Tom Loeber

    “something simple and elegant on a basic scale, that can scale up easily, that provides a social utility for anyone who accesses it, using the combined resources of millions of people, or possibly even more, for storage and processing; that cannot be controlled for any specific exclusive purpose by any one person; and that could be controlled democratically, by people opting out of participation should they not like the direction things are going in.”

    I have a theory that appears to hold these potentials, but not just for millions: possibly for billions, all of humanity, and maybe even trillions if we are to become a long-lived species. My theory has brought me to understand that scaling anonymity would be valuable also, i.e. the more cherished a person becomes in the “system” as offering useful information on how to coordinate and share resources, the more anonymity they are given. How could anything approach being utilitarian for billions of people, many with no computer access, or correctly judge a person’s worth to offer scaled anonymity in a functional manner? Sounds incredible, I know.

    Why has such a system not been enacted? It appears the second-order cybernetic nature of society, where no one can have an unbiased perspective, has tied us to accepting epistemic relativism as unquestionable. I see, for example, that mention is made that a desirable system could aid in securing funding for open source projects. I suspect that if we enact a system that truly enables us to communicate and coordinate our lives in a highly efficient and sustainable manner, funding will no longer be a consideration, and happily so. To realize that these token systems of communication that we call “funding” are inherently dysfunctional, and to realize viable alternatives, is so incredible and alien to our perspective that we may not find such a system in time, in time to prevent accidental collapse of our one life support system due to our increasing abilities having too little intelligent governance. I also see an apparent distrust of “cybernetics” in your statement “How do we avoid becoming a cybernetic society?” You can see a rather exhaustive definition of this term in the American Society of Cybernetics glossary online. I might suggest the goal is not to avoid becoming a cybernetic society, as that is the nature of the beast right now and always will be, if we continue to be. I offer you a different question that I believe is more accurate: how do we make a viable cybernetic system before it is too late?

  2. Sam Rose

    “I offer you a different question that I believe is more accurate. How do we make a viable cybernetic system before it is too late?”

    Tom, great, great comment, we are thinking alike on much of this I think. I concur with what I quoted from you above. We are already a cybernetic society. I guess one could argue that all life is cybernetic in some ways, and has been even before humans emerged.

    I don’t distrust cybernetics. In fact, I use thermostats every day! 🙂 But I think what I was trying to ask was more along the lines of what you said, something like “how can we harness and use cybernetic systems without becoming a mindless society?”

  3. Tom Loeber

    Okay. It is hard to grasp the situation, hard and frightening. We already have a “mindless society,” and the potential for great accidents exists and is in the process of being realized, like a great lumbering locomotive heading for a destroyed bridge over a deep and mortal chasm. We do not have a plurality of societies right now, only one, and it is fragmented upon imaginary borders inherited from a legacy of having no alternative than to accept and play along with the principles of anarchy, which cannot sustain. I realize it seems I am harping upon semantics, but the risks are great and the solution only possible through clear understanding. I mentioned that society is a second-order cybernetic system, which is reason to get as real, concise, and universal in our terminology and exploration as possible. Epistemic relativism is basically the opposite of science and yet plays a large part in the formation of current policies; probably it is the reason why so many appear to find thinking clearly such a difficulty. The anomie grows as the incoherency exacerbates under the stresses of the information explosion. As a lead to clarity I drop a couple more little-used terms. We need to approach maximizing ergodicity if we want a social system that works and is not inherently self-destructive. This equates to maximizing Shannon’s entropy as the general opposite of the classical entropy of thermodynamics. Ephemeralization, a term Buckminster Fuller coined, will bring these things into more common perception amongst those who are looking, but many are not looking and won’t, perhaps a majority. They will only come around after the tool is created to coordinate human endeavors in a sustainable and pragmatic way and they see that it offers reward and value beyond any other option.

    Appears to me that Python holds the most promise for making this collaborative P2P communication software alluded to and that could facilitate maximizing egalitarianism (a consequence and requirement of seeking ergodicity) by seeking to adhere to the demographics as revealed in my general mathematical theory. Besides being designed in such a way to promote open source development, Python also has rich inherent data storage and management options which would minimize the risk and complications of utilizing external databases, me thinks. This also facilitates the software being entirely contained on client computers rather than any third party machine that might be hacked. The idea is scaled so that the amount of data needing to be stored, retrieved and managed through external web linkages should be well within feasible limits at every phase of the system. This also helps minimize risks as the forces that are opposed to self-determination would only be able to break into the encrypted stream sporadically and not get the totality of data necessary to thwart strategies being developed within the system, I suspect.

    Oh well. Just blabbing away here a bit. I’m approaching some utility with python and if there are any others out there who have the skills and/or want to help, you can get in touch with me.

    Thank you for bringing up an all important subject, Sam.

  4. Sam Rose

    It’s funny that you mention Python, because it is a language that I am deep into learning and employing right now. I think it can line up well with existing C and C++ projects out there, too.

    Let me know what you have in mind for development ideas and projects. I am very interested in this. There are a few simulation platforms written in Python that I am already exploring. I think BOINC is a great platform, at least for prototyping, and is highly serviceable the way it is right now for massive volunteer parallel processing.

  5. Sam Rose

    Tom, actually I should clarify that I am thinking about BOINC for short-term parallel processing of datamining and simulation/processing of data, something that is usable right away.

    But I am interested in the idea of Python as a basis for long-term development of distributed computing like you mention above.

  6. Eimhin

    Sam, do you know the crew working on Netention? For a connect, email me at involuteconduit @ gmail. com see you in Berlin soon 🙂
