Remi Sussan, the editor of our French blog, has guest-edited the latest issue of our P2P Newsletter, the archive of which is available here.
Issue 160 is dedicated to the topic of multi-agent systems in their relationship to P2P.
Remi introduces the topic as follows:
Why are multi-agent systems important for understanding P2P? Because P2P is not only a change in our technological infrastructure, or even in our political and economic habits. It's a real paradigm change in the way we think about complexity and organization. P2P supposes that many equipotential agents are able to let new structures and surprising behaviors "emerge", hopefully (but not necessarily) positive ones. But the new logic involved here may be applied to Wikipedia as well as to the termites' collective intelligence. New logic means new tools. We cannot "think" a collective system simply by contemplating its various components (if we could, there would be no emergence, right?). To see these new potentialities appear, we have to build and let run such systems, even if they are "toy systems", far from the complexity we may find in real life. Multi-agent programs are such tools, and they let us enter this new world.
Remi then goes on to explain how important advances in multi-agent systems will be to the social sciences, because they will allow experimentation.
This will be done through the creation of artificial societies, an example of which is Sugarscape.
Remi: The more complex programs today deal with many aspects simultaneously. Remember, though, that agent-based modeling is not about creating "simulations" in the traditional meaning of the word: one doesn't try to include all the possible parameters in order to obtain a situation so similar to reality that it might be possible to predict real behavior from the behavior occurring in the simulation. It's more about abstracting basic principles, building the simplest model one can, and seeing whether, from this simple system, complex situations close to the ones we may meet in real life can emerge. As Axtell and Epstein say, there is here a complete epistemological revolution. One no longer asks "Can you explain it?" but "Can you grow it?". It is, as the authors say, "generative social science". "Artificial societies," they write in their book, "allow us to 'grow' social structures in silico, demonstrating that certain sets of microspecifications are sufficient to generate the macrophenomena of interest."
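To make the "grow it" idea concrete, here is a minimal toy model loosely inspired by Sugarscape. All parameters (grid size, vision and metabolism ranges, the one-unit regrowth rule) are illustrative assumptions of mine, not Epstein and Axtell's exact specification: agents greedily move toward the richest visible cell, harvest its sugar, burn some each step, and die when they run out, and from these micro-rules macro-level outcomes such as wealth dispersion emerge.

```python
import random

# A minimal sketch loosely inspired by Sugarscape (Epstein & Axtell).
# Parameters below are arbitrary illustrative choices, not the book's.
SIZE = 20       # the grid is SIZE x SIZE, wrapping at the edges
CAPACITY = 4    # maximum sugar a cell can hold

class Agent:
    def __init__(self):
        self.x = random.randrange(SIZE)
        self.y = random.randrange(SIZE)
        self.vision = random.randint(1, 4)      # how far the agent can see
        self.metabolism = random.randint(1, 3)  # sugar burned per step
        self.wealth = random.randint(5, 10)     # initial sugar holdings

def step(grid, agents):
    random.shuffle(agents)
    for a in agents:
        # Look along the four lattice directions, up to `vision` cells,
        # and move to the richest unoccupied cell seen (a greedy rule).
        occupied = {(b.x, b.y) for b in agents if b is not a}
        best = (a.x, a.y)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            for d in range(1, a.vision + 1):
                cell = ((a.x + dx * d) % SIZE, (a.y + dy * d) % SIZE)
                if cell not in occupied and grid[cell] > grid[best]:
                    best = cell
        a.x, a.y = best
        a.wealth += grid[best] - a.metabolism   # harvest, then metabolize
        grid[best] = 0
    # Sugar grows back one unit per cell per step, up to capacity.
    for cell in grid:
        grid[cell] = min(CAPACITY, grid[cell] + 1)
    # Agents whose sugar runs out die and are removed.
    agents[:] = [a for a in agents if a.wealth > 0]

random.seed(1)
grid = {(x, y): random.randint(0, CAPACITY)
        for x in range(SIZE) for y in range(SIZE)}
agents = [Agent() for _ in range(100)]
for _ in range(50):
    step(grid, agents)

if agents:
    wealths = sorted(a.wealth for a in agents)
    print(len(agents), "survivors; poorest:", wealths[0],
          "richest:", wealths[-1])
```

Nothing in the rules mentions inequality or population structure, yet running the model typically produces a skewed wealth distribution — exactly the kind of macrophenomenon "grown" from microspecifications that generative social science is after.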
My own commentary:
I find this topic very interesting, and Remi's special issue is of great value for gaining some grounding in this emerging field. However, I do have one remark about the general approach. Network theory and multi-agent research are potentially 'reductionist', in that they obviate the inner intentions of the agents. So, for example, putting both beehives and human P2P systems in the same category is in my opinion a mistake. Individuals in beehives follow the stigmergic messages left to them by their predecessors and colleagues, but have no overall view of the purpose of the beehive, nor of the fullness of the activities taking place. In contrast, human P2P systems, such as in peer production, are based on holoptism, the capacity to know both the vertical aim and the horizontal activities of the community; furthermore, the agents in question have subjective intentions.
Reductionist methodologies can of course be useful, as long as we keep in mind their limitations and how they in fact reduce the complexity of reality through unrealistic assumptions, such as that agents have no intentions or inner life, or that these do not matter.