Multi-agent systems, artificial societies, and P2P

Remi Sussan, the editor of our French blog, has guest-edited the latest issue of our P2P Newsletter, the archive of which is available here.

Issue 160 is dedicated to the topic of multi-agent systems in their relationship to P2P.

Remi introduces the topic as follows:

Why are multi-agent systems important for understanding P2P? Because P2P is not only a change in our technological infrastructure, or even in our political and economic habits. It’s a real paradigm change in the way we think about complexity and organization. P2P supposes that many equipotential agents are able to let new structures and surprising behaviors “emerge”, hopefully (but not necessarily) positive ones. But the new logic involved here may be applied to Wikipedia as well as to the termites’ collective intelligence. New logic means new tools. We cannot “think” a collective system simply by contemplating its various components (if we could, there would be no emergence, right?). To see these new potentialities appear, we have to build and run such systems, even if they are “toy systems”, far from the complexity we may find in real life. Multi-agent programs are such tools, and they let us enter this new world.

Remi then goes on to explain how important advances in multi-agent systems will be to the social sciences, because they will allow experimentation.

This will be done through the creation of artificial societies, an example of which is Sugarscape.

Remi: The more complex programs today deal with many aspects simultaneously. Remember, though, that agent-based modeling is not about creating “simulations” in the traditional meaning of the word: one does not try to include all the possible parameters in order to obtain a situation so similar to reality that the real behavior might be predicted from the one occurring in the simulation. It’s more about abstracting basic principles, building the simplest model one can, and seeing whether, from this simple system, complex situations close to the ones we meet in real life may emerge. As Axtell and Epstein say, this is a complete epistemological revolution. One no longer asks “can you explain it?” but “can you grow it?”. It is, as the authors say, “generative social science”. “Artificial societies”, they write in their book, “allow us to ‘grow’ social structures in silico, demonstrating that certain sets of microspecifications are sufficient to generate the macrophenomena of interest.”
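A Sugarscape-style model can be sketched in a few dozen lines. The toy Python version below (the grid size, rules, and parameters are my own illustrative choices, not Epstein and Axtell’s exact specification) has agents harvest a regrowing resource, move to the richest neighbouring cell, and drop out when their holdings run out:

```python
import random

random.seed(42)

SIZE = 20          # grid is SIZE x SIZE, wrapping at the edges
N_AGENTS = 50
STEPS = 100

# Each cell regrows sugar toward a fixed capacity.
capacity = [[random.randint(1, 4) for _ in range(SIZE)] for _ in range(SIZE)]
sugar = [row[:] for row in capacity]

# An agent is [x, y, wealth, metabolism].
agents = [[random.randrange(SIZE), random.randrange(SIZE),
           random.randint(5, 10), random.randint(1, 3)]
          for _ in range(N_AGENTS)]

def step():
    global agents
    for a in agents:
        x, y, wealth, met = a
        # Rule 1: look at the current cell and its four neighbours,
        # move to the richest one.
        options = [(x, y)] + [((x + dx) % SIZE, (y + dy) % SIZE)
                              for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        nx, ny = max(options, key=lambda p: sugar[p[0]][p[1]])
        # Rule 2: harvest everything there, then pay the metabolic cost.
        wealth += sugar[nx][ny] - met
        sugar[nx][ny] = 0
        a[0], a[1], a[2] = nx, ny, wealth
    # Rule 3: agents that run out of sugar die.
    agents = [a for a in agents if a[2] > 0]
    # Sugar regrows by one unit per step, up to capacity.
    for i in range(SIZE):
        for j in range(SIZE):
            sugar[i][j] = min(sugar[i][j] + 1, capacity[i][j])

for _ in range(STEPS):
    step()

wealths = sorted(a[2] for a in agents)
print(len(agents), "survivors; wealth ranges from", wealths[0], "to", wealths[-1])
```

Even this stripped-down version illustrates the “grow it” logic: the macro-level outcome, i.e. how many agents survive and how unevenly wealth ends up distributed, is stated nowhere in the rules; it can only be observed by running them.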

My own commentary:

I find this topic very interesting, and Remi’s special issue is of great value for getting some grounding in this emerging field. However, I do have one remark about the general approach. Network theory and multi-agent research are potentially ‘reductionist’, in that they obviate the inner intentions of the agents. So, for example, putting both beehives and human P2P systems in the same category is in my opinion a mistake. Individuals in beehives follow the stigmergic messages left to them by their predecessors and colleagues, but have no overall view of the purpose of the beehive, nor of the fullness of the activities taking place. In contrast, human P2P systems, such as those in peer production, are based on holoptism, the capacity to know both the vertical aim and the horizontal activities of the community; furthermore, the agents in question have subjective intentions.

Reductionist methodologies can of course be useful, as long as we keep in mind their limitations and how they in fact reduce the complexity of reality through unrealistic assumptions, such as that agents have no intentions or inner life, or that these do not matter.

1 Comment

  1. Sam Rose

    Michel, I see what you mean about the capacity and dimensions of human understanding. You definitely can’t make a 1-to-1 comparison with agent-based models.

    Yet, there is a reason why patterns that emerge from simple rules in agent-based models are often strikingly similar to certain macro-scale behaviors of humans. It is because, even though we have the capacity to operate and understand holoptically, there are many times when we don’t.

    Instead, in those situations and problems, we humans often just follow simple rules, and we are not constantly querying all of the dimensions of our understanding. Think of Malcolm Gladwell’s focus on split-second decisions in his book “Blink”. Stigmergy can happen even with humans dealing with very complex, multi-dimensional understandings. As time goes by, and as cognitive complexity evolves and expands, humans actually figure out “simple rules” ways to incorporate more complex factors into their decision making and problem solving. In our human systems, we create “simple rules” that are reductions of ways to interact with complex environments and mediums. Because we as humans are so reductionist in nature, and very often rigidly structural in our design, our behavioral outcomes as a macro-system can often be simulated with remarkable accuracy by even very simple agent-based modeling. This is in part because a large part of the potential of human cognitive capacity does not often make its way into human governing and design systems. This is why we are struggling as a species with many of the problems we struggle with today. Axelrod noted that the only way to get agent-based models to “win” a tragedy-of-the-commons simulation (to not die off and destroy the commons) was to make them “aware” of the game they were playing.
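    That last point can be sketched as a toy commons model (the parameters and the “aware vs. unaware” rule here are my own illustrative choices, not Axelrod’s actual simulation): a shared stock regrows logistically, “unaware” agents each take a fixed amount regardless of the stock, and “aware” agents collectively cap their harvest at the regrowth rate.

    ```python
    def run_commons(aware, steps=200, n_agents=10, greed=5.0,
                    capacity=100.0, regrow=0.25):
        """Toy commons: the stock regrows logistically; agents harvest each step.

        'Aware' agents collectively cap their harvest at the regrowth,
        keeping the stock constant; 'unaware' agents each take `greed`
        units regardless. (Illustrative parameters, not Axelrod's model.)
        """
        stock = capacity / 2                   # start at the maximum-growth point
        for _ in range(steps):
            growth = regrow * stock * (1 - stock / capacity)
            if aware:
                harvest = min(growth, stock)   # take no more than the regrowth
            else:
                harvest = min(n_agents * greed, stock)
            stock += growth - harvest
            if stock < 0.01:
                return 0.0                     # the commons has collapsed
        return stock

    print("unaware agents leave:", run_commons(False))  # collapses to 0.0
    print("aware agents leave:  ", run_commons(True))   # holds steady at 50.0
    ```

    The unaware population harvests faster than the stock can regrow and drives it to zero; the aware population, simply by knowing the regrowth rate of the game it is playing, sustains the same stock indefinitely.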

    How does this translate to global human systems? Are enough humans “aware” of the “game” they are playing, to “win” the real tragedy of the commons?
