Comments on: Multi agents systems, artificial societies, and p2p
https://blog.p2pfoundation.net/multi-agents-systems-articificial-societies-and-p2p/

By: Sam Rose (Sat, 05 May 2007 16:28:12 +0000)

Michel, I see what you mean about the capacity and dimensions of human understanding. You definitely can't make a 1-to-1 comparison with agent-based models.

Yet there is a reason why patterns that emerge from simple rules in agent-based models are often strikingly similar to certain macro-scale human behaviors: even though we have the capacity to operate and understand holoptically, there are many times when we don't.

Instead, in those situations and problems, we humans often just follow simple rules; we are not constantly querying all of the dimensions of our understanding. Think of Malcolm Gladwell's focus on split-second decisions in his book "Blink". Stigmergy can happen even among humans dealing with very complex, multi-dimensional understandings. As time goes by, and as cognitive complexity evolves and expands, humans actually figure out "simple rules" ways to incorporate more complex factors into their decision making and problem solving. In our human systems, we create "simple rules" that are reductions of the ways we interact with complex environments and mediums. Because we as humans are so reductionist in nature, and very often rigidly structural in our designs, our behavioral outcomes as a macro-system can often be simulated with remarkable accuracy by even very simple agent-based modeling. This is partly because much of the potential of human cognitive capacity never makes its way into our governing and design systems, which is why we are struggling as a species with many of the problems we struggle with today. Axelrod noted that the only way to get agent-based models to "win" a tragedy-of-the-commons simulation (to not die off and destroy the commons) was to make them "aware" of the game they were playing.
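That observation can be illustrated with a toy commons simulation. This is a minimal sketch, not Axelrod's actual model: the rules, rates, and numbers below are all illustrative assumptions. "Aware" agents restrain their harvest to a share of the resource's regrowth; myopic agents grab what they can, ignoring the state of the pool.

```python
import random

def logistic_regrowth(stock, rate=0.25, capacity=100.0):
    """Logistic regrowth of a renewable resource (illustrative parameters)."""
    return rate * stock * (1.0 - stock / capacity)

def simulate(aware, steps=100, n_agents=10, stock=50.0, seed=0):
    """Run a toy commons: each step, agents harvest from a shared stock,
    then the stock regrows logistically.

    'Aware' agents split 80% of the expected regrowth among themselves
    (they know the game they are playing); myopic agents each grab a
    random 1-3 units regardless of the pool's state.
    """
    rng = random.Random(seed)
    for _ in range(steps):
        quota = 0.8 * logistic_regrowth(stock) / n_agents  # sustainable share
        for _ in range(n_agents):
            take = quota if aware else rng.uniform(1.0, 3.0)
            stock -= min(stock, take)  # can't take more than remains
        stock += logistic_regrowth(stock)
    return stock

# Qualitatively: the aware population sustains the stock,
# while the myopic one collapses the commons.
print(simulate(aware=True), simulate(aware=False))
```

The only difference between the two runs is the harvesting rule, yet the outcomes diverge completely: simple rules that ignore the shared resource destroy it, while rules informed by "awareness" of the system keep it alive.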

How does this translate to global human systems? Are enough humans “aware” of the “game” they are playing, to “win” the real tragedy of the commons?
