Posted by Richard B. Hoppe on May 4, 2006 07:40 PM

The evolution of cooperation has long been a vexing problem in biology. In the 1960s and later, a number of proposals were offered to account for various forms of cooperation, including group selection, kin selection, and reciprocal altruism. Both kin selection and reciprocal altruism have some biological data to which to appeal. In The Selfish Gene Dawkins argued that cooperative behavior could emerge as ‘selfish’ genes evolved in the context of other genes (indeed, he’s said that the book could have been titled The Cooperative Gene with no change in content) and to the extent that cooperation is an effective strategy for gene vehicles (organisms) to increase reproductive success, but that was largely a formal argument rather than an empirical one. And group selection (which Dawkins emphatically rejects) is, in my view at least, still on shaky empirical ground. (Apologies to Steve Rissing, a friend and Project Steve Steve with whom I argue about that.)

A difficulty of doing research on cooperation is the same difficulty that plagues much research on other complex evolutionary phenomena, namely time: interesting multi-celled animals have (relatively) long lifetimes, and following a population for many generations is impossible for a single researcher.

Enter computer models. I will not here rehearse the history of computer modeling of evolutionary processes, since I’ve previously touched on it here, here and here on PT.

Of present interest is a study of the evolution of cooperation in a computer model of evolution. Prior work has shown that there are conditions in which several kinds of ‘strategies’ for interactions among artificial agents can evolve. Robert Axelrod, for example, has done a slew of work on that topic. Game theory informs much of that research, and has been useful in predicting the occurrence of certain kinds of strategies in multi-agent contexts.

A new study by Mikhail Burtsev & Peter Turchin (Nature, 440:1041-1044, April 2006) provides a good deal more insight into how cooperative strategies can evolve.

More below the fold.

Burtsev & Turchin employed an agent-based model with artificial agents – what I will hereinafter call “critters” – in which a single-layer neural net connects inputs (receptors) to outputs (effectors). Receptors carry information about the external state of the critter’s world and about its internal state. The net’s variable (and mutable) weights determine which inputs drive which outputs, producing behaviors.
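The receptor-to-effector architecture can be sketched as a single weight matrix. A minimal sketch, assuming placeholder input and action counts and a winner-take-all (argmax) action-selection rule – my assumptions for illustration, not necessarily the paper’s exact scheme:

```python
import numpy as np

# Hypothetical sizes for illustration; the paper's actual counts differ
# (its agents sense food, other critters, markers, and internal resource).
N_INPUTS = 8
N_ACTIONS = 4

class Critter:
    """One agent: a single-layer neural net maps receptor values to effectors."""

    def __init__(self, weights=None):
        # Weight matrix connecting every receptor to every effector;
        # these weights are what mutation acts on across generations.
        self.weights = np.zeros((N_INPUTS, N_ACTIONS)) if weights is None else weights

    def act(self, inputs):
        # Each effector's activation is a weighted sum of receptor values;
        # the critter performs the most strongly activated action.
        activations = inputs @ self.weights
        return int(np.argmax(activations))
```

Because behavior is just a weighted sum per effector, rewiring a single weight can attach an existing action to a new stimulus – which is all the evolutionary procedure described below needs.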

The critters’ world is a 2-D 30x30 grid of cells with the edges connected to form a torus. Each cell contains a resource bundle and some number (which may be zero) of critters. There is no limit to the number of critters that can occupy a given cell; food, not space, is the limiting resource in this model. Each cell receives a fixed amount of food at regular intervals, the replenishment and allocation of food being non-contingent on the critters’ behavior. Critters have an orientation in a cell – there is a ‘forward’, ‘left’, ‘right’, and ‘back’. A critter can sense whether there are other critters or food in its own cell, and can sense the contents (other critters and food) of the immediately forward cell and the cells to the immediate left and right. It can’t sense behind it; its ‘field of view’ is limited. Critters can consume the food, increasing their internal resource value. In addition, a critter can sense the level of its internal resource: in effect, it can tell when it’s hungry.
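The toroidal geometry and the limited field of view are easy to make concrete. A sketch, assuming a particular encoding of orientation (the four compass headings, an arbitrary choice of mine):

```python
SIZE = 30  # the 30x30 grid from the paper

# Orientation encoded as a heading vector: 0=N, 1=E, 2=S, 3=W (encoding is ours).
HEADINGS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def wrap(r, c):
    """Edges are joined, so coordinates wrap around: the grid is a torus."""
    return r % SIZE, c % SIZE

def field_of_view(r, c, facing):
    """Cells a critter can sense: its own, plus forward, left, and right.
    The cell behind it is invisible to it."""
    fr, fc = HEADINGS[facing]
    lr, lc = HEADINGS[(facing - 1) % 4]  # left of the current heading
    rr, rc = HEADINGS[(facing + 1) % 4]  # right of the current heading
    return {
        "here":    wrap(r, c),
        "forward": wrap(r + fr, c + fc),
        "left":    wrap(r + lr, c + lc),
        "right":   wrap(r + rr, c + rc),
    }
```

A critter at (0, 0) facing ‘north’ wraps around: its forward cell is (29, 0), and its left cell is (0, 29) – there are no edges for doves to be cornered against.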

Each critter carries a “marker” – a heritable externally ‘visible’ 10-digit integer string that identifies its lineage – a new critter’s marker is inherited from its parent when the parent divides. Thus there is information in the environment that distinguishes between kin and non-kin. A critter’s sensory system can calculate the Euclidean distance between its own marker and that of other critters in its field of view.
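The kin-discrimination computation is just Euclidean distance over the marker digits, treating the 10-digit string as a point in 10-dimensional space. A minimal sketch (the example marker values are made up):

```python
import math

MARKER_LEN = 10  # markers are heritable 10-digit integer strings

def marker_distance(a, b):
    """Euclidean distance between two markers, treated as 10-dimensional
    points. Small distance suggests kin; large distance, out-group."""
    assert len(a) == len(b) == MARKER_LEN
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

parent = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
child = list(parent)  # the marker is inherited unchanged at division
stranger = [9, 9, 0, 0, 1, 2, 8, 7, 3, 4]
```

Since a newly divided critter copies its parent’s marker exactly, parent-offspring distance is zero, and lineages stay tightly clustered in marker space relative to unrelated critters.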

Initially the critters have just three “pre-wired” actions available to them: move into the forward cell if ‘food’ is there; eat if ‘food’ is in the current cell; and divide otherwise. Every action (except eating) depletes the critter’s internal resource, which must be replenished by eating. When a division occurs, the offspring is placed in the same cell as the parent and ‘inherits’ half of the parent’s internal resource along with its marker. All other weights in the receptor-action matrix are initially set to zero.

In the evolutionary procedure of the study, mutations can occur in the weights of the single-layer neural net that connects receptors to effectors during division, and thus receptors can be connected to actions in new ways and new combinations, and previous weights can be altered. As a result, it’s possible for previously innocuous stimuli to come to induce behaviors.
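The division-with-mutation step can be sketched as follows; the mutation rate and perturbation size here are hypothetical placeholders, not the paper’s parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical mutation parameters for illustration only.
MUTATION_RATE = 0.05    # per-weight probability of mutating at division
MUTATION_SCALE = 0.1    # standard deviation of a weight perturbation

def divide(parent_weights, parent_resource, parent_marker):
    """Division: the offspring inherits the parent's marker, half its internal
    resource, and a mutated copy of its receptor-to-effector weight matrix."""
    child_weights = parent_weights.copy()
    # Each weight mutates independently with small probability, so receptors
    # can become connected to actions in new ways and new combinations.
    mask = rng.random(child_weights.shape) < MUTATION_RATE
    child_weights[mask] += rng.normal(0.0, MUTATION_SCALE, size=int(mask.sum()))
    return child_weights, parent_resource / 2.0, list(parent_marker)
```

Because mutation touches only the weights, a stimulus that was previously disconnected from any effector (weight zero) can acquire a nonzero connection in an offspring – which is exactly how previously innocuous stimuli come to induce behaviors.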

The study had two main conditions: detectable markers vs. undetectable markers. No previous work with agent-based simulations of which I’m aware has employed critters carrying heritable phenotypic markers that other agents can perceive and compare with their own, though there may be some such out there somewhere. In that history of research where agent phenotypes are indistinguishable, three main ‘strategies’ for interactions among agents recur: hawk, dove, and bourgeois. In the hawk strategy, agents cross cell boundaries to attack other agents, stealing their victims’ resources. Doves never attack and attempt to escape from other agents when they ‘see’ them. A bourgeois agent stays put in its home cell, and attacks invaders of that cell while ignoring critters in neighboring cells.

In Burtsev & Turchin’s no-marker condition, those three strategies did indeed emerge: the population evolved critters with all three strategies. Whether the bourgeois strategy evolved depended on the food supply to cells; below some critical value, too low to support a sedentary critter, only hawk and dove evolved. When the food supply was sufficient, the bourgeois strategy came to dominate the population.

In the second and most interesting experimental condition of the Burtsev & Turchin study, marker recognition was turned on, so the potential for discriminating similar-to-self from dissimilar agents was available. In that condition, three strategies not heretofore seen evolved: cooperative dove, starling and raven.

Cooperative doves ignore out-group critters bearing markers dissimilar to their own, but ‘cooperate’ with in-group critters by leaving cells with other doves bearing similar markers to avoid competing with them for food. Ravens similarly leave cells with similar in-group critters, but they also attack dissimilar critters when they are encountered. Starlings stay at home with in-group critters, but as a group attack interlopers much as real starlings mob an invading hawk or owl.

That is in itself very interesting, but also interesting is the interaction of the evolution of particular strategies with the level of food resources – cell carrying capacity. As noted above, in the no-marker condition, when the carrying capacity of cells is low, the bourgeois strategy does not emerge because the food supply of a given cell is insufficient to support a sedentary critter producing offspring in the same cell. Similarly, in the condition with phenotype markers available but insufficient per-cell resources, the starling strategy is impossible because cells can’t support multiple critters. But the raven strategy is possible in that situation. If the carrying capacity of cells is sufficient to support multiple critters per cell, then the starling strategy can and does arise.

Burtsev & Turchin mention that group predation – essentially hunting as packs, the “wolf” strategy – did not arise, most likely because the critters lacked effectors that allowed traveling in groups. They plan further research to explore that issue.

To finish I’ll quote Burtsev & Turchin’s concluding paragraph:

In conclusion, our study shows that within the artificial evolution framework it is possible to model not only how one strategy displaces another (or not), but the very process by which new strategies emerge out of a very large space of possibilities. Our model did not endow agents with a set of preconceived strategies – all that we assumed was that agents have a set of elementary sensory inputs and a set of actions. The selection of appropriate connections between inputs and actions was moulded by the process of evolution. It is notable that the agents in our simulations evolved many of the strategies that were postulated by previous researchers. Thus, in the absence of phenotypic markers, three distinct strategies emerged corresponding to the dove, the hawk and the bourgeois. This shows that our results are not in opposition to game theory, but represent an extension of previous approaches. In the presence of markers, the evolution resulted in some predictable modifications of these basic strategies, but also in the emergence of a new one. Cooperative doves avoided competition with in-group members, whereas cooperative hawks – ‘ravens’ – avoided attack on phenotypically similar agents. The new strategy was the starlings, who lived in groups and defended territory cooperatively against predation. (Bolding added)

The bolded sentences are important. There was no front-loading of strategies from which critters could choose. It was not necessary to surreptitiously smuggle complex behaviors into the model’s code. ID creationists kick and scream about that, but it isn’t necessary. Evolutionary processes can produce complex behaviors, as they produce complex structures, by modifying and elaborating very minimal initial conditions. Complexity ain’t hard at all to evolve, and novelty emerges as a natural consequence of evolution.

RBH