Nikoleta E. Glynatsi
Max Planck Research Group Dynamics of Social Behavior

Talk: Limited information and the effects on the evolution of cooperation

Abstract: Evolutionary game theory is an important theoretical framework for exploring cooperation and competition in evolving populations. Respective models often assume that successful behaviors are imitated more often and hence spread within the population. However, these models also make strong assumptions about how individuals remember and process information. For example, when strategies are updated through social learning, it is commonly assumed that individuals compare their average payoffs, that imitation is perfect, and that individuals copy each other's strategies faithfully. In many applications, however, individuals have access only to limited information. In this talk, I will present results on relaxing these assumptions and assessing their effect on the evolution of cooperation.
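The social-learning update described in the abstract is often modeled with a pairwise-comparison (Fermi) rule, where the probability of imitating another individual grows with the payoff difference. The sketch below is illustrative only and not taken from the talk; the selection-strength parameter `beta` and the copying-error rate `epsilon` are assumptions chosen to show how perfect imitation can be relaxed.

```python
import math
import random

def fermi_imitation_prob(payoff_self, payoff_other, beta=1.0):
    """Probability of copying the other individual's strategy.

    Standard pairwise-comparison (Fermi) rule: the probability increases
    with the payoff difference; beta controls selection strength.
    """
    return 1.0 / (1.0 + math.exp(-beta * (payoff_other - payoff_self)))

def noisy_imitation(own_strategy, other_strategy, payoff_self, payoff_other,
                    beta=1.0, epsilon=0.0, rng=random):
    """One imitation step with imperfect copying.

    With probability given by the Fermi rule the focal individual adopts
    the other's strategy, but with (hypothetical) error rate epsilon the
    copied strategy is flipped -- one simple way to relax the assumption
    that individuals copy each other faithfully.
    """
    if rng.random() < fermi_imitation_prob(payoff_self, payoff_other, beta):
        adopted = other_strategy
        if rng.random() < epsilon:  # copying error
            adopted = 1 - adopted   # strategies encoded as 0/1
        return adopted
    return own_strategy
```

With equal payoffs the imitation probability is exactly 1/2, and a large payoff advantage makes imitation nearly certain; setting `epsilon > 0` introduces the kind of imperfect copying the talk examines.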
Simon Powers
Division of Computing Science and Mathematics, University of Stirling

Talk: What's the point of trust? Investigating the adaptive benefits of trust using evolutionary game theory

Abstract: Why would one agent trust another? Social life is, and always has been, full of opportunities for one agent to exploit another. Examples abound, from hunter-gatherers shirking in communal hunting, through to used-car dealers misrepresenting the quality of their wares. By trusting another, an agent exposes itself to risk -- the risk that the other agent might not act in an honest and reliable way. Why, then, would trust behaviours evolve? A common explanation from social science is that trust acts as a cognitive shortcut. In other words, it reduces the complexity of decision-making for an agent in a social situation. But to determine whether this mechanism could provide a selection pressure for trust to evolve in social situations, we need to formalise it using evolutionary game theory. In this talk, I will present analyses showing conditions under which trust behaviours can evolve in a range of repeated games, from Prisoner's Dilemmas to Stag Hunts, and discuss the implications of this for understanding trust between people and artificial intelligence.
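One of the games the abstract mentions, the Stag Hunt, captures the risk of trusting concisely: hunting stag pays off only if the partner also does, while hunting hare is safe but less rewarding. The payoff numbers below are illustrative placeholders, not values from the talk; they are chosen only to satisfy the standard Stag Hunt ordering (mutual stag best, unilateral stag worst).

```python
# Hypothetical Stag Hunt payoffs; entries are (row player, column player).
STAG, HARE = 0, 1
PAYOFFS = {
    (STAG, STAG): (4, 4),  # mutual trust: both hunt stag
    (STAG, HARE): (0, 3),  # the trusting agent is left exposed
    (HARE, STAG): (3, 0),
    (HARE, HARE): (3, 3),  # safe but inefficient outcome
}

def best_response(opponent_action):
    """Row player's payoff-maximising action against a known opponent action."""
    return max((STAG, HARE), key=lambda a: PAYOFFS[(a, opponent_action)][0])
```

Both (STAG, STAG) and (HARE, HARE) are equilibria here: trusting is the best reply to a trusting partner, while distrust is the best reply to distrust, which is exactly why repeated play and selection pressure matter for whether trust can take hold.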