Learning Conventions in Multiagent Stochastic Domains using Likelihood Estimates
Fully cooperative multiagent systems---those in which agents share a joint utility model---are of special interest in AI. A key problem is that of ensuring that the actions of individual agents are coordinated, especially in settings where the agents are autonomous decision makers. We investigate approaches to learning coordinated strategies in stochastic domains where an agent's actions are not directly observable by others. Much recent work in game theory has adopted a Bayesian learning perspective on the more general problem of equilibrium selection, but tends to assume that actions can be observed. We discuss the special problems that arise when actions are not observable, including effects on rates of convergence, and the effect of action failure probabilities and asymmetries. We also use likelihood estimates as a means of generalizing fictitious play learning models to our setting. Finally, we propose the use of maximum likelihood as a means of removing strategies from consideration, with the aim of convergence to a conventional equilibrium, at which point learning and deliberation can cease.
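To make the baseline concrete, the following is a minimal sketch of *classical* fictitious play---the model the abstract says is generalized via likelihood estimates---in a simple 2x2 coordination game. It is an illustration of the standard technique only, not the paper's algorithm: the payoff matrix, action space, and round count are assumptions, and, unlike the paper's setting, each agent here directly observes the other's action history.

```python
import random

# Hedged sketch: classical fictitious play in a 2x2 coordination game.
# Payoffs (assumed for illustration): both agents earn 1 if they choose
# the same action, 0 otherwise. Each agent best-responds to the empirical
# frequency of the other's past actions -- this observability assumption
# is exactly what the paper relaxes.
PAYOFF = [[1.0, 0.0], [0.0, 1.0]]

def best_response(opp_counts):
    """Best-respond to the empirical distribution of the opponent's actions."""
    total = sum(opp_counts)
    if total == 0:
        return random.randrange(2)  # no history yet: pick arbitrarily
    probs = [c / total for c in opp_counts]
    values = [sum(PAYOFF[a][b] * probs[b] for b in range(2)) for a in range(2)]
    return max(range(2), key=lambda a: values[a])  # ties go to action 0

def fictitious_play(rounds=200, seed=0):
    random.seed(seed)
    counts = [[0, 0], [0, 0]]  # counts[i][a]: times agent i played action a
    for _ in range(rounds):
        a0 = best_response(counts[1])  # agent 0 sees agent 1's history
        a1 = best_response(counts[0])  # agent 1 sees agent 0's history
        counts[0][a0] += 1
        counts[1][a1] += 1
    return counts

counts = fictitious_play()
# In this game the agents settle on a single shared action (a convention):
# each agent's most-played action ends up being the same.
```

In this coordination game the empirical frequencies lock in after at most one round of miscoordination, so both agents converge to the same action. The paper's setting is harder precisely because `counts[1]` is not available to agent 0: beliefs about the other agent's strategy must instead be inferred from likelihoods over observed outcomes.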
Keywords: Multiagent systems, game theory, coordination, conventions, equilibrium
PS Link: http://www.cs.ubc.ca/spider/cebly/Papers/uai96coord.ps
PDF Link: /papers/96/p106-boutilier.pdf
AUTHOR = "Craig Boutilier",
TITLE = "Learning Conventions in Multiagent Stochastic Domains using Likelihood Estimates",
BOOKTITLE = "Proceedings of the Twelfth Annual Conference on Uncertainty in Artificial Intelligence (UAI-96)",
PUBLISHER = "Morgan Kaufmann",
ADDRESS = "San Francisco, CA",
YEAR = "1996",
PAGES = "106--114"