Uncertainty in Artificial Intelligence
Robust Learning Equilibrium
Itai Ashlagi, Dov Monderer, Moshe Tennenholtz
We introduce robust learning equilibrium. The idea of learning equilibrium is that learning algorithms in multi-agent systems should themselves be in equilibrium rather than merely lead to equilibrium. That is, a learning equilibrium is immune to strategic deviations: every agent is better off using its prescribed learning algorithm, provided all other agents follow their algorithms, regardless of the unknown state of the environment. However, a learning equilibrium may not be immune to non-strategic mistakes. For example, if the monitoring devices fail for a certain period of time (e.g., the correct input does not reach the agents), then it may no longer be in equilibrium to follow the algorithm after the devices are repaired. A robust learning equilibrium is also immune to such non-strategic mistakes. The existence of (robust) learning equilibrium is especially challenging when the monitoring devices are 'weak', that is, when the information available to each agent at each stage is limited. We initiate a study of robust learning equilibrium with a general monitoring structure and apply it to the context of auctions. We prove the existence of robust learning equilibrium in repeated first-price auctions, and discuss its properties.
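To make the repeated first-price setting concrete, here is a minimal simulation sketch. It is not the paper's learning algorithm: the bid-adjustment rule below (the winner shades its bid down, losers drift back toward their valuations) and all parameter values are illustrative assumptions.

```python
import random

def repeated_first_price_auction(valuations, rounds=100, seed=0):
    """Simulate a repeated first-price sealed-bid auction.

    Illustrative learning rule (an assumption, not the paper's algorithm):
    agents start by bidding truthfully; the round's winner lowers its bid
    slightly, while losers raise their bids back toward their valuations.
    Returns the (winner, price) history and the final bid profile.
    """
    rng = random.Random(seed)
    bids = list(valuations)  # initial bids: truthful
    history = []
    for _ in range(rounds):
        # Highest bid wins; ties broken uniformly at random.
        winner = max(range(len(bids)), key=lambda i: (bids[i], rng.random()))
        price = bids[winner]  # first price: winner pays its own bid
        history.append((winner, price))
        # Winner shades its bid down; losers move up toward valuation.
        bids[winner] = max(0.0, bids[winner] - 0.01)
        for i in range(len(bids)):
            if i != winner:
                bids[i] = min(valuations[i], bids[i] + 0.005)
    return history, bids
```

With two agents whose valuations are 1.0 and 0.8, the stronger agent's bid drifts down until it meets the second-highest valuation, after which the two bids stay close: the familiar first-price intuition that the winning bid is driven toward the runner-up's value.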
Pages: 7-14
PDF Link: /papers/06/p7-ashlagi.pdf
@INPROCEEDINGS{ashlagi06robust,
AUTHOR = "Itai Ashlagi and Dov Monderer and Moshe Tennenholtz",
TITLE = "Robust Learning Equilibrium",
BOOKTITLE = "Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence (UAI-06)",
PUBLISHER = "AUAI Press",
ADDRESS = "Arlington, Virginia",
YEAR = "2006",
PAGES = "7--14"
}
