Uncertainty in Artificial Intelligence
Value Function Approximation in Zero-Sum Markov Games
Michail Lagoudakis, Ron Parr
Abstract:
This paper investigates value function approximation in the context of zero-sum Markov games, which can be viewed as a generalization of the Markov decision process (MDP) framework to the two-agent case. We generalize error bounds from MDPs to Markov games and describe generalizations of reinforcement learning algorithms to Markov games. We present a generalization of the optimal stopping problem to a two-player, simultaneous-move Markov game. For this special problem, we provide stronger bounds and can guarantee convergence for LSTD and temporal-difference learning with linear value function approximation. We demonstrate the viability of value function approximation for Markov games by using the least-squares policy iteration (LSPI) algorithm to learn good policies for a soccer domain and a flow control problem.
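For context, the central change when moving from an MDP to a zero-sum Markov game is that the greedy max over actions in the value backup is replaced by the minimax value of a one-stage matrix game, which can be computed with a linear program. The sketch below is a minimal illustration of that operator, not code from the paper; the function name minimax_value and the use of NumPy/SciPy are illustrative assumptions.

```python
# Minimal sketch (assumed, not from the paper) of the minimax operator
# that replaces the MDP max-backup in zero-sum Markov games: the
# one-stage matrix game at a state is solved as a linear program.
import numpy as np
from scipy.optimize import linprog

def minimax_value(Q):
    """Value and maximin mixed strategy of the zero-sum matrix game Q,
    where Q[a, o] is the agent's payoff for action a vs. opponent action o."""
    n_actions, n_opponent = Q.shape
    # Decision variables: pi (n_actions probabilities) and the game value v.
    # Maximize v  <=>  minimize -v.
    c = np.zeros(n_actions + 1)
    c[-1] = -1.0
    # For every opponent action o:  v - sum_a pi[a] * Q[a, o] <= 0
    A_ub = np.hstack([-Q.T, np.ones((n_opponent, 1))])
    b_ub = np.zeros(n_opponent)
    # The probabilities must sum to 1.
    A_eq = np.hstack([np.ones((1, n_actions)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0.0, None)] * n_actions + [(None, None)]  # v is unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:-1]

# Example: matching pennies has value 0 and a uniform optimal strategy.
v, pi = minimax_value(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```

In a minimax analogue of policy iteration such as LSPI applied to Markov games, an operator of this kind would be applied to the learned Q-values at each state to extract the agent's mixed policy.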
Pages: 283-292
PDF Link: /papers/02/p283-lagoudakis.pdf
BibTeX:
@INPROCEEDINGS{Lagoudakis02,
AUTHOR = "Michail Lagoudakis and Ron Parr",
TITLE = "Value Function Approximation in Zero-Sum Markov Games",
BOOKTITLE = "Proceedings of the Eighteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-02)",
PUBLISHER = "Morgan Kaufmann",
ADDRESS = "San Francisco, CA",
YEAR = "2002",
PAGES = "283--292"
}

