Uncertainty in Artificial Intelligence
Representation Policy Iteration
Sridhar Mahadevan
Abstract:
This paper addresses a fundamental issue central to approximation methods for solving large Markov decision processes (MDPs): how to automatically learn the underlying representation for value function approximation? A novel, theoretically rigorous framework is proposed that automatically generates geometrically customized orthonormal sets of basis functions, which can be used with any approximate MDP solver, such as least-squares policy iteration (LSPI). The key innovation is a coordinate-free representation of value functions, using the theory of smooth functions on a Riemannian manifold. Hodge theory yields a constructive method for generating basis functions for approximating value functions, based on the eigenfunctions of the self-adjoint (Laplace-Beltrami) operator on manifolds. In effect, this approach performs a global Fourier analysis on the state-space graph to approximate value functions, where the basis functions reflect the large-scale topology of the underlying state space. A new class of algorithms called Representation Policy Iteration (RPI) is presented that automatically learns both basis functions and approximately optimal policies. Illustrative experiments compare the performance of RPI with that of LSPI using two hand-coded basis functions (RBF and polynomial state encodings).
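For concreteness, here is a minimal sketch (not the paper's implementation) of the basis-construction idea the abstract describes: build a graph Laplacian from the state-space graph and keep its smoothest eigenvectors as orthonormal basis functions, which an approximate solver such as LSPI can then use as features. The function name `laplacian_basis`, the use of the normalized Laplacian, and the chain-MDP example are illustrative assumptions.

```python
import numpy as np

def laplacian_basis(adjacency, k):
    """Return the k smoothest eigenvectors of the normalized graph Laplacian.

    adjacency: (n, n) symmetric adjacency matrix of the state-space graph.
    k: number of basis functions to keep.
    """
    degrees = adjacency.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(degrees, 1e-12)))
    # Normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    laplacian = np.eye(len(adjacency)) - d_inv_sqrt @ adjacency @ d_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # eigenvalues in ascending order
    # Eigenvectors with the smallest eigenvalues vary slowly over the graph
    # and capture its large-scale topology; use them as feature columns.
    return eigvecs[:, :k]

# Example (assumed for illustration): basis functions for a 10-state chain MDP
n = 10
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
phi = laplacian_basis(A, k=4)  # phi[s] is the feature vector for state s
```

In this sketch, the feature matrix `phi` would replace a hand-coded encoding (such as RBF or polynomial features) in the policy-evaluation step of an approximate solver.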
Keywords:
Pages: 372-379
PS Link:
PDF Link: /papers/05/p372-mahadevan.pdf
BibTex:
@INPROCEEDINGS{Mahadevan05,
AUTHOR = "Sridhar Mahadevan ",
TITLE = "Representation Policy Iteration",
BOOKTITLE = "Proceedings of the Twenty-First Conference Annual Conference on Uncertainty in Artificial Intelligence (UAI-05)",
PUBLISHER = "AUAI Press",
ADDRESS = "Arlington, Virginia",
YEAR = "2005",
PAGES = "372--379"
}

