Uncertainty in Artificial Intelligence
Policy Improvement for POMDPs Using Normalized Importance Sampling
Christian Shelton
Abstract:
We present a new method for estimating the expected return of a POMDP from experience. The method does not assume any knowledge of the POMDP and allows the experience to be gathered from an arbitrary sequence of policies. The return is estimated for any new policy of the POMDP. We motivate the estimator from function-approximation and importance-sampling points of view and derive its theoretical properties. Although the estimator is biased, it has low variance, and the bias is often irrelevant when the estimator is used for pair-wise comparisons. We conclude by extending the estimator to policies with memory and compare its performance in a greedy search algorithm to REINFORCE algorithms, showing an order-of-magnitude reduction in the number of trials required.
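The core idea described in the abstract, a normalized (weighted) importance-sampling estimate of a new policy's expected return from trajectories gathered under other policies, can be illustrated roughly as follows. This is a minimal sketch, not the paper's implementation: it assumes memoryless (reactive) policies given as observation-to-action probability tables, and the trajectory format and the function name normalized_is_return are made up here for concreteness.

```python
import numpy as np

def normalized_is_return(trajectories, target_policy):
    """Estimate the expected return of `target_policy` from trajectories
    collected under arbitrary behavior policies, using normalized
    (weighted) importance sampling: biased, but typically far lower
    variance than the unnormalized estimator.

    Each trajectory is a dict with:
      'steps'  : list of (observation, action, behavior_prob) tuples,
                 where behavior_prob is the probability the behavior
                 policy assigned to the chosen action at that step
      'return' : total (possibly discounted) return of the trajectory

    target_policy[obs][act] is the target policy's probability of
    choosing `act` after observing `obs` (a reactive policy).
    """
    weights, returns = [], []
    for traj in trajectories:
        w = 1.0
        for obs, act, behavior_prob in traj['steps']:
            # Per-step likelihood ratio between target and behavior policy.
            w *= target_policy[obs][act] / behavior_prob
        weights.append(w)
        returns.append(traj['return'])
    weights = np.asarray(weights)
    returns = np.asarray(returns)
    # Dividing by the sum of the weights (rather than by the number of
    # trajectories) is what makes the estimator biased but low-variance.
    return float(np.dot(weights, returns) / weights.sum())

# Toy usage: two observations, two actions, data from a uniform behavior policy.
target_policy = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
trajectories = [
    {'steps': [(0, 0, 0.5), (1, 1, 0.5)], 'return': 1.0},
    {'steps': [(0, 1, 0.5), (1, 0, 0.5)], 'return': 0.0},
]
print(normalized_is_return(trajectories, target_policy))
```

Because the weights cancel the normalization in expectation only approximately, the estimate is biased; but as the abstract notes, the bias largely cancels when the same data are used to compare two candidate policies pairwise, which is what matters for greedy policy improvement.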
Pages: 496-503
PDF Link: /papers/01/p496-shelton.pdf
BibTeX:
@INPROCEEDINGS{Shelton01,
AUTHOR = "Christian Shelton",
TITLE = "Policy Improvement for POMDPs Using Normalized Importance Sampling",
BOOKTITLE = "Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (UAI-01)",
PUBLISHER = "Morgan Kaufmann",
ADDRESS = "San Francisco, CA",
YEAR = "2001",
PAGES = "496--503"
}

