Uncertainty in Artificial Intelligence
A Bayesian Sampling Approach to Exploration in Reinforcement Learning
John Asmuth, Lihong Li, Michael Littman, Ali Nouri, David Wingate
Abstract:
We present a modular approach to reinforcement learning that uses a Bayesian representation of the uncertainty over models. The approach, BOSS (Best of Sampled Set), drives exploration by sampling multiple models from the posterior and selecting actions optimistically. It extends previous work by providing a rule for deciding when to resample and how to combine the models. We show that our algorithm achieves near-optimal reward with high probability with a sample complexity that is low relative to the speed at which the posterior distribution converges during learning. We demonstrate that BOSS performs quite favorably compared to state-of-the-art reinforcement-learning approaches and illustrate its flexibility by pairing it with a non-parametric model that generalizes across states.
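The abstract describes BOSS only at a high level. Below is a minimal, illustrative Python sketch of the sampled-set idea, assuming a tabular MDP whose posterior can be sampled for full models; the names (sample_mdp, value_iteration, the threshold B, and the sample count K) are placeholders for this sketch, not the authors' code, and the resample check is a simplified proxy for the paper's visit-count rule.

import numpy as np

def value_iteration(P, R, gamma=0.95, iters=200):
    """Greedy policy for a tabular MDP with transitions P[s,a,s'] and rewards R[s,a]."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * (P @ V)   # Q[s,a] = R[s,a] + gamma * sum_s' P[s,a,s'] V[s']
        V = Q.max(axis=1)
    return Q.argmax(axis=1)       # greedy action per state

def boss_action(sample_mdp, counts, last_counts, cache, state, B=10, K=5):
    """Best-of-Sampled-Set action selection (illustrative sketch only).

    sample_mdp  : callable drawing one (P, R) model from the current posterior
    counts      : dict (s, a) -> visit count
    last_counts : snapshot of counts at the previous resample
    cache       : dict holding the current merged policy
    B, K        : resample threshold and number of sampled models
    """
    need_resample = "policy" not in cache or any(
        counts[sa] - last_counts.get(sa, 0) >= B for sa in counts)
    if need_resample:
        models = [sample_mdp() for _ in range(K)]
        # Merge the K samples into one hyper-MDP that offers every action of
        # every sampled model; acting greedily in it is optimistic with
        # respect to the sampled set.
        P = np.concatenate([m[0] for m in models], axis=1)   # (S, K*A, S')
        R = np.concatenate([m[1] for m in models], axis=1)   # (S, K*A)
        cache["policy"] = value_iteration(P, R)
        cache["num_actions"] = models[0][1].shape[1]
        last_counts.clear()
        last_counts.update(counts)
    # Merged action index k*A + a corresponds to base action a.
    return int(cache["policy"][state]) % cache["num_actions"]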
Keywords:
Pages: 19-26
PS Link:
PDF Link: /papers/09/p19-asmuth.pdf
BibTeX:
@INPROCEEDINGS{Asmuth09,
AUTHOR = "John Asmuth and Lihong Li and Michael Littman and Ali Nouri and David Wingate",
TITLE = "A Bayesian Sampling Approach to Exploration in Reinforcement Learning",
BOOKTITLE = "Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI-09)",
PUBLISHER = "AUAI Press",
ADDRESS = "Corvallis, Oregon",
YEAR = "2009",
PAGES = "19--26"
}