Uncertainty in Artificial Intelligence
Variance-Based Rewards for Approximate Bayesian Reinforcement Learning
Jonathan Sorg, Satinder Singh, Richard Lewis
Abstract:
The explore-exploit dilemma is one of the central challenges in Reinforcement Learning (RL). Bayesian RL solves the dilemma by providing the agent with information in the form of a prior distribution over environments; however, full Bayesian planning is intractable. Planning with the mean MDP is a common myopic approximation of Bayesian planning. We derive a novel reward bonus that is a function of the posterior distribution over environments, which, when added to the reward in planning with the mean MDP, results in an agent that explores efficiently and effectively. Although our method is similar to existing methods when given an uninformative or unstructured prior, unlike existing methods, our method can exploit structured priors. We prove that our method has polynomial sample complexity and empirically demonstrate its advantages in a structured exploration task.
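The abstract describes planning in the mean MDP with a reward bonus derived from the posterior over environments. Below is a minimal sketch of that loop for a tabular MDP with a Dirichlet posterior over transitions; the 1/sqrt(n) bonus form, the scale parameter beta, and the function name are illustrative assumptions, not the specific variance-based bonus derived in the paper.

import numpy as np

def plan_with_variance_bonus(counts, rewards, beta=1.0, gamma=0.95, iters=200):
    """Value-iterate in the mean MDP with a posterior-variance reward bonus.

    counts[s, a, s'] -- Dirichlet pseudo-counts (prior + observed transitions);
                        must be positive everywhere so the posterior is proper.
    rewards[s, a]    -- known (or posterior-mean) rewards.
    beta             -- bonus scale; a hypothetical knob, not from the paper.
    """
    n = counts.sum(axis=2)                 # effective visit counts per (s, a)
    mean_T = counts / n[:, :, None]        # posterior-mean transition model
    # For a Dirichlet posterior, the variance of the mean next-state
    # distribution shrinks as 1/(n + 1), so a 1/sqrt(n) bonus is a natural
    # stand-in for a variance-based bonus (an illustrative choice here).
    r = rewards + beta / np.sqrt(n)

    V = np.zeros(rewards.shape[0])
    for _ in range(iters):
        Q = r + gamma * (mean_T @ V)       # (S, A) action values in mean MDP
        V = Q.max(axis=1)
    return Q.argmax(axis=1), V             # greedy policy and state values

Note that with an uninformative Dirichlet prior this bonus reduces to the familiar count-based exploration form, consistent with the abstract's remark that the method resembles existing approaches under unstructured priors; the paper's contribution is that its posterior-derived bonus can also exploit structured priors.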
Pages: 564-571
PDF Link: /papers/10/p564-sorg.pdf
BibTeX:
@INPROCEEDINGS{Sorg10,
AUTHOR = "Jonathan Sorg and Satinder Singh and Richard Lewis",
TITLE = "Variance-Based Rewards for Approximate Bayesian Reinforcement Learning",
BOOKTITLE = "Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI-10)",
PUBLISHER = "AUAI Press",
ADDRESS = "Corvallis, Oregon",
YEAR = "2010",
PAGES = "564--571"
}

