Uncertainty in Artificial Intelligence
Model-Based Bayesian Exploration
Richard Dearden, Nir Friedman, David Andre
Abstract:
Reinforcement learning systems are often concerned with balancing exploration of untested actions against exploitation of actions that are known to be good. The benefit of exploration can be estimated using the classical notion of Value of Information --- the expected improvement in future decision quality arising from the information acquired by exploration. Estimating this quantity requires an assessment of the agent's uncertainty about its current value estimates for states. In this paper we investigate ways of representing and reasoning about this uncertainty in algorithms where the system attempts to learn a model of its environment. We explicitly represent uncertainty about the parameters of the model and build probability distributions over Q-values based on these. These distributions are used to compute a myopic approximation to the value of information for each action and hence to select the action that best balances exploration and exploitation.
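The abstract compresses the mechanism into three steps: maintain a posterior over the model parameters, turn it into a distribution over Q-values, and pick actions by posterior-mean value plus a myopic value-of-information bonus. The Python sketch below illustrates a sampling-based reading of that idea under strong simplifying assumptions: a small discrete MDP with known rewards, independent Dirichlet posteriors over transition rows, and Q-value distributions approximated by solving a handful of sampled MDPs. The problem sizes, priors, and the naive resample-and-solve loop are illustrative choices, not the paper's exact algorithm, which develops more careful estimators of these distributions.

import numpy as np

# Toy problem size; a 5-state, 2-action MDP with known rewards and uniform
# Dirichlet priors is an illustrative assumption, not the paper's setup.
n_states, n_actions, gamma = 5, 2, 0.95
rng = np.random.default_rng(0)
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))  # rewards assumed known

# Dirichlet posterior over each transition row P(. | s, a); counts of 1 = uniform prior.
counts = np.ones((n_states, n_actions, n_states))

def sample_q_values(k=30, iters=200):
    """Draw k MDPs from the Dirichlet posterior and solve each by value
    iteration, giving k samples from the induced distribution over Q-values."""
    q_samples = np.empty((k, n_states, n_actions))
    for i in range(k):
        P = np.empty_like(counts)
        for s in range(n_states):
            for a in range(n_actions):
                P[s, a] = rng.dirichlet(counts[s, a])  # one plausible model
        Q = np.zeros((n_states, n_actions))
        for _ in range(iters):                         # value iteration
            Q = R + gamma * P @ Q.max(axis=1)
        q_samples[i] = Q
    return q_samples

def select_action(s, q_samples):
    """Myopic value-of-information rule: choose the action maximizing
    posterior-mean Q plus the expected gain from learning its true value."""
    mu = q_samples[:, s, :].mean(axis=0)
    a1, a2 = np.argsort(mu)[::-1][:2]      # best and second-best by mean
    voi = np.empty(n_actions)
    for a in range(n_actions):
        qa = q_samples[:, s, a]
        if a == a1:
            # knowing a1's true value helps only if it falls below the runner-up
            voi[a] = np.maximum(mu[a2] - qa, 0.0).mean()
        else:
            # knowing a's true value helps only if it beats the current best
            voi[a] = np.maximum(qa - mu[a1], 0.0).mean()
    return int(np.argmax(mu + voi))

# Interaction with a hidden "true" MDP; each observed transition updates the posterior.
true_P = np.array([[rng.dirichlet(np.ones(n_states)) for _ in range(n_actions)]
                   for _ in range(n_states)])
s = 0
for t in range(50):
    a = select_action(s, sample_q_values())
    s_next = rng.choice(n_states, p=true_P[s, a])
    counts[s, a, s_next] += 1.0            # conjugate Dirichlet update
    s = s_next

The two VOI terms mirror the myopic gain described in the abstract: the currently best action is worth probing only insofar as its true value might fall below the runner-up, while any other action is worth probing only insofar as its true value might exceed the current best.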
Keywords: Reinforcement Learning, Value of Information, Exploration
Pages: 150-159
PS Link: http://robotics.Stanford.EDU/people/nir/Papers/DFA1.ps
PDF Link: /papers/99/p150-dearden.pdf
BibTex:
@INPROCEEDINGS{Dearden99,
AUTHOR = "Richard Dearden and Nir Friedman and David Andre",
TITLE = "Model-Based Bayesian Exploration",
BOOKTITLE = "Proceedings of the Fifteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-99)",
PUBLISHER = "Morgan Kaufmann",
ADDRESS = "San Francisco, CA",
YEAR = "1999",
PAGES = "150--159"
}

