Uncertainty in Artificial Intelligence
PAC-Bayesian Policy Evaluation for Reinforcement Learning
Mahdi Fard, Joelle Pineau, Csaba Szepesvari
Abstract:
Bayesian priors offer a compact yet general means of incorporating domain knowledge into many learning tasks. The correctness of Bayesian analysis and inference, however, largely depends on the accuracy of these priors. PAC-Bayesian methods overcome this problem by providing bounds that hold regardless of the correctness of the prior distribution. This paper introduces the first PAC-Bayesian bound for the batch reinforcement learning problem with function approximation. We show how this bound can be used to perform model selection in a transfer learning scenario. Our empirical results confirm that PAC-Bayesian policy evaluation is able to leverage prior distributions when they are informative and, unlike standard Bayesian RL approaches, to ignore them when they are misleading.
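For context, the classical PAC-Bayesian theorem (McAllester-style) that this line of work builds on bounds the expected risk under any posterior rho in terms of the empirical risk plus a penalty involving the KL divergence from the prior pi; the guarantee holds with probability at least 1 - delta no matter how the prior was chosen. The paper's own bound adapts this idea to value-function error in batch RL with function approximation; the form below is the standard supervised-learning version, shown only as an illustration.

\[
\mathbb{E}_{h \sim \rho}\big[R(h)\big] \;\le\; \mathbb{E}_{h \sim \rho}\big[\hat{R}_n(h)\big] \;+\; \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
\]

Because the KL term penalizes posteriors that move far from the prior, an informative prior tightens the bound, while a misleading prior only inflates the penalty without invalidating the guarantee; this is the behavior the abstract refers to when it says the method leverages informative priors and ignores misleading ones.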
Pages: 195-202
PDF Link: /papers/11/p195-fard.pdf
BibTeX:
@INPROCEEDINGS{Fard11,
AUTHOR = "Mahdi Fard and Joelle Pineau and Csaba Szepesvari",
TITLE = "PAC-Bayesian Policy Evaluation for Reinforcement Learning",
BOOKTITLE = "Proceedings of the Twenty-Seventh Conference Annual Conference on Uncertainty in Artificial Intelligence (UAI-11)",
PUBLISHER = "AUAI Press",
ADDRESS = "Corvallis, Oregon",
YEAR = "2011",
PAGES = "195--202"
}

