Uncertainty in Artificial Intelligence
The Optimal Reward Baseline for Gradient-Based Reinforcement Learning
Lex Weaver, Nigel Tao
Abstract:
There exist a number of reinforcement learning algorithms which learn by climbing the gradient of expected reward. Their long-run convergence has been proved, even in partially observable environments with non-deterministic actions, and without the need for a system model. However, the variance of the gradient estimator has been found to be a significant practical problem. Recent approaches have discounted future rewards, introducing a bias-variance trade-off into the gradient estimate. We incorporate a reward baseline into the learning system, and show that it affects variance without introducing further bias. In particular, as we approach the zero-bias, high-variance parameterization, the optimal (or variance-minimizing) constant reward baseline is equal to the long-term average expected reward. Modified policy-gradient algorithms are presented, and a number of experiments demonstrate their improvement over previous work.
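To make the idea concrete, here is a minimal sketch (not the authors' exact algorithm) of a REINFORCE-style policy-gradient update on a hypothetical two-action bandit. A constant baseline is maintained as a running average of observed rewards; subtracting it leaves the gradient estimate unbiased while reducing its variance, in the spirit of the result above. All names, reward values, and step sizes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

n_actions = 2
theta = np.zeros(n_actions)          # policy parameters (softmax logits)
true_rewards = np.array([1.0, 1.2])  # hypothetical expected reward per action

def policy(theta):
    """Softmax action probabilities."""
    z = np.exp(theta - theta.max())
    return z / z.sum()

baseline = 0.0            # running estimate of the long-term average reward
alpha, beta = 0.1, 0.05   # step sizes (illustrative values)

for step in range(5000):
    probs = policy(theta)
    a = rng.choice(n_actions, p=probs)
    r = true_rewards[a] + rng.normal(scale=0.5)  # noisy observed reward

    # Score function: gradient of log pi(a) w.r.t. theta for a softmax policy.
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0

    # Baseline-corrected estimate: E[(r - b) * grad log pi] equals the true
    # reward gradient for any constant b, so the baseline introduces no bias;
    # variance is reduced when b tracks the average reward.
    theta += alpha * (r - baseline) * grad_log_pi
    baseline += beta * (r - baseline)  # running average of observed reward

print("learned action probabilities:", policy(theta))
print("baseline (approx. average reward):", baseline)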
Pages: 538-545
PDF Link: /papers/01/p538-weaver.pdf
BibTeX:
@INPROCEEDINGS{Weaver01,
AUTHOR = "Lex Weaver and Nigel Tao",
TITLE = "The Optimal Reward Baseline for Gradient-Based Reinforcement Learning",
BOOKTITLE = "Proceedings of the Seventeenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-01)",
PUBLISHER = "Morgan Kaufmann",
ADDRESS = "San Francisco, CA",
YEAR = "2001",
PAGES = "538--545"
}

