Uncertainty in Artificial Intelligence
Incremental Model-based Learners With Formal Learning-Time Guarantees
Alexander Strehl, Lihong Li, Michael Littman
Model-based learning algorithms have been shown to use experience efficiently when learning to solve Markov Decision Processes (MDPs) with finite state and action spaces. However, their high computational cost, due to repeatedly solving an internal model, inhibits their use in large-scale problems. We propose a method based on real-time dynamic programming (RTDP) to speed up two model-based algorithms, RMAX and MBIE (model-based interval estimation), yielding algorithms that are computationally much faster while incurring only a small loss relative to existing learning-time bounds. Specifically, our two new learning algorithms, RTDP-RMAX and RTDP-IE, have considerably smaller computational demands than RMAX and MBIE. We develop a general theoretical framework that allows us to prove that both are efficient learners in a PAC (probably approximately correct) sense. We also present an experimental evaluation of these new algorithms that helps quantify the tradeoff between computational and experience demands.
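The core idea — replacing the full re-solution of the internal model with a single real-time backup per step of experience, while treating under-sampled state-action pairs optimistically — can be sketched as follows. This is a minimal illustration of an RTDP-RMAX-style update, not the authors' exact algorithm; the function name, data layout, and the known-ness threshold `m` are assumptions for the sketch:

```python
def rtdp_rmax_backup(Q, counts, rewards, transitions, actions,
                     s, a, m=5, rmax=1.0, gamma=0.95):
    """Perform one incremental Bellman backup on (s, a).

    Q           : dict mapping (state, action) -> value estimate
    counts      : dict mapping (state, action) -> visit count
    rewards     : dict mapping (state, action) -> summed observed reward
    transitions : dict mapping (state, action) -> {next_state: count}
    actions     : list of available actions
    """
    vmax = rmax / (1.0 - gamma)        # optimistic value for unknown pairs
    n = counts.get((s, a), 0)
    if n < m:
        # Too few samples: keep the RMAX-style optimistic value.
        Q[(s, a)] = vmax
        return Q[(s, a)]
    r_hat = rewards[(s, a)] / n        # empirical mean reward
    # Single backup using the empirical transition model; next states
    # with unseen actions default to the optimistic value vmax.
    expected_next = sum(
        (c / n) * max(Q.get((s2, b), vmax) for b in actions)
        for s2, c in transitions[(s, a)].items()
    )
    Q[(s, a)] = r_hat + gamma * expected_next
    return Q[(s, a)]
```

Calling this once on the visited state-action pair each time step costs O(|A| × observed successors), versus fully solving the model as RMAX and MBIE do — which is the computational tradeoff the paper quantifies.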
Pages: 485-493
PDF Link: /papers/06/p485-strehl.pdf
@INPROCEEDINGS{Strehl06Incremental,
  AUTHOR = "Alexander Strehl and Lihong Li and Michael Littman",
  TITLE = "Incremental Model-based Learners With Formal Learning-Time Guarantees",
  BOOKTITLE = "Proceedings of the Twenty-Second Annual Conference on Uncertainty in Artificial Intelligence (UAI-06)",
  ADDRESS = "Arlington, Virginia",
  YEAR = "2006",
  PAGES = "485--493"
}
