Imitation Learning with a Value-Based Prior
Umar Syed, Robert Schapire
Abstract:
The goal of imitation learning is for an apprentice to learn how to behave in a stochastic environment by observing a mentor demonstrating the correct behavior. Accurate prior knowledge about the correct behavior can reduce the need for demonstrations from the mentor. We present a novel approach to encoding prior knowledge about the correct behavior, where we assume that this prior knowledge takes the form of a Markov Decision Process (MDP) that is used by the apprentice as a rough and imperfect model of the mentor's behavior. Specifically, taking a Bayesian approach, we treat the value of a policy in this modeling MDP as the log prior probability of the policy. In other words, we assume a priori that the mentor's behavior is likely to be a high-value policy in the modeling MDP, though quite possibly different from the optimal policy. We describe an efficient algorithm that, given a modeling MDP and a set of demonstrations by a mentor, provably converges to a stationary point of the log posterior of the mentor's policy, where the posterior is computed with respect to the "value-based" prior. We also present empirical evidence that this prior does in fact speed learning of the mentor's policy, and is an improvement in our experiments over similar previous methods.
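The core idea above is that the log prior of a policy is its value in the modeling MDP, so the log posterior is that value plus the log likelihood of the demonstrations. A minimal sketch of this objective is below; the tiny MDP, its numbers, and the candidate policies are illustrative assumptions, not taken from the paper, and this only evaluates the objective rather than running the paper's convergence algorithm.

```python
import numpy as np

# Illustrative sketch of the value-based prior: log P(pi) is taken to be
# V(pi), the value of pi in a "modeling" MDP, so (up to a constant)
#   log P(pi | D) = V(pi) + sum over demos of log pi(a | s).
# The 2-state, 2-action MDP below is hypothetical.

P = np.array([                  # P[a, s, s']: transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.6, 0.4]],   # action 1
])
R = np.array([[1.0, 0.0],       # R[a, s]: rewards
              [0.0, 1.0]])
gamma = 0.9

def policy_value(pi):
    """Discounted value of stochastic policy pi[s, a] in the modeling
    MDP, averaged over a uniform start state (exact policy evaluation)."""
    P_pi = np.einsum('sa,ast->st', pi, P)   # policy-averaged transitions
    r_pi = np.einsum('sa,as->s', pi, R)     # policy-averaged rewards
    v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)  # v = r + gamma P v
    return v.mean()

def log_posterior(pi, demos):
    """Value-based log prior plus demonstration log likelihood;
    demos is a list of observed (state, action) pairs."""
    log_prior = policy_value(pi)
    log_lik = sum(np.log(pi[s, a]) for s, a in demos)
    return log_prior + log_lik

# Example: a policy consistent with the demos scores higher than an
# uninformative one, on both the prior and the likelihood term.
demos = [(0, 0), (1, 1), (0, 0)]
pi_a = np.array([[0.9, 0.1], [0.1, 0.9]])  # close to the demonstrations
pi_b = np.array([[0.5, 0.5], [0.5, 0.5]])  # uniform policy
```

The point of the construction is visible even in this toy: a policy can score well under the posterior either by matching the demonstrations or by achieving high value in the modeling MDP, which is how the prior compensates for scarce demonstrations.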
Keywords:
Pages: 384-391
PS Link:
PDF Link: /papers/07/p384syed.pdf
BibTex:
@INPROCEEDINGS{Syed07,
AUTHOR = "Umar Syed
and Robert Schapire",
TITLE = "Imitation Learning with a Value-Based Prior",
BOOKTITLE = "Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence (UAI-07)",
PUBLISHER = "AUAI Press",
ADDRESS = "Corvallis, Oregon",
YEAR = "2007",
PAGES = "384--391"
}

