Uncertainty in Artificial Intelligence
Regret-based Reward Elicitation for Markov Decision Processes
Kevin Regan, Craig Boutilier
Abstract:
The specification of a Markov decision process (MDP) can be difficult. Reward function specification is especially problematic; in practice, it is often cognitively complex and time-consuming for users to precisely specify rewards. This work casts the problem of specifying rewards as one of preference elicitation and aims to minimize the degree of precision with which a reward function must be specified while still allowing optimal or near-optimal policies to be produced. We first discuss how robust policies can be computed for MDPs given only partial reward information using the minimax regret criterion. We then demonstrate how regret can be reduced by efficiently eliciting reward information using bound queries, using regret reduction as a means for choosing suitable queries. Empirical results demonstrate that regret-based reward elicitation offers an effective way to produce near-optimal policies without resorting to the precise specification of the entire reward function.
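The abstract describes computing minimax-regret-optimal policies under partial reward information and tightening the reward specification with bound queries chosen to reduce regret. The sketch below is a toy illustration of that idea, not the authors' implementation: it assumes state-based rewards with known interval bounds, a hypothetical 3-state, 2-action MDP, brute-force enumeration of deterministic policies to compute minimax regret, and a simple myopic worst-case regret-reduction rule for selecting bound queries (the paper itself relies on efficient optimization rather than enumeration).

import itertools
import numpy as np

GAMMA = 0.9
N_STATES, N_ACTIONS = 3, 2

# Hypothetical transition model P[a][s, s'] for a small 3-state, 2-action MDP.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],  # action 0
    [[0.2, 0.8, 0.0], [0.0, 0.2, 0.8], [0.8, 0.1, 0.1]],  # action 1
])
POLICIES = list(itertools.product(range(N_ACTIONS), repeat=N_STATES))

def value(policy, r):
    """Discounted value of a deterministic policy, uniform start state."""
    P_pi = np.array([P[policy[s], s] for s in range(N_STATES)])
    return np.linalg.solve(np.eye(N_STATES) - GAMMA * P_pi, r).mean()

def minimax_regret(lo, hi):
    """Brute-force minimax regret over the reward box [lo, hi].
    Policy value is linear in r, so the adversary's worst-case reward
    for any pair of policies lies at a corner of the box."""
    corners = [np.where(c, hi, lo)
               for c in itertools.product([False, True], repeat=N_STATES)]
    best_regret, best_pi = np.inf, None
    for pi in POLICIES:
        regret = max(value(adv, r) - value(pi, r)
                     for adv in POLICIES for r in corners)
        if regret < best_regret:
            best_regret, best_pi = regret, pi
    return best_regret, best_pi

def worst_case_after(lo, hi, s):
    """Worst-case (over the user's two possible answers) minimax regret
    remaining after the bound query "is r(s) >= the interval midpoint?"."""
    mid = (lo[s] + hi[s]) / 2
    yes_lo, no_hi = lo.copy(), hi.copy()
    yes_lo[s], no_hi[s] = mid, mid
    return max(minimax_regret(yes_lo, hi)[0], minimax_regret(lo, no_hi)[0])

# Initial reward intervals, plus a hidden true reward known only to the
# simulated user who answers our bound queries.
lo, hi = np.zeros(N_STATES), np.array([1.0, 2.0, 3.0])
true_r = np.array([0.3, 1.5, 2.2])

for step in range(6):
    regret, pi = minimax_regret(lo, hi)
    print(f"step {step}: minimax regret = {regret:.3f}, policy = {pi}")
    # Choose the bound query whose worse answer leaves the least regret.
    s = min(range(N_STATES), key=lambda s: worst_case_after(lo, hi, s))
    mid = (lo[s] + hi[s]) / 2
    if true_r[s] >= mid:   # simulated user's answer tightens one bound
        lo[s] = mid
    else:
        hi[s] = mid

Running the loop, the printed minimax regret shrinks as each answered bound query tightens one reward interval, which is the qualitative behavior the abstract reports; the exact query-selection strategy and regret computation in the paper differ from this simplified sketch.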
Keywords:
Pages: 444-451
PS Link:
PDF Link: /papers/09/p444-regan.pdf
BibTex:
@INPROCEEDINGS{Regan09,
AUTHOR = "Kevin Regan and Craig Boutilier",
TITLE = "Regret-based Reward Elicitation for Markov Decision Processes",
BOOKTITLE = "Proceedings of the Twenty-Fifth Conference Annual Conference on Uncertainty in Artificial Intelligence (UAI-09)",
PUBLISHER = "AUAI Press",
ADDRESS = "Corvallis, Oregon",
YEAR = "2009",
PAGES = "444--451"
}

