Uncertainty in Artificial Intelligence
The Thing That We Tried Didn't Work Very Well: Deictic Representation in Reinforcement Learning
Sarah Finney, Natalia Gardiol, Leslie Kaelbling, Tim Oates
Abstract:
Most reinforcement-learning methods operate on propositional representations of the world state. Such representations are often intractably large and generalize poorly. Deictic representations are believed to be a viable alternative: they promise generalization while allowing the use of existing reinforcement-learning methods. Yet, few experiments on learning with deictic representations have been reported in the literature. In this paper we explore the effectiveness of two forms of deictic representation and a naïve propositional representation in a simple blocks-world domain. We find, empirically, that the deictic representations actually worsen learning performance. We conclude with a discussion of possible causes of these results and strategies for more effective learning in domains with objects.
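To make the representational contrast in the abstract concrete, the sketch below (not taken from the paper; all class and function names are illustrative assumptions) shows one plausible way a blocks-world state could be encoded propositionally versus deictically. The propositional encoding enumerates ground facts over every named block, so it grows with the number of objects; the deictic observation describes only the block an attentional marker is "looking at", so it stays fixed-size but is only partially observable.

```python
# A minimal sketch, assuming a simple blocks world; names such as
# BlocksWorld, propositional_state, and deictic_observation are
# hypothetical and do not come from the paper.

from dataclasses import dataclass, field

@dataclass
class BlocksWorld:
    # on[b] is the block (or "table") that block b rests on
    on: dict = field(default_factory=lambda: {"A": "table", "B": "A", "C": "table"})
    colors: dict = field(default_factory=lambda: {"A": "red", "B": "green", "C": "blue"})

def propositional_state(world: BlocksWorld) -> frozenset:
    """Enumerate ground propositions over all named blocks.
    The size of this set grows with the number of objects in the world."""
    props = set()
    for block, support in world.on.items():
        props.add(f"on({block},{support})")
        props.add(f"{world.colors[block]}({block})")
    return frozenset(props)

def deictic_observation(world: BlocksWorld, focus: str) -> tuple:
    """Describe only the block the agent's marker is currently focused on.
    The observation has constant size regardless of how many blocks exist,
    which is the source of the hoped-for generalization."""
    below = world.on[focus]
    return (
        world.colors[focus],                           # color of the focused block
        "on-table" if below == "table" else "on-block",
        world.colors.get(below),                       # color of the supporting block, if any
    )

if __name__ == "__main__":
    world = BlocksWorld()
    print(propositional_state(world))       # grows with the number of blocks
    print(deictic_observation(world, "B"))  # fixed-size, but partially observable
```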
Keywords:
Pages: 154-161
PS Link:
PDF Link: /papers/02/p154-finney.pdf
BibTex:
@INPROCEEDINGS{Finney02,
AUTHOR = "Sarah Finney and Natalia Gardiol and Leslie Kaelbling and Tim Oates",
TITLE = "The Thing That We Tried Didn't Work Very Well: Deictic Representation in Reinforcement Learning",
BOOKTITLE = "Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI-02)",
PUBLISHER = "Morgan Kaufmann",
ADDRESS = "San Francisco, CA",
YEAR = "2002",
PAGES = "154--161"
}

