Anytime State-Based Solution Methods for Decision Processes with non-Markovian Rewards
Sylvie Thiebaux, Froduald Kabanza, John Slaney
A popular approach to solving a decision process with non-Markovian rewards (NMRDP) is to exploit a compact representation of the reward function to automatically translate the NMRDP into an equivalent Markov decision process (MDP) amenable to our favorite MDP solution method. The contribution of this paper is a representation of non-Markovian reward functions and a translation into an MDP aimed at making the best possible use of state-based anytime algorithms as the solution method. By explicitly constructing and exploring only parts of the state space, these algorithms are able to trade computation time for policy quality, and have proven quite effective in dealing with large MDPs. Our representation extends future linear temporal logic (FLTL) to express rewards. Our translation has the effect of embedding model-checking in the solution method. It results in an MDP of the minimal size achievable without stepping outside the anytime framework, and consequently in better policies by the deadline.
PS Link: http://csl.anu.edu.au/~thiebaux/papers/uai02.ps.gz
PDF Link: /papers/02/p501-thiebaux.pdf
AUTHOR = "Sylvie Thiebaux
and Froduald Kabanza and John Slaney",
TITLE = "Anytime State-Based Solution Methods for Decision Processes with non-Markovian Rewards",
BOOKTITLE = "Proceedings of the Eighteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-02)",
PUBLISHER = "Morgan Kaufmann",
ADDRESS = "San Francisco, CA",
YEAR = "2002",
PAGES = "501--510"