Approximate Learning in Complex Dynamic Bayesian Networks
Raffaella Settimi, Jim Smith, A. Gargoum
In this paper we extend the work of Smith and Papamichail (1999) and present fast approximate Bayesian algorithms for learning in complex scenarios where, at any time frame, the relationships between explanatory state space variables can be described by a Bayesian network that evolves dynamically over time, and where the observations taken are not necessarily Gaussian. The method uses recent developments in approximate Bayesian forecasting in combination with more familiar Gaussian propagation algorithms on junction trees. The procedure for learning state parameters from data is given explicitly for common sampling distributions, and the methodology is illustrated through a real application. The efficiency of the dynamic approximation is explored using the Hellinger divergence measure, and theoretical bounds on the efficacy of such a procedure are discussed.
Keywords: Dynamic Generalised Linear Models, Dynamic Bayesian networks, Hellinger distance
PS Link: http://www.warwick.ac.uk/staff/R.Settimi/dynbn.zip
PDF Link: /papers/99/p585-settimi.pdf
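The abstract assesses the approximation's efficiency via the Hellinger divergence between distributions. As a rough illustration only (not the paper's algorithm), the sketch below computes the well-known closed-form squared Hellinger distance between two univariate Gaussians; the function name and interface are hypothetical:

```python
import math

def hellinger_gaussian(mu1, s1, mu2, s2):
    """Squared Hellinger distance between N(mu1, s1^2) and N(mu2, s2^2).

    Closed form: H^2 = 1 - sqrt(2*s1*s2 / (s1^2 + s2^2))
                         * exp(-(mu1 - mu2)^2 / (4*(s1^2 + s2^2)))
    Returns a value in [0, 1]; 0 iff the two Gaussians coincide.
    """
    var_sum = s1 ** 2 + s2 ** 2
    coeff = math.sqrt(2.0 * s1 * s2 / var_sum)
    expo = math.exp(-((mu1 - mu2) ** 2) / (4.0 * var_sum))
    return 1.0 - coeff * expo
```

A measure of this kind makes the quality of a Gaussian approximation to a non-Gaussian posterior quantifiable, which is how divergence bounds such as those discussed in the paper can be stated.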
AUTHOR = "Raffaella Settimi
and Jim Smith and A. Gargoum",
TITLE = "Approximate Learning in Complex Dynamic Bayesian Networks",
BOOKTITLE = "Proceedings of the Fifteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-99)",
PUBLISHER = "Morgan Kaufmann",
ADDRESS = "San Francisco, CA",
YEAR = "1999",
PAGES = "585--593"