Uncertainty in Artificial Intelligence
Structured Priors for Structure Learning
Vikash Mansinghka, Charles Kemp, Thomas Griffiths, Joshua Tenenbaum
Traditional approaches to Bayes net structure learning typically assume little regularity in graph structure other than sparseness. However, in many cases, we expect more systematicity: variables in real-world systems often group into classes that predict the kinds of probabilistic dependencies they participate in. Here we capture this form of prior knowledge in a hierarchical Bayesian framework, and exploit it to enable structure learning and type discovery from small datasets. Specifically, we present a nonparametric generative model for directed acyclic graphs as a prior for Bayes net structure learning. Our model assumes that variables come in one or more classes and that the prior probability of an edge existing between two variables is a function only of their classes. We derive an MCMC algorithm for simultaneous inference of the number of classes, the class assignments of variables, and the Bayes net structure over variables. For several realistic, sparse datasets, we show that the bias towards systematicity of connections provided by our model yields more accurate learned networks than a traditional, uniform prior approach, and that the classes found by our model are appropriate.
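The generative model sketched in the abstract can be illustrated with a short, hypothetical sketch (parameter names, the CRP class prior, the Beta hyperparameters, and the fixed variable ordering used to guarantee acyclicity are all illustrative assumptions, not the paper's exact specification):

```python
import random

def sample_dag_prior(n_vars, alpha=1.0, a=1.0, b=5.0, seed=0):
    """Illustrative sketch of a class-based prior over DAGs.

    1) Assign variables to classes via a Chinese restaurant process
       (concentration `alpha`), so the number of classes is not fixed.
    2) Draw an edge probability for each ordered pair of classes from
       Beta(a, b); b > a biases the graph toward sparseness.
    3) Sample edges only from lower- to higher-indexed variables, which
       guarantees acyclicity under this (assumed) fixed ordering.
    """
    rng = random.Random(seed)

    # 1) CRP class assignments
    classes, counts = [], []
    for _ in range(n_vars):
        probs = counts + [alpha]          # existing tables + a new one
        r = rng.random() * sum(probs)
        acc = 0.0
        for k, p in enumerate(probs):
            acc += p
            if r < acc:
                break
        if k == len(counts):
            counts.append(1)              # seat at a new class
        else:
            counts[k] += 1
        classes.append(k)

    # 2) Edge probability for each ordered pair of classes
    n_classes = len(counts)
    eta = {(c, d): rng.betavariate(a, b)
           for c in range(n_classes) for d in range(n_classes)}

    # 3) Sample edges; i < j enforces a DAG under the index ordering
    edges = set()
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            if rng.random() < eta[(classes[i], classes[j])]:
                edges.add((i, j))
    return classes, edges
```

Inference in the paper runs MCMC over the class assignments, the number of classes, and the graph jointly; the sketch above only shows the forward (generative) direction of such a prior.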
Pages: 324-331
PDF Link: /papers/06/p324-mansinghka.pdf
@INPROCEEDINGS{mansinghka06structured,
  AUTHOR = "Vikash Mansinghka and Charles Kemp and Thomas Griffiths and Joshua Tenenbaum",
  TITLE = "Structured Priors for Structure Learning",
  BOOKTITLE = "Proceedings of the Twenty-Second Annual Conference on Uncertainty in Artificial Intelligence (UAI-06)",
  ADDRESS = "Arlington, Virginia",
  YEAR = "2006",
  PAGES = "324--331"
}
