Conditions Under Which Conditional Independence and Scoring Methods Lead to Identical Selection of Bayesian Network Models
Papers on inferring Bayesian network structures from data often state that there are two distinct approaches: (i) apply conditional independence tests to decide whether edges are present or absent; (ii) search the model space using a scoring metric. Here I argue that for complete data and a given node ordering this division is a myth, by showing that cross-entropy methods for checking conditional independence are mathematically identical to methods based upon discriminating between models by their overall goodness-of-fit logarithmic scores.
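The equivalence the abstract describes can be illustrated numerically: for discrete complete data, the difference in maximized log-likelihood scores from adding an edge X → Y (with Z already a parent of Y) equals N times the empirical conditional mutual information I(X;Y|Z), which is the cross-entropy quantity a CI test would examine. A minimal sketch, with illustrative binary variables X, Y, Z not taken from the paper:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
N = 500
# Hypothetical binary data with genuine X -> Y dependence (names are illustrative)
Z = rng.integers(0, 2, N)
X = (Z + rng.integers(0, 2, N)) % 2
Y = (X + Z + rng.integers(0, 2, N)) % 2

def max_loglik(child, parents):
    """Maximized multinomial log-likelihood: sum_i log n(child_i, pa_i) / n(pa_i)."""
    pa = list(zip(*parents)) if parents else [()] * len(child)
    joint, marg = Counter(zip(child, pa)), Counter(pa)
    return sum(np.log(joint[c, p] / marg[p]) for c, p in zip(child, pa))

def cond_mutual_info(x, y, z):
    """Empirical conditional mutual information I(X;Y|Z) in nats."""
    n = len(x)
    nxyz, nxz = Counter(zip(x, y, z)), Counter(zip(x, z))
    nyz, nz = Counter(zip(y, z)), Counter(z)
    return sum(c / n * np.log(c * nz[zz] / (nxz[xx, zz] * nyz[yy, zz]))
               for (xx, yy, zz), c in nxyz.items())

# Log-score gain from adding the edge X -> Y ...
delta_score = max_loglik(Y, [X, Z]) - max_loglik(Y, [Z])
# ... equals N times the empirical conditional mutual information.
assert np.isclose(delta_score, N * cond_mutual_info(X, Y, Z))
```

The identity is exact (up to floating point), since both sides expand to the same sum over contingency-table cells; this is the sense in which scoring and testing coincide once the ordering fixes which models are compared.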
PDF Link: /papers/01/p91-cowell.pdf
AUTHOR = "Robert Cowell",
TITLE = "Conditions Under Which Conditional Independence and Scoring Methods Lead to Identical Selection of Bayesian Network Models",
BOOKTITLE = "Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (UAI-01)",
PUBLISHER = "Morgan Kaufmann",
ADDRESS = "San Francisco, CA",
YEAR = "2001",
PAGES = "91--97"