Uncertainty in Artificial Intelligence
Learning Sparse Causal Models is not NP-hard
Tom Claassen, Joris Mooij, Tom Heskes
This paper shows that causal model discovery is not an NP-hard problem, in the sense that, for sparse graphs with node degree bounded by k, the sound and complete causal model can be obtained in a worst case of order N^{2(k+2)} independence tests, even when latent variables and selection bias may be present. We present a modification of the well-known FCI algorithm that implements the method for an independence oracle, and suggest improvements for versions that work on finite-sample, real-world data. It does not contradict any known hardness results, and it does not solve an NP-hard problem: it simply proves that sparse causal discovery is perhaps more complicated, but not as hard as learning minimal Bayesian networks.
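The paper's own FCI modification is not reproduced here, but the constraint-based adjacency search it builds on can be sketched to show why a degree bound k keeps the number of oracle calls polynomial: edges are removed by testing conditioning sets of size at most k drawn from a node's current neighbours. This is a minimal PC-style sketch assuming an idealized independence oracle; the function names and the toy oracle are illustrative, not from the paper.

```python
from itertools import combinations

def skeleton_search(nodes, indep, k):
    """PC-style adjacency search against an independence oracle `indep`.

    Starts from the complete graph and removes the edge X-Y as soon as
    some conditioning set S with |S| <= k, drawn from the current
    neighbours of X, makes indep(X, Y, S) true.  With node degree
    bounded by k, only polynomially many conditioning sets are ever
    tried, which is the regime the paper's N^{2(k+2)} bound concerns.
    """
    adj = {x: set(nodes) - {x} for x in nodes}
    sepset = {}
    for size in range(k + 1):
        for x in nodes:
            for y in list(adj[x]):
                for s in combinations(adj[x] - {y}, size):
                    if indep(x, y, set(s)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        sepset[frozenset((x, y))] = set(s)
                        break
    return adj, sepset

# Toy oracle for the chain X -> Y -> Z: X and Z are independent
# given {Y}; every other pair stays dependent.
def chain_oracle(a, b, s):
    return frozenset((a, b)) == frozenset(("X", "Z")) and "Y" in s

adj, sepset = skeleton_search(["X", "Y", "Z"], chain_oracle, k=1)
```

On the toy chain the search recovers the skeleton X–Y, Y–Z and records {Y} as the separating set for (X, Z). The full FCI machinery additionally orients edges and handles latent confounding, which this sketch deliberately omits.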
Pages: 172-181
PDF Link: /papers/13/p172-claassen.pdf
@INPROCEEDINGS{Claassen13,
  AUTHOR = "Tom Claassen and Joris Mooij and Tom Heskes",
  TITLE = "Learning Sparse Causal Models is not NP-hard",
  BOOKTITLE = "Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI-13)",
  ADDRESS = "Corvallis, Oregon",
  YEAR = "2013",
  PAGES = "172--181"
}
