Uncertainty in Artificial Intelligence
Causal Conclusions that Flip Repeatedly and Their Justification
Kevin Kelly, Conor Mayo-Wilson
Abstract:
Over the past two decades, several consistent procedures have been designed to infer causal conclusions from observational data. We prove that if the true causal network might be an arbitrary, linear Gaussian network or a discrete Bayes network, then every unambiguous causal conclusion produced by a consistent method from non-experimental data is subject to reversal as the sample size increases any finite number of times. That result, called the causal flipping theorem, extends prior results to the effect that causal discovery cannot be reliable on a given sample size. We argue that since repeated flipping of causal conclusions is unavoidable in principle for consistent methods, the best possible discovery methods are consistent methods that retract their earlier conclusions no more than necessary. A series of simulations of various methods across a wide range of sample sizes illustrates concretely both the theorem and the principle of comparing methods in terms of retractions.
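The abstract's idea of comparing methods by how often they retract earlier conclusions can be pictured with a small simulation. The sketch below (in Python, not the authors' code) is a hypothetical illustration under assumed settings: data are drawn from a two-variable linear Gaussian model with a deliberately weak effect, a simple consistent edge-detection rule with a shrinking threshold is re-run at increasing sample sizes, and the number of times its conclusion flips is counted. The model, the threshold rule, and every name in it (TRUE_COEF, conclude_edge, etc.) are illustrative assumptions, not the paper's procedure.

# Minimal sketch: count how often a consistent edge-detection rule flips its
# conclusion as the sample size grows (all settings are hypothetical).
import numpy as np

rng = np.random.default_rng(0)

TRUE_COEF = 0.05          # weak true causal effect X -> Y (assumed value)
SAMPLE_SIZES = np.unique(np.logspace(2, 5, 30).astype(int))

def simulate(n):
    """Draw n i.i.d. samples from a linear Gaussian model X -> Y with a small effect."""
    x = rng.normal(size=n)
    y = TRUE_COEF * x + rng.normal(size=n)
    return x, y

def conclude_edge(x, y):
    """A consistent rule: declare an edge iff |corr(X, Y)| exceeds a threshold
    shrinking like sqrt(log n / n), so any nonzero effect is eventually detected."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    return abs(r) > np.sqrt(np.log(n) / n)

conclusions = []
for n in SAMPLE_SIZES:
    x, y = simulate(n)
    conclusions.append(conclude_edge(x, y))

flips = sum(a != b for a, b in zip(conclusions, conclusions[1:]))
print("conclusion sequence:", ["edge" if c else "no edge" for c in conclusions])
print("number of flips (retractions):", flips)

With a weak true effect, such a rule typically reports "no edge" at small sample sizes and flips to "edge" (possibly after some back-and-forth) once the sample grows; the flip count is the kind of retraction tally by which the paper proposes to compare discovery methods.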
Pages: 277-285
PDF Link: /papers/10/p277-kelly.pdf
BibTeX:
@INPROCEEDINGS{Kelly10,
AUTHOR = "Kevin Kelly and Conor Mayo-Wilson",
TITLE = "Causal Conclusions that Flip Repeatedly and Their Justification",
BOOKTITLE = "Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI-10)",
PUBLISHER = "AUAI Press",
ADDRESS = "Corvallis, Oregon",
YEAR = "2010",
PAGES = "277--285"
}

