Uncertainty in Artificial Intelligence
Evaluating computational models of explanation using human judgments
Michael Pacer, Joseph Williams, Xi Chen, Tania Lombrozo, Thomas Griffiths
Abstract:
We evaluate four computational models of explanation in Bayesian networks by comparing model predictions to human judgments. In two experiments, we present human participants with causal structures for which the models make divergent predictions and either solicit the best explanation for an observed event (Experiment 1) or have participants rate provided explanations for an observed event (Experiment 2). Across two versions of two causal structures and across both experiments, we find that the Causal Explanation Tree and Most Relevant Explanation models provide better fits to human data than either Most Probable Explanation or Explanation Tree models. We identify strengths and shortcomings of these models and what they can reveal about human explanation. We conclude by suggesting the value of pursuing computational and psychological investigations of explanation in parallel.
Pages: 498-507
PDF Link: /papers/13/p498-pacer.pdf
BibTex:
@INPROCEEDINGS{Pacer13,
AUTHOR = "Michael Pacer and Joseph Williams and Xi Chen and Tania Lombrozo and Thomas Griffiths",
TITLE = "Evaluating computational models of explanation using human judgments",
BOOKTITLE = "Proceedings of the Twenty-Ninth Annual Conference on Uncertainty in Artificial Intelligence (UAI-13)",
PUBLISHER = "AUAI Press",
ADDRESS = "Corvallis, Oregon",
YEAR = "2013",
PAGES = "498--507"
}