Uncertainty in Artificial Intelligence
Combining predictions from linear models when training and test inputs differ
Thijs van Ommen
Methods for combining predictions from different models in a supervised learning setting must somehow estimate or predict the quality of a model's predictions at unknown future inputs. Many of these methods (often implicitly) assume that the test inputs are identical to the training inputs, which is seldom reasonable. By failing to take into account that prediction will generally be harder for test inputs that did not occur in the training set, such methods select overly complex models. Based on a novel, unbiased expression for the KL divergence, we propose XAIC and its special case FAIC as versions of AIC intended for prediction that use different degrees of knowledge of the test inputs. Both methods differ substantially from, and may outperform, all known versions of AIC even when the training and test inputs are i.i.d., and they are especially useful for deterministic inputs and under covariate shift. Our experiments on linear models suggest that if the test and training inputs differ substantially, then XAIC and FAIC predictively outperform AIC, BIC and several other methods, including Bayesian model averaging.
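To make the setting concrete, here is a minimal sketch of the standard AIC baseline that the paper argues against: selecting among Gaussian linear models by penalized training fit alone, with no reference to where the test inputs lie. The function name `aic_linear`, the polynomial candidate set, and the simulated data are illustrative assumptions, not the paper's XAIC/FAIC method (whose exact form is not given on this page).

```python
import numpy as np

def aic_linear(X, y):
    # AIC for a Gaussian linear model y = X @ beta + noise, up to an
    # additive constant: n*log(RSS/n) + 2k, where k counts the regression
    # coefficients plus the noise variance.
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    k = p + 1
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, 30)                       # training inputs only
y_train = 1.0 + 0.5 * x_train + rng.normal(0, 0.3, 30)

# Candidate models: polynomials of degree 0..5; AIC picks the minimizer
# without ever looking at the test inputs -- the assumption the paper drops.
best_degree = min(range(6),
                  key=lambda d: aic_linear(np.vander(x_train, d + 1), y_train))
print("AIC-selected degree:", best_degree)
```

Because the score depends only on the training design matrix, the chosen degree is unchanged even if the test inputs lie far outside `[-1, 1]`; XAIC/FAIC are proposed precisely to incorporate such knowledge of the test inputs.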
Pages: 653-662
PDF Link: /papers/14/p653-van_ommen.pdf
AUTHOR = "Thijs van Ommen",
TITLE = "Combining predictions from linear models when training and test inputs differ",
BOOKTITLE = "Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence (UAI-14)",
ADDRESS = "Corvallis, Oregon",
YEAR = "2014",
PAGES = "653--662"
