Uncertainty in Artificial Intelligence
Almost-everywhere algorithmic stability and generalization error
Samuel Kutin, Partha Niyogi
Abstract:
We explore in some detail the notion of algorithmic stability as a viable framework for analyzing the generalization error of learning algorithms. We introduce the new notion of training stability of a learning algorithm and show that, in a general setting, it is sufficient for good bounds on generalization error. In the PAC setting, training stability is both necessary and sufficient for learnability.

The approach based on training stability makes no reference to VC dimension or VC entropy. There is no need to prove uniform convergence, and generalization error is bounded directly via an extended McDiarmid inequality. As a result, it potentially allows us to deal with a broader class of learning algorithms than Empirical Risk Minimization.

We also explore the relationships among VC dimension, generalization error, and various notions of stability. Several examples of learning algorithms are considered.
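The extended McDiarmid inequality mentioned in the abstract is the technical engine of the paper. For orientation, here is a minimal sketch of the classical form of the argument, in the style of Bousquet and Elisseeff (2002) rather than this paper's exact definitions; the symbols below ($A_S$ for the hypothesis trained on sample $S$, $R$ for true risk, $\hat{R}_S$ for empirical risk, $\beta$ for uniform stability, $M$ for a bound on the loss) are assumptions of the sketch. McDiarmid's inequality says that if $f$ satisfies the bounded-differences condition

    \sup_{z_1, \dots, z_m,\, z_i'} \left| f(z_1, \dots, z_i, \dots, z_m) - f(z_1, \dots, z_i', \dots, z_m) \right| \le c_i, \qquad 1 \le i \le m,

then for independent $Z_1, \dots, Z_m$ and every $\varepsilon > 0$,

    \Pr\left[ f(Z_1, \dots, Z_m) - \mathbb{E}[f] \ge \varepsilon \right] \le \exp\!\left( \frac{-2\varepsilon^2}{\sum_{i=1}^m c_i^2} \right).

Taking $f(S) = R(A_S) - \hat{R}_S(A_S)$ for an algorithm with uniform stability $\beta$ and loss bounded by $M$ gives $c_i = 4\beta + M/m$, and hence, with probability at least $1 - \delta$,

    R(A_S) \le \hat{R}_S(A_S) + 2\beta + (4m\beta + M)\sqrt{\frac{\ln(1/\delta)}{2m}}.

The extension developed in this paper relaxes the bounded-differences condition so that it need only hold with high probability over the draw of the sample ("almost everywhere"), which is what lets the stability argument reach beyond uniformly stable algorithms.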
Pages: 275-282
PDF Link: /papers/02/p275-kutin.pdf
BibTeX:
@INPROCEEDINGS{Kutin02,
AUTHOR = "Samuel Kutin and Partha Niyogi",
TITLE = "Almost-everywhere algorithmic stability and generalization error",
BOOKTITLE = "Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI-02)",
PUBLISHER = "Morgan Kaufmann",
ADDRESS = "San Francisco, CA",
YEAR = "2002",
PAGES = "275--282"
}

