Uncertainty in Artificial Intelligence
The Minimum Information Principle for Discriminative Learning
Amir Globerson, Naftali Tishby
Exponential models of distributions are widely used in machine learning for classification and modelling. It is well known that they can be interpreted as maximum entropy models under empirical expectation constraints. In this work, we argue that for classification tasks, mutual information is a more suitable information theoretic measure to be optimized. We show how the principle of minimum mutual information generalizes that of maximum entropy and provides a comprehensive framework for building discriminative classifiers. A game theoretic interpretation of our approach is then given, and several generalization bounds are provided. We present iterative algorithms for solving the minimum information problem and its convex dual, and demonstrate their performance on various classification tasks. The results show that minimum information classifiers outperform the corresponding maximum entropy models.
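The quantity the paper proposes to minimize is the mutual information I(X;Y) between features and class labels. As background only (this is not the paper's algorithm), here is a minimal NumPy sketch computing I(X;Y) from an empirical joint table via the identity I(X;Y) = H(X) + H(Y) - H(X,Y); the example joint distribution is illustrative, not from the paper:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats), ignoring zero-probability entries."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(joint):
    """Mutual information I(X;Y) of a joint table p(x, y).

    Uses I(X;Y) = H(X) + H(Y) - H(X,Y), where the marginals are
    row and column sums of the joint table.
    """
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1)  # marginal p(x)
    py = joint.sum(axis=0)  # marginal p(y)
    return entropy(px) + entropy(py) - entropy(joint)

# Hypothetical joint over a binary feature x and binary label y:
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
print(mutual_information(joint))  # ≈ 0.193 nats: x carries label information
```

For an independent joint (the outer product of its marginals) the function returns 0, matching the fact that I(X;Y) = 0 iff X and Y are independent.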
Pages: 193-200
PDF Link: /papers/04/p193-globerson.pdf
AUTHOR = "Amir Globerson and Naftali Tishby",
TITLE = "The Minimum Information Principle for Discriminative Learning",
BOOKTITLE = "Proceedings of the Twentieth Annual Conference on Uncertainty in Artificial Intelligence (UAI-04)",
ADDRESS = "Arlington, Virginia",
YEAR = "2004",
PAGES = "193--200"
