Uncertainty in Artificial Intelligence
Approximation by Quantization
Vibhav Gogate, Pedro Domingos
Inference in graphical models consists of repeatedly multiplying and summing out potentials. It is generally intractable because the derived potentials obtained in this way can be exponentially large. Approximate inference techniques such as belief propagation and variational methods combat this by simplifying the derived potentials, typically by dropping variables from them. We propose an alternate method for simplifying potentials: quantizing their values. Quantization causes different states of a potential to have the same value, and therefore introduces context-specific independencies that can be exploited to represent the potential more compactly. We use algebraic decision diagrams (ADDs) to do this efficiently. We apply quantization and ADD reduction to variable elimination and junction tree propagation, yielding a family of bounded approximate inference schemes. Our experimental tests show that our new schemes significantly outperform state-of-the-art approaches on many benchmark instances.
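To make the core idea concrete, here is a minimal sketch (not the authors' implementation) of value quantization: mapping a potential's values onto a small set of evenly spaced levels, so that different states come to share a value and the potential can be represented more compactly, e.g. by an ADD. The function name and the bucketing scheme are illustrative assumptions.

```python
def quantize_potential(potential, num_levels):
    """Map each value of a potential (dict: state tuple -> value) to the
    nearest of `num_levels` evenly spaced levels between its min and max.
    Illustrative sketch only; the bucketing scheme is an assumption."""
    values = list(potential.values())
    lo, hi = min(values), max(values)
    if hi == lo:  # already constant: nothing to quantize
        return dict(potential)
    step = (hi - lo) / (num_levels - 1)
    return {state: lo + round((v - lo) / step) * step
            for state, v in potential.items()}

# A potential over two binary variables (X, Y) with four distinct values.
phi = {(0, 0): 0.98, (0, 1): 1.02, (1, 0): 2.97, (1, 1): 3.03}
q = quantize_potential(phi, num_levels=2)
# With two levels, only two distinct values remain: the quantized potential
# depends (approximately) only on X, a context-specific independence
# that a decision diagram can exploit to merge states.
print(sorted(set(q.values())))
```

After quantization the four entries collapse to two distinct values, so an ADD over (X, Y) would merge the Y-branches and represent the potential with a single test on X.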
Pages: 247-255
PDF Link: /papers/11/p247-gogate.pdf
@INPROCEEDINGS{gogate11quantization,
  AUTHOR = "Vibhav Gogate and Pedro Domingos",
  TITLE = "Approximation by Quantization",
  BOOKTITLE = "Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence (UAI-11)",
  ADDRESS = "Corvallis, Oregon",
  YEAR = "2011",
  PAGES = "247--255"
}
