Uncertainty in Artificial Intelligence
Near-optimal Adaptive Pool-based Active Learning with General Loss
Nguyen Viet Cuong, Wee Sun Lee, Nan Ye
We consider adaptive pool-based active learning in a Bayesian setting. We first analyze two commonly used greedy active learning criteria: the maximum entropy criterion, which selects the example with the highest entropy, and the least confidence criterion, which selects the example whose most probable label has the least probability value. We show that, unlike in the non-adaptive case, the maximum entropy criterion is not able to achieve an approximation that is within a constant factor of the optimal policy entropy. For the least confidence criterion, we show that it is able to achieve a constant factor approximation to the optimal version space reduction in a worst-case setting, where the probability of labelings that have not been eliminated is considered as the version space. We consider a third greedy active learning criterion, the Gibbs error criterion, and generalize it to handle arbitrary loss functions between labelings. We analyze the properties of the generalization and its variants, and show that they perform well in practice.
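The abstract contrasts three greedy pool-based selection criteria. The sketch below is a minimal illustration (not the authors' implementation) of how each criterion would score unlabeled pool examples, assuming `probs` is a hypothetical n_pool × n_labels array of marginal predictive label probabilities under the current posterior; the pairwise loss matrix passed to the generalized Gibbs error score is likewise an assumed input.

```python
import numpy as np

def max_entropy_score(probs):
    """Entropy of each example's predictive label distribution (higher = more uncertain)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def least_confidence_score(probs):
    """One minus the probability of each example's most probable label."""
    return 1.0 - probs.max(axis=-1)

def gibbs_error_score(probs, loss=None):
    """Generalized Gibbs error: expected loss between two labels drawn
    independently from the predictive distribution. With 0-1 loss this
    reduces to 1 - sum_y p(y)^2."""
    if loss is None:
        return 1.0 - np.sum(probs ** 2, axis=-1)
    # probs: (n_pool, n_labels); loss: (n_labels, n_labels) pairwise loss matrix
    return np.einsum('ni,ij,nj->n', probs, loss, probs)

def select_next_example(probs, criterion=gibbs_error_score):
    """Greedy step: query the pool example with the highest criterion score."""
    return int(np.argmax(criterion(probs)))
```

After the queried example's label is observed, the posterior (and hence `probs`) would be updated and the greedy step repeated.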
Pages: 122-131
PDF Link: /papers/14/p122-viet_cuong.pdf
@INPROCEEDINGS{p122-viet_cuong,
  AUTHOR = "Nguyen Viet Cuong and Wee Sun Lee and Nan Ye",
  TITLE = "Near-optimal Adaptive Pool-based Active Learning with General Loss",
  BOOKTITLE = "Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence (UAI-14)",
  ADDRESS = "Corvallis, Oregon",
  YEAR = "2014",
  PAGES = "122--131"
}
