Uncertainty in Artificial Intelligence
Building Bridges: Viewing Active Learning from the Multi-Armed Bandit Lens
Ravi Ganti, Alexander Gray
Abstract:
In this paper we propose a multi-armed-bandit-inspired, pool-based active learning algorithm for the problem of binary classification. By carefully constructing an analogy between active learning and multi-armed bandits, we utilize ideas such as lower confidence bounds and self-concordant regularization from the multi-armed bandit literature to design our proposed algorithm. Our algorithm is sequential: in each round it assigns a sampling distribution over the pool, samples one point from this distribution, and queries the oracle for the label of the sampled point. The design of this sampling distribution is also inspired by the analogy between active learning and multi-armed bandits. We show how to derive the lower confidence bounds required by our algorithm. Experimental comparisons to previously proposed active learning algorithms show superior performance on some standard UCI datasets.
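The round structure described in the abstract (assign a sampling distribution over the pool, sample one point, query the oracle) can be sketched as follows. This is a minimal illustration only: the margin-based uncertainty score, the least-squares surrogate classifier, and the toy 1-D oracle are all stand-in assumptions, not the paper's actual lower-confidence-bound construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pool: 200 one-dimensional points with a threshold labeling rule.
pool = rng.uniform(-1, 1, size=(200, 1))
oracle = lambda x: int(x[0] > 0)          # assumed label oracle, for illustration

labeled_idx, labels = [], []

def sampling_distribution(pool, labeled_idx, labels):
    """Assign a probability to each pool point, favoring uncertain points.
    The inverse-margin score below is an illustrative stand-in for the
    paper's lower-confidence-bound-based design."""
    if len(labeled_idx) < 2 or len(set(labels)) < 2:
        scores = np.ones(len(pool))       # no usable model yet: sample uniformly
    else:
        X = pool[list(labeled_idx)]
        y = 2 * np.array(labels) - 1      # map {0, 1} labels to {-1, +1}
        # Least-squares linear separator as a cheap surrogate classifier.
        w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
        margins = np.abs(pool @ w[:-1] + w[-1])
        scores = 1.0 / (1e-6 + margins)   # small margin -> high sampling weight
    return scores / scores.sum()

for _ in range(30):                       # budget of 30 label queries
    p = sampling_distribution(pool, labeled_idx, labels)
    i = rng.choice(len(pool), p=p)        # sample one point from the distribution
    labeled_idx.append(i)
    labels.append(oracle(pool[i]))        # query the oracle for its label
```

In each round the distribution is recomputed from all labels gathered so far, so querying is adaptive; the paper's contribution lies in how that distribution and its confidence bounds are actually constructed, which this sketch does not reproduce.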
Pages: 232-241
PDF Link: /papers/13/p232-ganti.pdf
BibTeX:
@INPROCEEDINGS{Ganti13,
AUTHOR = "Ravi Ganti and Alexander Gray",
TITLE = "Building Bridges: Viewing Active Learning from the Multi-Armed Bandit Lens",
BOOKTITLE = "Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI-13)",
PUBLISHER = "AUAI Press",
ADDRESS = "Corvallis, Oregon",
YEAR = "2013",
PAGES = "232--241"
}

