
PhD Defense by Christopher Berlind


Ph.D. Dissertation Defense

 

Title: New Insights on the Power of Active Learning

 

Christopher Berlind

Ph.D. Candidate in Computer Science

School of Computer Science
Georgia Institute of Technology

http://www.cc.gatech.edu/~cberlind

 

Date: Tuesday, June 23, 2015
Time: 9:30 am
Location: KACB Room 2100

 

Committee

---------------

Prof. Maria-Florina Balcan (Co-advisor, School of Computer Science, Carnegie Mellon University)

Prof. Le Song (Co-advisor, School of Computational Science and Engineering, Georgia Institute of Technology)

Prof. Santosh Vempala (School of Computer Science, Georgia Institute of Technology)

Prof. Charles L. Isbell, Jr. (School of Interactive Computing, Georgia Institute of Technology)

Prof. Avrim Blum (School of Computer Science, Carnegie Mellon University)

 

Abstract

---------------

Supervised machine learning is the process of algorithmically learning how to make future predictions by training on labeled examples of past occurrences. While traditionally a learning algorithm has access to a large corpus of labeled examples, the recent proliferation of data made possible by modern computing power and the Internet has made unlabeled data much easier to come by than accompanying labels. For example, billions of images are readily available for download on the Internet, but annotations of the objects present in an image are much more difficult to acquire.

 

The machine learning community has proposed two main methods for taking advantage of relatively low-cost unlabeled examples to reduce the number of expensive labeled examples needed for learning. One is semi-supervised learning, which incorporates a large quantity of unlabeled examples into the training data in addition to a smaller number of labeled examples. The other is active learning, in which the algorithm itself can select which examples it would like labeled out of a large pool of unlabeled examples. Prior research on active learning has focused almost entirely on reducing labeling effort (relative to passive learning) through intelligent querying strategies.
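To make the pool-based protocol concrete, here is a minimal sketch of uncertainty sampling, one standard querying strategy: the learner repeatedly fits a model on its current labeled set and requests a label for the pool example it is least certain about. The synthetic dataset, the logistic regression model, and the 20-query budget are illustrative assumptions, not details from the dissertation.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Pool-based active learning with uncertainty sampling (illustrative sketch).
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    # Seed the labeled set with one example of each class; the rest form the pool.
    labeled = [int(np.where(y == 0)[0][0]), int(np.where(y == 1)[0][0])]
    unlabeled = [i for i in range(len(X)) if i not in labeled]

    model = LogisticRegression()
    for _ in range(20):  # query budget: 20 adaptive label requests
        model.fit(X[labeled], y[labeled])
        # Request a label for the pool example the current model is least sure about.
        probs = model.predict_proba(X[unlabeled])[:, 1]
        query = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
        labeled.append(query)          # in a real system, an oracle supplies y[query]
        unlabeled.remove(query)

    print(f"trained on {len(labeled)} of {len(X)} available labels")

Each pass through the loop spends one oracle call on the example the current model finds most ambiguous; this reduction in labeling effort is precisely what most prior active learning research has targeted.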

 

In this dissertation, we show that the power to make adaptive label queries has benefits beyond reducing labeling effort over passive learning. We develop and explore several novel methods for active learning that exemplify these new capabilities. Some of these methods use active learning for non-standard purposes such as computational speedup, structure discovery, and domain adaptation. Others successfully apply active learning in situations where prior results have given evidence of its ineffectiveness.

 

Specifically, we first give an active algorithm for learning disjunctions that is able to overcome a computational intractability present in the semi-supervised version of the same problem. This is the first known example of the computational advantages of active learning. Next, we investigate using active learning to determine structural properties (margins) of the data-generating distribution that can further improve learning rates. This is in contrast to most active learning algorithms, which either assume or ignore structure rather than seeking to identify and exploit it. We then give an active nearest neighbors algorithm for domain adaptation, the task of learning a predictor for a target domain using mostly examples from a different source domain. This is the first formal analysis of the generalization and query behavior of an active domain adaptation algorithm. Finally, we show a situation where active learning can outperform passive learning on very noisy data, circumventing prior results showing that active learning cannot have a significant advantage over passive learning in high-noise regimes.
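For intuition about the power of adaptive queries, consider the classic one-dimensional threshold problem (a standard textbook illustration, not one of the dissertation's algorithms): a passive learner needs on the order of 1/epsilon labels to locate a threshold to accuracy epsilon, while an active learner can binary-search an unlabeled pool with only about log(1/epsilon) label queries.

    import numpy as np

    # Actively learning a threshold on [0, 1] by binary search over an
    # unlabeled pool; each comparison below is one label query to the oracle.
    rng = np.random.default_rng(1)
    true_threshold = 0.37                          # hidden target concept
    pool = np.sort(rng.uniform(0.0, 1.0, 10_000))  # cheap unlabeled examples

    def oracle(x):                                 # expensive label request
        return int(x >= true_threshold)

    lo, hi, queries = 0, len(pool) - 1, 0
    while lo < hi:                                 # O(log n) adaptive queries
        mid = (lo + hi) // 2
        queries += 1
        if oracle(pool[mid]):
            hi = mid                               # first positive is at or before mid
        else:
            lo = mid + 1
    print(f"threshold estimate {pool[lo]:.4f} after {queries} label queries")

A passive learner labeling uniformly random draws would need on the order of 10,000 labels to localize the threshold this precisely; the binary search above uses roughly 14.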
