Computer Science @ Rochester
Wednesday, April 29, 2009
9:30 AM
Computer Studies Bldg. Room 632
Ph.D. Thesis Proposal
Satyaki Mahalanabis
University of Rochester
When does unlabeled data help? Algorithms and convergence rate analysis
There has been considerable recent interest in theoretical machine learning in how unlabeled examples can help a learning algorithm reduce the number of labeled examples it needs to achieve low error. Previous theoretical work has focused mainly on active learning algorithms for parametric classifiers (typically of bounded VC dimension), such as thresholds and half-spaces.
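To make the parametric case concrete, here is a minimal sketch (illustrative only, not taken from the proposal) of the classic threshold example: with noiseless, monotone labels on the line, binary search over a sorted pool locates the decision threshold with O(log n) label queries, where a passive learner would label essentially the whole pool. The oracle `query_label` is a hypothetical stand-in for a human annotator.

```python
def learn_threshold_active(pool, query_label):
    """Binary-search a sorted pool of unlabeled 1-D points for the
    decision threshold, using O(log n) label queries instead of n.
    Assumes noiseless, monotone labels (all 0s, then all 1s);
    `query_label(x)` is a hypothetical 0/1 labeling oracle."""
    pool = sorted(pool)
    lo, hi = 0, len(pool)  # invariant: first 1-labeled point lies in pool[lo:hi]
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if query_label(pool[mid]) == 1:
            hi = mid        # first positive point is at or before mid
        else:
            lo = mid + 1    # first positive point is after mid
    threshold = pool[lo] if lo < len(pool) else float("inf")
    return threshold, queries
```

For instance, with a pool of 1,000 uniform draws from [0, 1] and `query_label = lambda x: int(x >= 0.37)`, this returns a point near 0.37 after roughly 10 label queries, whereas a passive learner would need on the order of the whole pool labeled to reach comparable accuracy.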

In this thesis, we propose to investigate convergence rates for non-parametric classifiers (such as nearest neighbour classifiers) in the active learning setting. We propose to study different sampling strategies, as well as what assumptions one needs to make about the underlying data and label distributions, so that nearest neighbour or other plug-in estimators converge faster than their passive counterparts.
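As an illustration of the kind of sampling strategy this part of the thesis would analyze, below is a minimal sketch of pool-based active learning with a k-nearest-neighbour plug-in classifier. The uncertainty-sampling query rule, the function names, and the use of numpy are assumptions made for the example, not strategies specified in the proposal.

```python
import numpy as np

def knn_active_learn(X_pool, query_label, k=5, n_queries=50, seed=0):
    """Pool-based active learning with a k-NN plug-in classifier.
    Repeatedly queries the pool point whose local label vote is closest
    to 50/50 (an uncertainty-sampling heuristic, assumed for illustration).
    `query_label(x)` is a hypothetical 0/1 labeling oracle."""
    rng = np.random.default_rng(seed)
    n = len(X_pool)
    labeled = list(rng.choice(n, size=min(k, n), replace=False))  # seed set
    y = {i: query_label(X_pool[i]) for i in labeled}

    def vote(x):
        # fraction of 1-labels among the k nearest labeled neighbours of x
        idx = sorted(labeled, key=lambda j: np.linalg.norm(X_pool[j] - x))[:k]
        return np.mean([y[j] for j in idx])

    for _ in range(n_queries - len(labeled)):
        unlabeled = [i for i in range(n) if i not in y]
        if not unlabeled:
            break
        # query the most uncertain pool point (vote nearest to 1/2)
        i = min(unlabeled, key=lambda j: abs(vote(X_pool[j]) - 0.5))
        labeled.append(i)
        y[i] = query_label(X_pool[i])

    # the resulting plug-in classifier thresholds the local vote at 1/2
    return lambda x: int(vote(x) >= 0.5)
```

The plug-in structure is the point of the sketch: the classifier simply thresholds a local estimate of P(Y = 1 | X = x) at 1/2, and the question the thesis raises is when adaptive label queries let such estimators converge faster than labeling uniformly at random.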

In the long term, we would also like to study computationally efficient algorithms for active learning: ones that ask for fewer labeled data points while requiring not too many (i.e., polynomially many) unlabeled samples, unlike most currently known algorithms (such as [BBL06]), which may need exponentially many unlabeled samples. Such efficient algorithms are not known even for simple classifiers such as homogeneous hyperplanes with bounded noise [Mon06].