Optimally designing the location of training input points (active
learning) and choosing the best model (model selection) are two important
components of supervised learning and have been studied extensively.
However, these two issues have typically been investigated separately, as two
independent problems. If the training input points and the model were
optimized simultaneously, generalization performance could be improved
further. We call this combined problem active learning with model selection.
In this talk, I introduce a new approach called ensemble active learning. The
proposed approach compares favorably with alternative methods, such as
iteratively performing active learning and model selection in a sequential
manner.
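To make the sequential baseline concrete, here is a minimal sketch of what "iteratively performing active learning and model selection" might look like. All details are illustrative assumptions, not the method of the papers below: model selection is done by leave-one-out cross-validation over polynomial degrees, and the active learning step queries the candidate input farthest from the current training inputs (a simple space-filling criterion).

```python
import numpy as np

# Hypothetical sketch of the sequential baseline: alternate between
# (1) model selection on the current labeled data and
# (2) active learning, i.e. choosing the next input point to label.

rng = np.random.default_rng(0)

def true_function(x):
    # Unknown target function that labels queried inputs (with noise).
    return np.sin(2 * np.pi * x)

def loo_cv_error(x, y, degree):
    """Leave-one-out cross-validation error of a degree-`degree` polynomial."""
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        coef = np.polyfit(x[mask], y[mask], degree)
        errors.append((np.polyval(coef, x[i]) - y[i]) ** 2)
    return float(np.mean(errors))

def select_model(x, y, degrees=(1, 2, 3)):
    """Model selection step: pick the degree with the lowest LOO-CV error."""
    return min(degrees, key=lambda d: loo_cv_error(x, y, d))

def select_next_input(x_train, candidates):
    """Active learning step: query the candidate farthest from training inputs."""
    dists = [min(abs(c - xt) for xt in x_train) for c in candidates]
    return candidates[int(np.argmax(dists))]

# Start with a few labeled points, then alternate the two steps.
x_train = np.linspace(0.05, 0.95, 5)
y_train = true_function(x_train) + 0.05 * rng.standard_normal(len(x_train))
candidates = list(np.linspace(0.0, 1.0, 21))

for _ in range(5):
    degree = select_model(x_train, y_train)         # model selection
    x_new = select_next_input(x_train, candidates)  # active learning
    y_new = true_function(x_new) + 0.05 * rng.standard_normal()
    x_train = np.append(x_train, x_new)
    y_train = np.append(y_train, y_new)

degree = select_model(x_train, y_train)
print("final training set size:", len(x_train))
print("selected polynomial degree:", degree)
```

Because each step conditions only on the previous step's outcome, the input points chosen early on may be poor for the model selected later; optimizing both jointly, as the proposed approach aims to do, avoids this chicken-and-egg issue.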
References:
Sugiyama, M. & Rubens, N.
A Batch Ensemble Approach to Active Learning with Model Selection. Technical
Report TR07-0004, Department of Computer Science, Tokyo Institute of
Technology, Tokyo, Japan, 2007. http://www.cs.titech.ac.jp/~tr/reports/2007/TR07-0004.pdf
Sugiyama, M.
Active learning in approximately linear regression based on conditional
expectation of generalization error.
Journal of Machine Learning Research, vol.7 (Jan), pp.141-166, 2006. http://sugiyama-www.cs.titech.ac.jp/~sugi/2006/ALICE.pdf