Max-Planck-Institut für Informatik
What and Who
Title: Multi-Task Learning and Matrix Regularization
Speaker: Andreas Argyriou
Coming from: UCL, London
Speaker's Bio: Dr. Argyriou obtained his BSc and MEng in EECS from MIT, and his MSc and PhD (January 2008) from UCL, London, working with M. Pontil and Z. Ghahramani. His interests are in machine learning, regularisation theory, kernel methods, multi-task and transfer learning, sparse estimation, learning combinations of kernels, convex optimisation, and semi-supervised learning, with applications to bioinformatics, computer vision, and collaborative filtering, among others.
Event Type: Talk
Visibility: D1, D4, RG1, MMCI, D3, D5, SWS
Level: AG Audience
Language: English
Date, Time and Location
Date: Tuesday, 21 July 2009
Time: 11:00
Duration: 45 minutes
Location: Saarbrücken
Building: E1 4
Room: 019
Abstract
Multi-task learning extends the standard paradigm of supervised learning: samples for multiple related tasks are given, and the goal is to learn a function for each task while also generalizing well (transferring learned knowledge) to new tasks. Applications of this paradigm are numerous, ranging from computer vision to collaborative filtering to bioinformatics, and it also relates to matrix completion, multi-class learning, and multi-view learning.

I will present a framework for multi-task learning based on learning a common kernel for all tasks, and show how this formulation connects to the trace norm and group Lasso approaches. Moreover, the proposed optimization problem can be solved using an alternating minimization algorithm which is simple and efficient. It can also be "kernelized" by virtue of a multi-task representer theorem, which holds for a large family of matrix regularization problems and includes the classical representer theorem as a special case. Finally, I will draw an analogy between multi-task learning and convex kernel learning and will present a general convergent algorithm for learning convex combinations of finitely or infinitely many kernels.
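The alternating minimization the abstract mentions can be illustrated with the well-known trace-norm multi-task feature learning scheme: with the shared structure matrix D fixed, each task reduces to a generalized ridge regression; with the task weights W fixed, the optimal D has a closed form proportional to (WW^T)^{1/2}. The sketch below is a minimal NumPy illustration of that idea, not the speaker's implementation; all function and variable names are illustrative.

```python
import numpy as np

def multitask_trace_norm(X, Y, gamma=1.0, n_iter=50, eps=1e-6):
    """Alternating minimization for trace-norm multi-task learning (sketch).

    X : list of (n_t, d) design matrices, one per task
    Y : list of (n_t,) target vectors, one per task
    Returns W, a (d, T) matrix whose columns are the task weight vectors.
    """
    T = len(X)               # number of tasks
    d = X[0].shape[1]        # shared feature dimension
    D = np.eye(d) / d        # shared structure matrix, trace(D) = 1
    W = np.zeros((d, T))
    for _ in range(n_iter):
        # Step 1: with D fixed, each task is a generalized ridge regression.
        D_inv = np.linalg.inv(D + eps * np.eye(d))
        for t in range(T):
            A = X[t].T @ X[t] + gamma * D_inv
            W[:, t] = np.linalg.solve(A, X[t].T @ Y[t])
        # Step 2: with W fixed, the optimal D is (W W^T)^{1/2},
        # normalized to unit trace (computed via eigendecomposition).
        S = W @ W.T + eps * np.eye(d)
        vals, vecs = np.linalg.eigh(S)
        sqrt_S = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
        D = sqrt_S / np.trace(sqrt_S)
    return W
```

The small `eps` perturbation keeps D invertible; without it, directions unused by W would make the D-update degenerate.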
Contact
Name(s): Thorsten Thormaehlen (for Matthias Seeger)
Phone: +49 681 9325-417
EMail: --email address not disclosed on the web
Video Broadcast
Video Broadcast: No
Tags, Category, Keywords and additional notes
Note:
Attachments, File(s):
  • Thorsten Thormählen, 07/15/2009 10:23 AM -- Created document.