Campus Event Calendar

Event Entry

What and Who

Accelerated Learning of Probabilistic Data Models for Large-Scale Applications

Jörg Lücke
University of Oldenburg
Talk
AG 1, AG 2, AG 3, AG 4, AG 5, RG1, SWS, MMCI  
Public Audience
English

Date, Time and Location

Thursday, 14 June 2018
10:00
60 Minutes
Building E1 5
Room 029
Saarbrücken

Abstract

Probabilistic models are powerful and very general mathematical tools for a range of Machine Learning tasks, including clustering, classification, denoising, inpainting, detection, tracking, source separation, and image and sound understanding. The key theoretical and practical question is how probabilistic data models can be learned from data effectively and efficiently. The question applies both to elementary data models used, e.g., for clustering and classification, and to advanced and novel data models that enable new applications.


I will first give an introduction to the task of data clustering and introduce two well-known algorithms: k-means and Expectation Maximization (EM) for Gaussian Mixture Models (GMMs). The properties, relations, and computational complexities of the two algorithms will be explained and discussed. Using a GMM as an example probabilistic model, I will then introduce truncated variational approximations for accelerated learning. Following their introduction, I will discuss the theoretical foundations of truncated approximations and their generalization to graphical models with discrete latents.
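
To make the idea of a truncated E-step concrete, here is a minimal sketch (not the speaker's implementation) of EM for an isotropic GMM in NumPy, in which each data point's responsibilities are restricted to a small number of best-matching components. The function name `truncated_em_gmm`, the truncation parameter `C_trunc`, and the isotropic-Gaussian assumption are all illustrative choices, not details from the talk.

```python
# Minimal sketch, assuming isotropic Gaussians and a hypothetical truncation
# parameter C_trunc (number of components kept per data point). Not the
# speaker's implementation; all names are illustrative.
import numpy as np

def truncated_em_gmm(X, K, C_trunc, n_iter=50, seed=0):
    """Fit a K-component isotropic GMM to X (N x D) with a truncated E-step:
    only the C_trunc best-matching components per data point receive
    non-zero responsibility, which reduces the per-iteration cost."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    mu = X[rng.choice(N, K, replace=False)]   # component means (K x D)
    sigma2 = np.full(K, X.var())              # isotropic variances
    pi = np.full(K, 1.0 / K)                  # mixing proportions

    for _ in range(n_iter):
        # E-step: log joint log p(x_n, c) up to an additive constant
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)          # N x K
        log_joint = np.log(pi) - 0.5 * (d2 / sigma2 + D * np.log(sigma2))

        # Truncation: keep only the C_trunc largest log-joints per point
        keep = np.argsort(log_joint, axis=1)[:, -C_trunc:]            # N x C'
        resp = np.zeros((N, K))
        kept = np.take_along_axis(log_joint, keep, axis=1)
        kept = np.exp(kept - kept.max(axis=1, keepdims=True))
        np.put_along_axis(resp, keep,
                          kept / kept.sum(axis=1, keepdims=True), axis=1)

        # M-step: standard GMM updates with the truncated responsibilities
        Nk = resp.sum(axis=0) + 1e-12
        pi = Nk / N
        mu = (resp.T @ X) / Nk[:, None]
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        sigma2 = (resp * d2).sum(axis=0) / (D * Nk) + 1e-6

    return pi, mu, sigma2
```

With C_trunc = K this reduces to standard EM for an isotropic GMM; smaller values of C_trunc trade a variational approximation of the posterior for a lower per-iteration cost, which is the kind of acceleration the abstract refers to.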

My presentation will close with applications of variational learning to standard and advanced data models. Highlights will include sublinear clustering of large datasets, "black-box" learning, large-scale deep models for semi-supervised learning, and unsupervised learning algorithms for visual, acoustic and medical data.

Contact

Connie Balzert
Email hidden
