Campus Event Calendar

Event Entry

What and Who

Learning to Segment in Images and Videos with Different Forms of Supervision

Anna Khoreva
Max-Planck-Institut für Informatik - D2
Promotionskolloquium (doctoral colloquium)
AG 1, AG 2, AG 3, AG 4, AG 5, RG1, SWS, MMCI  
Public Audience
English

Date, Time and Location

Wednesday, 20 December 2017
17:00, 60 minutes
Building E1 4, Room 024
Saarbrücken

Abstract

Much progress has been made in image and video segmentation over recent years. To a large extent, this success can be attributed to strong appearance models learned entirely from data, in particular using deep learning methods. However, to perform best, these methods require large, representative training datasets with expensive pixel-level annotations, which in the case of videos are prohibitively costly to obtain. There is therefore a need to relax this constraint and to consider alternative forms of supervision that are easier and cheaper to collect.

First, we develop approaches for training convolutional networks with weaker forms of supervision, such as bounding boxes or image labels, for object boundary estimation and semantic/instance labelling tasks. We propose to generate approximate pixel-level ground truth from these weaker annotations to train a network, which yields high-quality results comparable to full supervision without any modifications to the network architecture or the training procedure.

Second, we address the excessive computational and memory costs inherent in solving video segmentation via graphs. We propose approaches that improve runtime and memory efficiency as well as output segmentation quality by learning the best representation of the graph from the available training data. In particular, we contribute methods for learning must-link constraints, the topology and edge weights of the graph, and for enhancing the graph nodes, the superpixels, themselves.

Third, we tackle the task of pixel-level object tracking and address the limited amount of densely annotated video data for training convolutional networks. We introduce an architecture that can be trained with static images only and propose an elaborate data synthesis scheme that creates a large number of training examples close to the target domain from the given first-frame mask. With the proposed techniques we show that densely annotated consecutive video data is not necessary to achieve high-quality, temporally coherent video segmentation results.

Contact

Connie Balzert
2000

Connie Balzert, 12/07/2017 09:38 -- Created document.