Max-Planck-Institut für Informatik
What and Who
Title:The lifetime of an object – long-term monitoring of objects
Speaker:Dima Damen
coming from:University of Bristol
Speaker's Bio:Dima Damen is a lecturer (assistant professor) in the Department of Computer Science, University of Bristol. Her latest work focuses on egocentric video understanding.

https://www.cs.bris.ac.uk/~damen

Event Type:Talk
Visibility:D1, D2, D3, D4, D5, RG1, SWS, MMCI
Level:MPI Audience
Language:English
Date, Time and Location
Date:Friday, 23 October 2015
Time:10:00
Duration:60 Minutes
Location:Saarbrücken
Building:E1 4
Room:633 (sixth floor rotunda)
Abstract
As opposed to the traditional notion of actions and activities in computer vision, where the motion (e.g. jumping) or the goal (e.g. cooking) is the focus, this talk will argue for object-centred understanding of actions and activities. The number of viable usages of one object is several orders of magnitude smaller than the number of activities in which the same object can be used. Automatic understanding of the various 'modes of interaction' with objects in an environment could be achieved by long-term monitoring of an object, potentially throughout the object's lifetime.

The talk will start with the bicycle-theft problem, one of the early works on long-term monitoring of objects.

The bulk of the talk will focus on object-level interactions in first-person videos. The egocentric view of the world offers a unique perspective on daily object-level interactions. When monitoring multiple people interacting with an environment over time, task-relevant objects can be discovered. More importantly, when harvesting interactions with the same object from multiple operators, the object's various statuses and modes of interaction can be understood. This can lead to fully unsupervised object-based guidance, where knowledge of how objects can be used is provided to a novice user. I will present an Android prototype of this unsupervised approach that runs on Google Glass.
Contact
Name(s):Rodrigo Benenson
Video Broadcast
Video Broadcast:No
Tags, Category, Keywords and additional notes
Note:
Attachments, File(s):

Created by:Rodrigo Benenson, 10/21/2015 08:35 AM
Last modified by:Uwe Brahm/MPII/DE, 11/24/2016 04:13 PM
  • Rodrigo Benenson, 10/21/2015 08:35 AM -- Created document.