Campus Event Calendar

Event Entry

What and Who

Video Analysis and Enhancement: Spatio-Temporal Methods for Extracting Content from Videos and Enhancing Video Output

Irfan Essa
Georgia Institute of Technology
Talk

Irfan Essa is a Professor in the School of Interactive Computing (IC) and Associate Dean in the College of Computing (CoC) at the Georgia Institute of Technology (GA Tech) in Atlanta, Georgia, USA. Professor Essa works in the areas of Computer Vision, Computer Graphics, Computational Perception, Robotics and Computer Animation, Machine Learning, and Social Computing, with potential impact on Video Analysis and Production (e.g., Computational Photography & Video, Image-based Modeling and Rendering), Human-Computer Interaction, Artificial Intelligence, Computational Behavioral/Social Sciences, and Computational Journalism research. He has published over 150 scholarly articles in leading journals and conference venues on these topics, and several of his papers have won best paper awards. He has been awarded the NSF CAREER Award and was elected to the grade of IEEE Fellow. He has held extended research consulting positions with Disney Research and Google Research and was also an Adjunct Faculty Member at Carnegie Mellon's Robotics Institute. He joined the GA Tech faculty in 1996 after earning his M.S. (1990) and Ph.D. (1994) and holding a research faculty position at the Massachusetts Institute of Technology (Media Lab) [1988-1996].
AG 1, AG 2, AG 3, AG 4, AG 5, RG1, SWS, MMCI  
Public Audience
English

Date, Time and Location

Thursday, 17 September 2015
10:00
45 Minutes
E1 4
024
Saarbrücken

Abstract

In this talk, I will begin by describing the pervasiveness of image and video content, and how such content is growing with the ubiquity of cameras. I will use this to motivate the need for better tools for analysis and enhancement of video content. Starting with some of our earlier work on temporal modeling of video, I will lead up to our current work and describe two main projects: (1) our approach to video stabilization, currently implemented and running on YouTube, and its extensions, and (2) a robust and scalable method for video segmentation.

I will describe, in some detail, our video stabilization method, which generates stabilized videos and is in wide use. Our method allows for video stabilization beyond conventional filtering, which only suppresses high-frequency jitter. It also supports removal of the rolling shutter distortions common in modern CMOS cameras, which capture each frame one scan-line at a time, resulting in non-rigid image distortions such as shear and wobble. Our method does not rely on a priori knowledge and works on video from any camera or on legacy footage. I will showcase examples of this approach and also discuss how this method has been launched and is running on YouTube, with millions of users.
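For orientation, the sketch below illustrates only the general idea behind trajectory-smoothing stabilization: estimate per-frame motion, accumulate it into a camera path, and smooth that path. It is not the production stabilizer described in the talk, and the input file name, feature-tracking parameters, and smoothing window are assumptions.

```python
# Minimal illustrative sketch of trajectory-smoothing video stabilization.
# NOT the speaker's YouTube stabilizer; it simply smooths the accumulated
# inter-frame motion with a moving average. "input.mp4" and all parameter
# values below are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

transforms = []  # per-frame (dx, dy, d_angle) estimates
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track sparse corners from the previous frame into the current one.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=30)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_prev = pts[status.flatten() == 1]
    good_next = nxt[status.flatten() == 1]
    # Fit a similarity transform (translation + rotation + scale) to the tracks.
    m, _ = cv2.estimateAffinePartial2D(good_prev, good_next)
    dx, dy = m[0, 2], m[1, 2]
    da = np.arctan2(m[1, 0], m[0, 0])
    transforms.append((dx, dy, da))
    prev_gray = gray

# Integrate per-frame motion into a camera trajectory, then low-pass it.
trajectory = np.cumsum(transforms, axis=0)
kernel = np.ones(31) / 31.0
smoothed = np.vstack([np.convolve(trajectory[:, i], kernel, mode="same")
                      for i in range(3)]).T
correction = smoothed - trajectory  # warp each frame by this difference
```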

Then I will describe an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. This hierarchical approach generates high-quality segmentations, and we demonstrate its use as users interact with the video, enabling efficient annotation of objects within it. I will also show some recent work on how this segmentation and annotation can be used for dynamic scene understanding.
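As background, the following simplified sketch applies a Felzenszwalb-Huttenlocher-style greedy merge criterion to a spatio-temporal voxel grid. It shows only a single segmentation level over raw voxels, not the hierarchical, scalable pipeline presented in the talk; the constant k and the synthetic test clip are assumptions.

```python
# Simplified sketch of graph-based spatio-temporal segmentation: voxels of a
# (time, height, width) volume are nodes, 6-connected neighbors are edges
# weighted by intensity difference, and regions merge greedily when the
# connecting edge is cheaper than either region's internal variation plus a
# size-dependent threshold k / |region|. One level only; k is an assumption.
import numpy as np

def segment_volume(volume, k=5.0):
    t, h, w = volume.shape
    idx = lambda z, y, x: (z * h + y) * w + x
    n = t * h * w

    # Union-find with per-region size and internal-variation bookkeeping.
    parent = list(range(n))
    size = [1] * n
    internal = [0.0] * n

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    # Build each edge once, toward the +x, +y, and +t neighbors.
    edges = []
    for z in range(t):
        for y in range(h):
            for x in range(w):
                a = idx(z, y, x)
                for dz, dy, dx in ((0, 0, 1), (0, 1, 0), (1, 0, 0)):
                    z2, y2, x2 = z + dz, y + dy, x + dx
                    if z2 < t and y2 < h and x2 < w:
                        b = idx(z2, y2, x2)
                        wgt = abs(float(volume[z, y, x]) - float(volume[z2, y2, x2]))
                        edges.append((wgt, a, b))
    edges.sort()

    # Greedy merging in order of increasing edge weight.
    for wgt, a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        if wgt <= min(internal[ra] + k / size[ra], internal[rb] + k / size[rb]):
            parent[rb] = ra
            size[ra] += size[rb]
            internal[ra] = max(internal[ra], internal[rb], wgt)

    return np.array([find(i) for i in range(n)]).reshape(t, h, w)

# Example: segment a tiny synthetic clip containing a moving bright square.
clip = np.zeros((4, 32, 32), dtype=np.uint8)
for f in range(4):
    clip[f, 8:16, 4 + 4 * f:12 + 4 * f] = 200
labels = segment_volume(clip)
```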

I will then follow up with some recent work on image and video analysis in the mobile domain. I will also make some observations about the ubiquity of imaging and video in general and the need for better tools for video analysis.

Contact

Connie Balzert
9325 2000
Email hidden

Connie Balzert, 09/16/2015 09:40 -- Created document.