Max-Planck-Institut für Informatik
What and Who
Title: Hand Gesture-based User Interface in Ubiquitous Virtual Reality
Speaker: Youngkyoon Jang
coming from: Max-Planck-Institut für Informatik - D4
Speaker's Bio: Youngkyoon Jang has been a Research Associate at the Augmented Human Research Center at KAIST, South Korea, since August 2015. He obtained his PhD from KAIST in 2015. His research explores novel natural user interface technologies that aim to overcome challenges in interaction between humans and computers in wearable AR/VR environments, specializing in understanding human behavior, understanding scenes, and identifying users through gesture interaction, mobile and wearable computing, and visual computing.
Event Type: Talk
Visibility: D1, D2, D3, D4, D5, RG1, SWS, MMCI
Level: AG Audience
Language: English
Date, Time and Location
Date: Monday, 1 February 2016
Time: 14:00
Duration: 60 Minutes
Location: Saarbrücken
Building: E1 4
Room: 019
Abstract
Ubiquitous VR (UVR) research aims to develop new computing paradigms for “AR/VR Life in Ubiquitous Smart Spaces”. Given recent advances in IoT and AR/VR technologies, the concept of UVR is expected to be realized sooner than anticipated. In this talk, we present (1) recent AR/VR research trends in the context of UVR, and (2) a spatio-temporal classifier specifically designed for hand gesture estimation, targeting VR object selection and manipulation from an egocentric viewpoint. The talk focuses on hand gesture-based UI, which plays a key role in novel man-machine interfaces. Estimating hand posture and gesture from an egocentric viewpoint is highly challenging, mainly due to self-occlusions (missing visual information). We have tackled these problems through various novel ideas built on top of cutting-edge techniques. We conclude the talk with some future directions, including face landmark detection and the combination of the two modalities to understand a user's intention.
Contact
Name(s): Christian Theobalt
Video Broadcast
Video Broadcast: No
Tags, Category, Keywords and additional notes
Note:
Attachments, File(s):
Created by: Christian Theobalt/AG4/MPII/DE, 01/29/2016 09:29 AM
Last modified by: Uwe Brahm/MPII/DE, 11/24/2016 04:13 PM
  • Christian Theobalt, 01/29/2016 09:29 AM -- Created document.