Sohaib Khan is Associate Professor and Department Chair of Computer Science at the LUMS School of Science and Engineering, Lahore, Pakistan. His research interests broadly span image and video analysis, including estimating 3D structure from images, motion capture, and multiple-camera surveillance systems. Dr Khan earned his PhD in Computer Science in 2002 from the University of Central Florida, specializing in computer vision. He was an associate editor of the Machine Vision and Applications journal from 2008 to 2010. Dr Khan received the Hillman Award for Excellence in PhD Research in 2001 and the Best Teacher Award from the LUMS Computer Science Department in 2007.
A variety of dynamic objects, such as faces, bodies, and cloth, are represented in computer graphics as collections of moving spatial landmarks. Spatiotemporal data is inherent in a number of graphics applications, including animation, simulation, and object and camera tracking. The principal modes of variation in the spatial geometry of objects are typically modeled using dimensionality reduction techniques, while, concurrently, trajectory representations such as splines and autoregressive models are widely used to exploit the temporal regularity of deformation. In this talk, I will present the bilinear spatiotemporal basis as a model that simultaneously exploits spatial and temporal regularity while maintaining the ability to generalize well to new sequences. The model can be interpreted as representing the data as a linear combination of spatiotemporal sequences consisting of shape modes oscillating over time at key frequencies. This factorization allows the use of analytical, predefined functions to represent temporal variation (e.g., B-splines or the Discrete Cosine Transform), resulting in more efficient model representation and estimation. We apply the bilinear model to natural spatiotemporal phenomena, including face, body, and cloth motion data, and compare it to existing models in terms of compaction, generalization ability, predictive precision, and efficiency. We demonstrate the application of the model to a number of graphics tasks, including labeling, gap-filling, denoising, and motion touch-up.
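To make the factorization concrete, the following is a minimal NumPy sketch of the idea described above, under assumed notation: a motion matrix S (T frames x D landmark coordinates) is approximated as Theta @ C @ Omega.T, where Theta is a predefined truncated DCT temporal basis, Omega is a shape basis (here obtained via PCA, one possible choice), and C is the coefficient matrix. Function names and the specific estimation procedure are illustrative, not the authors' exact formulation.

```python
import numpy as np

def dct_basis(T, k_t):
    """First k_t orthonormal DCT-II basis vectors of length T (analytic,
    predefined temporal basis -- no per-sequence learning needed)."""
    n = np.arange(T)
    Theta = np.cos(np.pi * (n[:, None] + 0.5) * np.arange(k_t)[None, :] / T)
    Theta *= np.sqrt(2.0 / T)
    Theta[:, 0] /= np.sqrt(2.0)  # constant (DC) vector gets its own scaling
    return Theta

def fit_bilinear(S, k_t, k_s):
    """Fit S (T x D) as mean + Theta @ C @ Omega.T.
    Shape basis via PCA of the mean-centered frames (an assumed choice)."""
    T = S.shape[0]
    Theta = dct_basis(T, k_t)
    mean = S.mean(axis=0)
    _, _, Vt = np.linalg.svd(S - mean, full_matrices=False)
    Omega = Vt[:k_s].T                      # D x k_s spatial (shape) basis
    # Both bases are orthonormal, so the coefficients follow by projection:
    C = Theta.T @ (S - mean) @ Omega        # k_t x k_s
    return mean, Theta, Omega, C

def reconstruct(mean, Theta, Omega, C):
    """Each term Theta[:, i] * Omega[:, j].T is a shape mode oscillating
    over time at one DCT frequency; the data is their linear combination."""
    return mean + Theta @ C @ Omega.T
```

Storing only C (plus the small shape basis) rather than all T x D samples is what yields the compaction the abstract refers to; missing frames can similarly be filled by solving for C from the observed entries alone.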