Campus Event Calendar

Event Entry

What and Who

Improving Learning and Representations for Visual Material Recognition

Wenbin Li
Fachrichtung Informatik - Saarbrücken
PhD Application Talk

Master's student
AG 1, AG 2, AG 3, AG 4, AG 5, SWS, RG1, MMCI  
Public Audience
English

Date, Time and Location

Monday, 25 February 2013
08:45
120 Minutes
E1 4
R024
Saarbrücken

Abstract

Perceiving and recognizing materials is a fundamental aspect of visual perception. It enables humans to make predictions about the world and to interact with it carefully. An efficient material recognition solution would find a wide range of uses, such as context awareness and robot manipulation. In this talk, I will present two novel methods for this topic.
In material recognition, it is very important to generalize across instances. Yet this often requires multiple example instances at training time, recorded under different conditions, in order to expose the intra-class variation to the learning algorithm. Current studies are limited to a maximum of about 10 material classes, which seems largely insufficient for real-world scenarios, and one of the main problems we see is the rather tedious acquisition of such datasets. Motivated by the recent success of rendering training data from 3D models, we investigate in this work if and how rendered images can help in recognizing real-world material images. In particular, we study different approaches for combining and adapting the rendered and real data in order to bridge the gap between these different sources.
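As a rough illustration of the data-combination idea (not the speaker's actual method), the sketch below simply pools descriptors from rendered and real images into one training set and up-weights the scarcer real samples so the synthetic data does not dominate; all feature matrices, sizes, and the weight value are hypothetical stand-ins.

```python
# Minimal sketch: combine rendered and real training data for material
# classification by concatenating both sources and fitting one classifier.
# Feature extraction is mocked with random vectors; in practice these would
# be descriptors computed on the images.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_classes, dim = 10, 128

# Hypothetical stand-ins for descriptor matrices (one row per image).
X_rendered = rng.normal(size=(500, dim))       # features from rendered images
y_rendered = rng.integers(0, n_classes, 500)
X_real = rng.normal(size=(100, dim))           # features from (fewer) real images
y_real = rng.integers(0, n_classes, 100)
X_test = rng.normal(size=(50, dim))
y_test = rng.integers(0, n_classes, 50)

# Naive combination: stack both sources; up-weight the real samples so the
# decision boundary is not dominated by the more numerous rendered data.
X_train = np.vstack([X_rendered, X_real])
y_train = np.concatenate([y_rendered, y_real])
weights = np.concatenate([np.ones(len(y_rendered)), 5.0 * np.ones(len(y_real))])

clf = LinearSVC().fit(X_train, y_train, sample_weight=weights)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```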
In the second work, we seek a better way of obtaining feature representations for material images. Traditional computer vision studies are usually built on hand-crafted descriptors such as SIFT and LBP. However, designing good visual features is generally nontrivial, and manually designed features are often customized for a specific task and do not generalize to other tasks. Starting from the recently proposed Spike-and-Slab Sparse Coding (S3C) generative model, we investigate if and how it can be used for feature learning in material recognition. Further, we propose two strategies to incorporate scale information into the learning procedure and compare performance with state-of-the-art manually designed feature descriptors. Our results show that our learned multi-scale features outperform hand-crafted descriptors on the FMD and KTH-TIPS2 material classification benchmarks.
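As a rough illustration of the multi-scale feature-learning idea (again, not the speaker's actual pipeline), the sketch below uses ordinary sparse coding from scikit-learn (MiniBatchDictionaryLearning) as a stand-in for the S3C model, whose inference is more involved: patches are taken from rescaled copies of each image, encoded against a learned dictionary, and average-pooled per scale into one feature vector. Image data, patch counts, and scales are placeholder choices.

```python
# Minimal sketch: multi-scale patch encoding with a learned dictionary
# (ordinary sparse coding standing in for S3C), pooled per scale.
import numpy as np
from scipy.ndimage import zoom
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def multiscale_features(image, coder, scales=(1.0, 0.5, 0.25), patch=8):
    """Encode patches of `image` at several scales and average-pool per scale."""
    pooled = []
    for s in scales:
        im = zoom(image, s)                                   # rescaled copy
        patches = extract_patches_2d(im, (patch, patch),
                                     max_patches=200, random_state=0)
        codes = coder.transform(patches.reshape(len(patches), -1))
        pooled.append(codes.mean(axis=0))                     # per-scale pooling
    return np.concatenate(pooled)

# Hypothetical data: grayscale images as 2-D arrays in [0, 1].
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(5)]

# Learn the dictionary on patches from the training images (finest scale only).
train_patches = np.vstack([
    extract_patches_2d(im, (8, 8), max_patches=200, random_state=0).reshape(-1, 64)
    for im in images
])
coder = MiniBatchDictionaryLearning(n_components=64, random_state=0)
coder.fit(train_patches)

features = np.stack([multiscale_features(im, coder) for im in images])
print(features.shape)   # (n_images, n_scales * n_components)
```

The resulting per-image vectors could then be fed to any standard classifier, as in the previous sketch.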

Contact

IMPRS Office Team
0681 93251800
