In my Master's thesis, I explore machine-learning techniques for associating images and 3D meshes with corresponding semantic labels. I show how an existing method for large-scale image annotation ("WSABIE" by Weston et al., 2011) can be extended to a multi-modal setting, in which we learn a joint semantic embedding of both meshes and images.
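To give a flavor of the underlying machinery: WSABIE learns linear maps that embed inputs and labels into a shared low-dimensional space, trained with the WARP (weighted approximate-rank pairwise) loss; in a multi-modal extension, each modality (image or mesh descriptors) would get its own projection into that shared space. The following is a minimal sketch of one WARP-style update in that spirit, with assumed names and hyperparameters (`V`, `W`, `lr`, `margin`), not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def _project_rows(M, c=1.0):
    """Max-norm regularization: rescale rows of M to norm <= c (in place)."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    np.maximum(norms, 1e-12, out=norms)
    M *= np.minimum(1.0, c / norms)

def warp_step(V, W, x, y_pos, lr=0.05, margin=1.0):
    """One WARP-style SGD step in the spirit of WSABIE (a sketch).

    V: (d_embed, d_feat) maps input features into the joint space.
    W: (n_labels, d_embed) holds one embedding vector per semantic label.
    x: feature vector of one input (an image, or a mesh in the multi-modal case).
    Returns True if a margin-violating negative label was found and an update made.
    """
    n_labels = W.shape[0]
    phi = V @ x                          # embed the input into the joint space
    s_pos = W[y_pos] @ phi
    for trials in range(1, n_labels):    # sample negatives until one violates the margin
        y_neg = int(rng.integers(n_labels))
        if y_neg == y_pos:
            continue
        if W[y_neg] @ phi > s_pos - margin:
            # WARP: estimate the rank of the true label from the number of samples
            # needed, and weight the update by L(rank) = sum_{i<=rank} 1/i
            rank = (n_labels - 1) // trials
            step = lr * sum(1.0 / i for i in range(1, rank + 1))
            delta = W[y_pos] - W[y_neg]
            W[y_pos] += step * phi       # pull the true label toward the input
            W[y_neg] -= step * phi       # push the violating label away
            V += step * np.outer(delta, x)
            _project_rows(W)             # WSABIE constrains embedding norms
            _project_rows(V)
            return True
    return False
```

Repeatedly applying `warp_step` over (input, label) pairs drives each true label's score above the sampled negatives; for a second modality, one would train an additional projection matrix against the same label embeddings `W`, which is what ties the modalities together in the joint space.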
This talk constitutes my obligatory Master's seminar. The underlying techniques were developed in collaboration with Robert Herzog, Michael Wand, and Leonidas Guibas.