Our research group "Multimedia Information Retrieval & Music Processing" develops fundamental algorithms and concepts for the analysis, classification, indexing, and retrieval of time-dependent data streams. In particular, we deal with two multimedia domains: music data and human motion data. In the music domain, we advance techniques and tools for organizing, structuring, retrieving, navigating, and presenting music-related data. Our objective is to automatically link various types of music data, including text, symbolic data, audio, image, and video, with the goal of coordinating the multiple information sources related to a given musical work. In the motion domain, we explore new approaches to motion analysis, retrieval, and classification. One of our strategies is to handle spatio-temporal motion deformations already at the feature level, which then allows us to adopt efficient indexing methods for flexible content-based retrieval on large motion capture data sets.
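To make the idea of handling deformations at the feature level concrete, here is a minimal illustrative sketch (not the group's actual feature set): a binary relational feature ("hand above head") is computed per frame, and consecutive runs of identical values are collapsed into segments. The comparison makes the feature robust to spatial variation, and the run-length segmentation removes dependence on execution speed, so two performances of the same motion at different tempos map to the same compact sequence, which can then be indexed efficiently. The joint names and threshold values are hypothetical.

```python
import numpy as np

def hand_above_head(hand_y, head_y):
    """Binary relational feature: 1 where the hand joint is above the head joint.
    Comparing joints to each other (rather than using absolute coordinates)
    makes the feature robust to spatial variation across performers."""
    return (np.asarray(hand_y) > head_y).astype(int)

def segment_sequence(features):
    """Collapse runs of identical feature values into a segment sequence.
    This removes dependence on execution speed (temporal deformation)."""
    segments = [features[0]]
    for f in features[1:]:
        if f != segments[-1]:
            segments.append(f)
    return segments

# Toy example: a slow and a fast execution of the same arm-raise motion
hand_y_slow = [0.8, 0.9, 1.0, 1.2, 1.4, 1.4, 1.4, 1.2, 0.9]  # more frames
hand_y_fast = [0.8, 1.2, 1.4, 0.9]                            # fewer frames
head_y = 1.1  # hypothetical head height

f_slow = hand_above_head(hand_y_slow, head_y).tolist()
f_fast = hand_above_head(hand_y_fast, head_y).tolist()

print(segment_sequence(f_slow))  # [0, 1, 0]
print(segment_sequence(f_fast))  # [0, 1, 0]  -- identical despite different speed
```

Because both executions reduce to the same segment sequence, exact-match index structures (e.g., inverted files over feature segments) can retrieve them together without costly time-warping at query time.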