Campus Event Calendar

Event Entry

New for: D1, D2, D3, INET, D4, D5

What and Who

DNNs for Sparse Coding and Dictionary Learning

Debabrata Mahapatra
Indian Institute of Science Bangalore
PhD Application Talk

MSc graduate
AG 1, AG 2, AG 3, INET, AG 4, AG 5, SWS, RG1, MMCI  
Public Audience
English

Date, Time and Location

Monday, 8 October 2018
11:00
60 Minutes
E1 5
R029
Saarbrücken

Abstract

Interpreting the Iterative Shrinkage-Thresholding Algorithm (ISTA) as an unfolded Deep Neural Network (DNN), a novel architecture was proposed in which the activation functions are analogous to proximal operators. Unlike in standard DNNs, the weights and biases in this architecture were kept fixed using prior knowledge, while the activation functions were learned from the data. This led to a rich variety of proximal operators that are particularly well suited to sparse coding. Consequently, the proposed network outperformed state-of-the-art sparse coding algorithms by a margin of 4 to 6 dB. The architecture is further extended to dictionary learning by borrowing ideas from autoencoders, with the encoding part performed by the proposed model. This extended model learns, in an unsupervised manner, dictionaries with which data can be represented sparsely.
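To illustrate the unfolding idea the abstract starts from, here is a minimal sketch of classical ISTA viewed as a fixed-depth network: each iteration becomes one layer whose weights are derived from the dictionary and kept fixed, and whose activation is the soft-thresholding proximal operator of the l1 norm. (This sketch shows only the standard ISTA baseline; the function names and parameters are illustrative, and the talk's contribution — learning the activation functions from data — is not implemented here.)

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of theta * ||.||_1 -- the fixed activation that
    # the proposed architecture replaces with a learned function.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista_unfolded(D, y, n_layers=100, lam=0.1):
    """Solve min_z 0.5*||y - D z||^2 + lam*||z||_1 by running ISTA,
    viewed as an n_layers-deep network with fixed weights and biases."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    W = D.T / L                            # fixed input "weight" from the dictionary
    S = np.eye(D.shape[1]) - (D.T @ D) / L # fixed recurrent "weight"
    z = np.zeros(D.shape[1])
    for _ in range(n_layers):              # each iteration = one network layer
        z = soft_threshold(S @ z + W @ y, lam / L)
    return z
```

With `W` and `S` frozen as above, the only per-layer nonlinearity is the proximal operator, which is what makes learning it (rather than the weights) a natural design choice for sparse coding.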

Contact

Stephanie Jörg
0681 9325 1800

Stephanie Jörg, 10/08/2018 09:42
Stephanie Jörg, 10/05/2018 12:18 -- Created document.