Campus Event Calendar

Event Entry

What and Who

Stochastic Optimization Methods for Deep Learning

Mahesh Chandra
International Max Planck Research School for Computer Science - IMPRS
IMPRS Research Seminar

MSc student
AG 1, AG 2, AG 3, AG 4, AG 5, SWS, RG1, MMCI  
Public Audience
English

Date, Time and Location

Monday, 27 June 2016
12:00
30 Minutes
E1 4
R024
Saarbrücken

Abstract

Recent advances in Deep Learning have made it possible to train deep architectures efficiently. However, training can be further improved by using intelligent optimization schemes. A major enabler for training any large-scale machine learning model is Stochastic Gradient Descent (SGD). SGD nevertheless has several drawbacks, such as the need for a decreasing step size and its noisy gradient estimates. This has led to a line of work on improving SGD, resulting in efficient algorithms such as Adagrad, RMSProp, and Adam. In this talk we aim to introduce you to these schemes and provide insights into the design of such algorithms.
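
As a rough illustration of what distinguishes these schemes (a minimal sketch of our own, not material from the talk; the toy quadratic objective and all function names are assumptions), the NumPy snippet below contrasts the plain SGD update, with its decaying step size, against Adam's adaptive, bias-corrected update:

import numpy as np

def sgd_update(w, grad, lr):
    # Plain SGD: step against the gradient. The step size lr must shrink
    # over time for convergence, one of the drawbacks noted above.
    return w - lr * grad

def adam_update(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam: exponential moving averages of the gradient (m) and of its
    # elementwise square (v) give a per-parameter adaptive step size.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias correction for the first moment
    v_hat = v / (1 - beta2 ** t)   # bias correction for the second moment
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy objective f(w) = 0.5 * ||w||^2, whose gradient is simply w.
w_sgd = np.array([5.0, -3.0])
w_adam = np.array([5.0, -3.0])
m = np.zeros(2)
v = np.zeros(2)
for t in range(1, 101):
    w_sgd = sgd_update(w_sgd, w_sgd, lr=1.0 / t)  # decaying step size
    w_adam, m, v = adam_update(w_adam, w_adam, m, v, t)

Adagrad and RMSProp fit the same mold: Adagrad accumulates the running sum of squared gradients instead of an exponential average, and RMSProp is roughly Adam without the first-moment average and bias correction.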

Contact

IMPRS Office Team
0681 93251802
