Max-Planck-Institut für Informatik


What and Who
Title: Stochastic Optimization Methods for Deep Learning
Speaker: Mahesh Chandra
Coming from: International Max Planck Research School for Computer Science - IMPRS
Speaker's Bio: MSc student
Event Type: IMPRS Research Seminar
Visibility: D1, D2, D3, D4, D5, SWS, RG1, MMCI
Level: Public Audience
Date, Time and Location
Date: Monday, 27 June 2016
Duration: 30 Minutes
Building: E1 4
Recent advances in deep learning have made it possible to train deep architectures efficiently, yet training can be improved further with intelligent optimization schemes. A major enabler for training large-scale machine learning models is Stochastic Gradient Descent (SGD). However, SGD has drawbacks, such as the need for a decreasing step size and noisy gradient estimates. This has motivated a line of work on improving SGD, which has produced efficient algorithms such as Adagrad, RMSProp, and Adam. In this talk we introduce these schemes and provide insights into designing such algorithms.
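To make the contrast concrete, here is a minimal, illustrative sketch (not material from the talk itself) of the plain SGD update next to the Adam update, applied to a simple quadratic objective with artificially noisy gradients. All function and variable names here are hypothetical; Adam's hyperparameter defaults follow the common convention (β₁ = 0.9, β₂ = 0.999).

```python
import math
import random

def sgd_step(w, grad, lr=0.1):
    """Plain SGD: step against the (possibly noisy) gradient with a fixed step size."""
    return [wi - lr * gi for wi, gi in zip(w, grad)]

def adam_step(w, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: exponential moving averages of the gradient (m) and squared
    gradient (v), with bias correction, give a per-coordinate adaptive step."""
    m = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, grad)]
    v = [b2 * vi + (1 - b2) * gi * gi for vi, gi in zip(v, grad)]
    w_new = []
    for wi, mi, vi in zip(w, m, v):
        m_hat = mi / (1 - b1 ** t)  # bias-corrected first moment
        v_hat = vi / (1 - b2 ** t)  # bias-corrected second moment
        w_new.append(wi - lr * m_hat / (math.sqrt(v_hat) + eps))
    return w_new, m, v

# Toy problem: minimize f(w) = w1^2 + w2^2, so the true gradient is 2w.
# Gaussian noise on the gradient mimics the stochasticity of minibatch SGD.
random.seed(0)
w = [5.0, -3.0]
m, v = [0.0, 0.0], [0.0, 0.0]
for t in range(1, 501):
    grad = [2 * wi + random.gauss(0, 0.1) for wi in w]
    w, m, v = adam_step(w, grad, m, v, t)
print(w)  # both coordinates end up near 0
```

The key design difference: SGD scales every coordinate by the same learning rate, so it needs a hand-tuned decay schedule to cope with gradient noise, while Adam normalizes each coordinate by a running estimate of its gradient magnitude, yielding a roughly bounded, self-tuning step size.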
Name(s): IMPRS Office Team
Phone: 0681 93251802
EMail: --email address not disclosed on the web
Video Broadcast
Video Broadcast: No
Created by: Stephanie Jörg/MPI-INF, 06/24/2016 12:31 PM
Last modified: halma/MPII/DE, 11/05/2018 12:35 PM