Max-Planck-Institut für Informatik


What and Who
Title: Variational Bayes In Private Settings
Speaker: Mijung Park
Coming from: Amsterdam Machine Learning Lab
Speaker's Bio: Mijung Park completed her Ph.D. in the Department of Electrical and Computer Engineering at The University of Texas at Austin, under the supervision of Prof. Jonathan Pillow (now at Princeton University) and Prof. Alan Bovik. She worked with Prof. Maneesh Sahani as a postdoc at the Gatsby Computational Neuroscience Unit at University College London. Currently, she works with Prof. Max Welling as a postdoc in the Informatics Institute at the University of Amsterdam. Her research focuses on developing practical algorithms for privacy-preserving data analysis. Previously, she worked on a broad range of topics, including approximate Bayesian computation (ABC), probabilistic manifold learning, active learning for drug combinations and neurophysiology experiments, and Bayesian structure learning for sparse and smooth high-dimensional parameters.
Event Type: SWS Colloquium
Visibility: D1, D2, D3, D4, D5, SWS, RG1, MMCI
Level: AG Audience
Date, Time and Location
Date: Friday, 3 March 2017
Duration: -- Not specified --
Bayesian methods are frequently used for analysing privacy-sensitive datasets,
including medical records, emails, and educational data, and there is a growing
need for practical Bayesian inference algorithms that protect the privacy of
individuals' data. To this end, we provide a general framework for
privacy-preserving variational Bayes (VB) for a large class of probabilistic
models, called the conjugate exponential (CE) family. Our primary observation
is that when models are in the CE family, we can privatise the variational
posterior distributions simply by perturbing the expected sufficient
statistics of the complete-data likelihood. For widely used non-CE models
with binomial likelihoods (e.g., logistic regression), we exploit the
Polya-Gamma data augmentation scheme to bring such models into the CE
family, such that inferences in the modified model resemble the original
(private) variational Bayes algorithm as closely as possible. The
iterative nature of variational Bayes presents a further challenge for
privacy preservation, as each iteration increases the amount of noise
needed. We overcome this challenge by combining: (1) a relaxed notion of
differential privacy, called concentrated differential privacy, which
provides a tight bound on the privacy cost of multiple VB iterations and
thus significantly decreases the amount of additive noise; and (2) the
privacy amplification effect of subsampling mini-batches from large-scale
data in stochastic learning. We empirically demonstrate the effectiveness
of our method in CE and non-CE models including latent Dirichlet
allocation (LDA), Bayesian logistic regression, and Sigmoid Belief
Networks (SBNs), evaluated on real-world datasets.
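The abstract's primary observation, that for conjugate exponential models privatising the variational posterior reduces to perturbing the expected sufficient statistics, can be illustrated with a toy sketch. This is not code from the talk: it applies the standard Gaussian mechanism to the sufficient statistic of a Beta-Bernoulli model, computed on a subsampled mini-batch. The noise scale `sigma` and batch size are illustrative placeholders; in the actual method they would be chosen by the concentrated-differential-privacy accounting and subsampling-amplification analysis described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_suff_stats(x_batch, sensitivity, sigma):
    """Perturb the expected sufficient statistics with Gaussian noise
    (the Gaussian mechanism); for Bernoulli data the statistic is the sum."""
    s = x_batch.sum(axis=0)
    noise = rng.normal(0.0, sigma * sensitivity, size=s.shape)
    return s + noise

# Toy Beta-Bernoulli conjugate model with hypothetical prior and data.
alpha0, beta0 = 1.0, 1.0
x = rng.integers(0, 2, size=1000).astype(float)

# Subsample a mini-batch (source of the privacy amplification effect),
# then rescale the batch statistic to the full-data scale.
batch = rng.choice(x, size=100, replace=False)
scale = len(x) / len(batch)

# One data point changes the sum by at most 1, so sensitivity = 1.
noisy_sum = privatize_suff_stats(batch[:, None], sensitivity=1.0, sigma=4.0)[0]
noisy_sum = np.clip(scale * noisy_sum, 0.0, len(x))

# Conjugate posterior update using only the privatised statistic.
alpha_post = alpha0 + noisy_sum
beta_post = beta0 + len(x) - noisy_sum
```

The point of the sketch is that the raw data never enter the posterior update directly: only the noised statistic does, so the update itself needs no further privacy analysis beyond the cost of releasing that statistic at each iteration.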

Name(s): Roslyn Stricker
Video Broadcast
Video Broadcast: Yes
To Location: Saarbrücken
To Building: E1 5
To Room: 029
