Sample compression schemes were introduced by Littlestone and Warmuth (1986) as an abstraction of the structure underlying many learning algorithms.
Roughly speaking, a sample compression scheme of size $k$ means that given an arbitrary list of labeled examples, one can retain only $k$ of them in a way that allows one to recover the labels of all the other examples in the list. Littlestone and Warmuth showed that compression implies PAC learnability for binary-labeled classes, and asked whether the converse holds.
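As a concrete illustration (my own, not part of the talk), the class of threshold functions $h_t(x) = 1$ iff $x \ge t$ admits a compression scheme of size 1: keeping the smallest positively labeled example suffices. A minimal Python sketch, with the function names chosen only for illustration:

def compress(sample):
    # sample: list of (x, label) pairs consistent with some threshold h_t.
    # Keep the smallest positively labeled point; if there is none,
    # compress to the empty set.
    positives = [x for (x, label) in sample if label == 1]
    return [min(positives)] if positives else []

def reconstruct(kept):
    # Rebuild a hypothesis from the retained examples; the empty
    # compression set encodes the all-negative hypothesis.
    if not kept:
        return lambda x: 0
    p = kept[0]
    return lambda x: 1 if x >= p else 0

# The reconstructed hypothesis recovers every label in the sample:
sample = [(0.5, 0), (1.2, 0), (2.0, 1), (3.7, 1)]
h = reconstruct(compress(sample))
assert all(h(x) == label for (x, label) in sample)

Correctness rests on consistency: every negative example lies below the true threshold, hence below the smallest positive one.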
We answer their question and show that every concept class $C$ with VC dimension $d$ has a sample compression scheme of size exponential in $d$.
The proof uses an approximate minimax phenomenon for binary matrices of low VC dimension, which may be of interest in the context of game theory.
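For context (a standard statement, not the paper's exact formulation), von Neumann's minimax theorem says that for an $m \times n$ matrix $M$,

$$\min_{p \in \Delta_m} \max_{j \le n} \sum_{i=1}^{m} p_i M_{i,j} \;=\; \max_{q \in \Delta_n} \min_{i \le m} \sum_{j=1}^{n} M_{i,j} q_j,$$

where $\Delta_m$ denotes the probability simplex. The approximate phenomenon alluded to above is, roughly, that when $M$ is binary with low VC dimension $d$, both players already have $\epsilon$-optimal mixed strategies supported on few rows or columns (of order $d/\epsilon^2$, by a VC-based sampling argument).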
Joint work with Amir Yehudayoff.
The talk will assume no prior knowledge of machine learning.
The talk will take 45 minutes.
Link to paper: http://eccc.hpi-web.de/report/2015/040/