Campus Event Calendar

Event Entry

What and Who

Optimal Machine Teaching Without Collusion

Sandra Zilles
University of Regina
SWS Colloquium

Dr. Sandra Zilles is a Professor of Computer Science at the University of Regina, where she holds a Canada Research Chair in Computational Learning Theory as well as a Canada CIFAR AI Chair. Her research on machine learning and artificial intelligence is funded by government agencies and industry partners and has led to over 100 peer-reviewed publications. Her main research focus is on theoretical aspects of machine learning, yet some of the methods developed in her lab have found applications in research on autonomous vehicles, in research on genetics, and in cancer research. Dr. Zilles is a member of the College of New Scholars, Artists and Scientists of the Royal Society of Canada, an Associate Editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence, and an Associate Editor for the Journal of Computer and System Sciences. She recently served on the Board of Directors for Innovation Saskatchewan and on the Board of Directors for the Pacific Institute for the Mathematical Sciences (PIMS).
AG 1, SWS, RG1  
Expert Audience
English

Date, Time and Location

Tuesday, 23 November 2021
14:00
60 Minutes
Virtual talk
Saarbrücken

Abstract

In supervised machine learning, in an abstract sense, a concept from a given reference class has to be inferred from a small set of labeled examples. Machine teaching refers to the inverse problem, namely the problem of compressing any concept in the reference class to a "teaching set" of labeled examples in such a way that the concept can be reconstructed from it. The goal is to minimize the worst-case teaching set size over all concepts in the reference class, while adhering to conditions that disallow unfair collusion between the teacher and the learner. Applications of machine teaching include multi-agent systems and program synthesis.
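To make the objective concrete, the following is a minimal toy sketch, not taken from the talk: the domain, the concept class, and all names (`CLASS`, `teaching_set_size`, etc.) are invented for illustration. It brute-forces, for each concept in a tiny class, the smallest labeled sample with which that concept is the only consistent one, and then takes the worst case over the class:

```python
from itertools import combinations

# Hypothetical toy setup: concepts are subsets of a 3-element domain.
DOMAIN = [0, 1, 2]
CLASS = [frozenset(), frozenset({0}), frozenset({0, 1}), frozenset({0, 1, 2})]

def consistent(concept, sample):
    # sample is a set of (x, label) pairs; label is True iff x is in the concept.
    return all((x in concept) == label for x, label in sample)

def teaching_set_size(target, concept_class):
    # Smallest sample size for which target is the ONLY consistent concept.
    for k in range(len(DOMAIN) + 1):
        for xs in combinations(DOMAIN, k):
            sample = {(x, x in target) for x in xs}
            if [c for c in concept_class if consistent(c, sample)] == [target]:
                return k
    return None

# Worst-case teaching set size over the class (the "teaching dimension").
td = max(teaching_set_size(c, CLASS) for c in CLASS)
```

In this toy class the two extreme concepts need only one example each, while the middle concepts need two, so the worst case is 2.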
In this presentation, it is first shown how preference relations over concepts can be used to guarantee collusion-free teaching and learning. Intuitive examples are presented in which quite natural preference relations result in data-efficient collusion-free teaching of complex concept classes. Further, it is demonstrated that optimal collusion-free teaching cannot always be attained by the preference-based approach. Finally, we challenge the standard notion of collusion-freeness and show that a more stringent notion characterizes teaching with the preference-based approach.
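As a toy illustration of the preference-based idea (again hypothetical and not from the talk; the class and the "prefer smaller concepts" ordering are invented here): if the learner always outputs its most preferred concept among those consistent with the sample, the teacher only has to rule out concepts the learner would prefer over the target, which can shrink teaching sets compared to the classical setting:

```python
from itertools import combinations

# Same hypothetical toy class as above: nested subsets of a 3-element domain.
DOMAIN = [0, 1, 2]
CLASS = [frozenset(), frozenset({0}), frozenset({0, 1}), frozenset({0, 1, 2})]

def preference_key(c):
    # Invented preference relation: smaller concepts are preferred.
    return (len(c), sorted(c))

def consistent(concept, sample):
    return all((x in concept) == label for x, label in sample)

def learner(sample):
    # The learner outputs its most preferred concept consistent with the sample.
    candidates = [c for c in CLASS if consistent(c, sample)]
    return min(candidates, key=preference_key) if candidates else None

def pref_teaching_size(target):
    # Smallest sample on which the learner's output is exactly the target.
    for k in range(len(DOMAIN) + 1):
        for xs in combinations(DOMAIN, k):
            sample = {(x, x in target) for x in xs}
            if learner(sample) == target:
                return k
    return None
```

In this toy class, every concept can now be taught with at most one example; for instance the middle concept, which classically needs two examples, is taught with a single positive example, since the learner already prefers it over the only other consistent concept.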
This presentation summarizes joint work with Shaun Fallat, Ziyuan Gao, David G. Kirkpatrick, Christoph Ries, Hans U. Simon, and Abolghasem Soltani.

Contact

Claudia Richter
+49 681 9303 9103
email hidden
Zoom (passcode not shown here; please ask your secretary)

Claudia Richter, 11/12/2021 12:37 -- Created document.