Campus Event Calendar

Event Entry

What and Who

Enhancing Explainability and Scrutability of Recommender Systems

Azin Ghazimatin
MMCI
Doctoral colloquium (Promotionskolloquium)
AG 1, AG 2, AG 3, INET, AG 4, AG 5, D6, SWS, RG1, MMCI
Public Audience
English

Date, Time and Location

Thursday, 9 December 2021
11:00
60 Minutes
Virtual talk
Saarbrücken

Abstract

Our increasing reliance on complex algorithms for recommendations calls for models and methods for explainable, scrutable, and trustworthy AI. While explainability is required for understanding the relationships between model inputs and outputs, a scrutable system allows us to modify its behavior as desired. These properties help bridge the gap between our expectations and the algorithm’s behavior and thereby boost our trust in AI. Aiming to cope with information overload, recommender systems play a crucial role in filtering content (such as products, news, songs, and movies) and shaping a personalized experience for their users. Consequently, there is a growing demand from information consumers for proper explanations of their personalized recommendations. These explanations aim to help users understand why certain items are recommended to them and how their previous inputs to the system relate to the generation of such recommendations. Moreover, when users receive undesirable content, explanations can carry valuable information on how the system’s behavior can be modified accordingly. In this thesis, we present our contributions towards the explainability and scrutability of recommender systems:
- We introduce a user-centric framework, FAIRY, for discovering and ranking post-hoc explanations for the social feeds generated by black-box platforms.
- We propose a method, PRINCE, to facilitate provider-side explainability by generating action-based, counterfactual, and concise explanations.
- We propose a human-in-the-loop framework, ELIXIR, for enhancing scrutability and, in turn, improving recommendation models by leveraging user feedback on explanations.

Contact

Gerhard Weikum
+49 681 9325 5000
(email hidden)
