Campus Event Calendar

Event Entry

What and Who

Don't Confound Yourself: Causality from Biased Data

David Kaltenpoth
CISPA Helmholtz Center for Information Security
Doctoral Colloquium (Promotionskolloquium)
AG 1, AG 2, AG 3, INET, AG 4, AG 5, D6, SWS, RG1, MMCI  
Public Audience
English

Date, Time and Location

Monday, 25 November 2024
15:30 (60 minutes)
Building E1 4, Room 024
Saarbrücken

Abstract

Machine learning has achieved remarkable success in predictive tasks
across diverse domains, from autonomous cars to LLMs. Yet this predictive
prowess masks a fundamental limitation: ML systems excel at capturing
statistical associations in observational data but fail to uncover the
underlying causal mechanisms that generate these patterns. While machine
learning models may accurately predict patient outcomes or identify
tumors in medical imaging, they cannot answer crucial counterfactual
questions about how those outcomes would change under novel actions or
policy changes.
A fundamental obstacle to understanding causation is the pervasive
influence of unmeasured confounding and selection bias in observational
data. Unmeasured confounding occurs when hidden variables influence both
our observed predictors and outcomes, creating spurious correlations
that ML models eagerly learn but that do not represent genuine causal
relationships. Selection bias further compounds this problem by
systematically distorting our sample in ways that may make
generalization to a broader class of instances impossible.
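
The mechanism behind both problems can be illustrated with a small simulation. The sketch below is only a hypothetical illustration, not material from the thesis: a hidden variable Z drives both X and Y, so the two are strongly correlated even though X has no causal effect on Y, and restricting the sample with a selection rule then distorts that correlation further.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder Z drives both the predictor X and the outcome Y;
# X itself has no causal effect on Y.
z = rng.normal(size=n)
x = z + rng.normal(scale=0.5, size=n)
y = z + rng.normal(scale=0.5, size=n)

# A strong correlation appears despite the absence of any causal link X -> Y.
print("corr(X, Y) in the full sample:", np.corrcoef(x, y)[0, 1])

# Selection bias: keep only records where X + Y exceeds a threshold,
# mimicking data collected from a non-representative subpopulation.
selected = (x + y) > 1.0
print("corr(X, Y) in the selected sample:",
      np.corrcoef(x[selected], y[selected])[0, 1])

Running such a simulation, the full-sample correlation is large purely because of Z, and the correlation in the selected subsample shifts noticeably even though the causal structure is identical in both cases.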
These challenges cannot be overcome simply by collecting more data or
building more sophisticated predictive models. They require a formal
framework for reasoning about the conditions under which we can expect
our models to recover the underlying causal graph. In this thesis we
provide one such framework, from which we derive conditions under which
accurate causal networks and effects can be discovered, allowing us to
deal with partially observed systems under novel conditions.

Contact

Petra Schaaf
+49 681 9325 5000
