AI is rapidly advancing and transforming industries, but its full potential in real-world applications is often hindered by issues of reliability and trustworthiness. A critical challenge lies in addressing spurious correlations: patterns in data that models rely on for predictions but that are not truly relevant to the task. Reliance on these spurious features can lead to significant failures, especially when models encounter distribution shifts.
In this talk, I will discuss my ongoing research, which demonstrates that widely used models, particularly vision-language models such as CLIP, often lack robustness to spurious correlations, even in real-world applications. I will also explore how to enhance the robustness and generalization of AI systems by efficiently mitigating their reliance on spurious features. Additionally, I will share what inspired me to pursue this line of research and outline broader directions I hope to explore in the future to contribute to the development of dependable AI systems.