Real-World Challenges of Human-AI Interaction: Transparency, Alignment, and Effectiveness
Stephanie Milani
CMU
Talk
Stephanie Milani is a final-year Ph.D. candidate in the Machine Learning Department at CMU, where she is advised by Fei Fang. Her research addresses real-world, human-centered challenges of machine learning systems with a sequential decision-making component, including human-AI interaction, transparency, and alignment. Stephanie was named a Future Leader in Responsible Data Science & AI. Her work has been published at top machine learning and human-computer interaction venues, recognized with an outstanding paper award, and featured in Science News Explores, Nature, Scientific American, and BBC News.
AG 1, AG 2, AG 3, INET, AG 4, AG 5, D6, SWS, RG1, MMCI
We are rapidly moving toward a world in which autonomous AI agents perform a wide range of tasks on behalf of, or in collaboration with, humans. Critical to these settings is that an AI agent's actions directly influence both the environment (for example, by editing software) and the people it interacts with (by collaborating, teaching, and learning). A primary challenge is developing these agents from a human-centered perspective: we want AI that is transparent, so it can be easily inspected; aligned with our objectives; and effective at the desired task. Currently, these agents often depend on deep neural networks, which are notoriously opaque, exacerbating the transparency problem. Moreover, they are often deployed in applications where specifying objectives is difficult, which can lead to misalignment. Finally, AI agents may perform well technically yet fail to meet users' needs because of insufficient attention to the real-world contexts in which they are deployed. My research aims to address these challenges and build the next generation of human-centered machine learning by integrating techniques from machine learning and human-computer interaction. To that end, this talk surveys my past and ongoing work in three key areas: i) training transparent reinforcement learning agents, ii) understanding how humans evaluate and perceive AI agent behavior, toward the goal of alignment, and iii) designing human-AI interaction frameworks from a human-centered perspective.