Tanusree Sharma is a Ph.D. candidate in Informatics at the University of Illinois at Urbana-Champaign, advised by Yang Wang. She works at the intersection of usable security and privacy, human-centered AI, and decentralized governance, using human-centered methods to design, build, and study systems that address power imbalances in technology design and transparency in complex socio-technical systems (e.g., AI). Tanusree has authored more than 15 publications in premier academic venues across HCI, security, and privacy (e.g., Nature, ACM CHI, USENIX Security). Her work is supported by NSF, Meta, and OpenAI, and she was awarded the OpenAI "Democratic Input to AI" Grant as part of her dissertation. Her work has been covered by media outlets such as Nature and Forbes, and her research is deeply influenced by her upbringing in her home country, Bangladesh. You can find out more about Tanusree at https://tanusreesharma.github.io/
AG 1, AG 2, AG 3, INET, AG 4, AG 5, D6, SWS, RG1, MMCI
Advancements in Artificial Intelligence (AI) are impacting our lives, raising concerns ranging from data collection and social alignment to the resilience of AI models. A major criticism of AI development is the lack of transparency in design and decision-making about AI behavior, which can lead to adverse outcomes such as discrimination, lack of inclusivity and representation, breaches of legal rules, and privacy and security risks. Underserved populations, in particular, can be disproportionately affected by these design decisions. Conventional approaches to soliciting people's input, such as interviews, surveys, and focus groups, have limitations: they often lack consensus, coordination, and regular engagement. In this talk, I will present two examples of sociotechnical interventions for democratic and ethical AI. First, to address the need for ethical dataset creation in AI development, I will present a novel method, "BivPriv," which draws ideas from accessible computing and computer vision to create an inclusive private visual dataset with blind users as contributors. Then I will discuss my more recent work on "Inclusive.AI," funded by OpenAI, which addresses concerns of social alignment by providing a democratic platform with decentralized governance mechanisms for scalable user interaction and integrity in decision-making processes related to AI.