Good AI can enrich understanding of data by building explanations to highlight what is important. These explanations can be used to reveal issues in data or drive scientific understanding when an AI model outperforms human experts. In this session, I will cover some challenges and pitfalls in developing explanations with AI models and lay out techniques to build and evaluate explanations. As a motivating example along the way, I will discuss our work on detecting new-onset diabetes from electrocardiograms where AI significantly outperforms electrophysiologists.
Rajesh Ranganath is an assistant professor at NYU's Courant Institute of Mathematical Sciences and the Center for Data Science. He is also affiliate faculty in the Department of Population Health at NYUMC. His research focuses on approximate inference, causal inference, probabilistic models, and machine learning for healthcare. Rajesh completed his PhD at Princeton University and his BS and MS at Stanford University. He has won several awards, including the Porter Ogden Jacobus Fellowship, given to the top four doctoral students at Princeton University, the Savage Award in Theory and Methods, and an NSF CAREER Award.