Sameer Singh

Debugging and Explaining Machine Learning Models

Machine learning is at the forefront of many recent advances in science and technology, enabled in part by increasingly sophisticated models and algorithms. As a consequence of this complexity, however, machine learning models essentially act as black boxes to their users, making it difficult to understand, predict, or detect bugs in their behavior. For example, determining when a model is “good enough” is challenging, since held-out accuracy metrics significantly overestimate real-world performance. I will describe our research on approaches that explain the predictions of any classifier in an interpretable and faithful manner, and on automated techniques to detect bugs that arise naturally when a model is deployed. I will cover several ways in which we summarize the relationship between a model's inputs and its predictions: as linear weights, as precise rules, and as counter-examples, and present examples that demonstrate their utility in understanding and debugging black-box machine learning algorithms.
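To give a flavor of the "linear weights" style of explanation, the sketch below uses the open-source lime package (from the LIME work co-authored by Singh) to explain a single prediction of a black-box classifier. The dataset, model, and parameter choices here are purely illustrative, not the specific setup used in the talk; it assumes lime and scikit-learn are installed.

# Minimal sketch: explain one prediction of a black-box classifier with a
# locally faithful sparse linear model, via the `lime` package.
# Assumes `pip install lime scikit-learn`; dataset and model are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Any classifier works: LIME only needs access to a predict_proba function.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    discretize_continuous=True)

# Perturb the instance, query the black box on the perturbations, and fit a
# sparse linear model whose weights summarize the local decision surface.
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")

The printed (feature, weight) pairs are the linear-weight summary mentioned above: each weight indicates how much that feature pushed this particular prediction toward or away from the class, for this instance only.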

Dr. Sameer Singh is an Assistant Professor of Computer Science at the University of California, Irvine (UCI). He works on the robustness and interpretability of machine learning algorithms, along with models that reason with text and structure for natural language processing. Sameer was a postdoctoral researcher at the University of Washington and received his PhD from the University of Massachusetts, Amherst, during which time he also worked at Microsoft Research, Google Research, and Yahoo! Labs. He was selected as a DARPA Riser and has received the Adobe Research Data Science Award, the grand prize in the Yelp dataset challenge, the Yahoo! Key Scientific Challenges award, and the UMass Graduate School fellowship. His group has received funding from the Allen Institute for AI, NSF, DARPA, Adobe Research, and FICO.
