Antonio Hung

What's Inside the Box: A Guide on How to Understand What Your Model is Doing

As machine learning models grow larger, it becomes harder to understand their inner workings. And even as these models match or beat human-level performance on many tasks, that performance is of limited use if you don't understand why the model predicts a certain result for a given feature set. In any domain, and especially in insurance, we as data scientists and ML engineers should know why a model predicts a certain outcome and be able to deliver those findings to stakeholders and even our customers. We also need to recognize scenarios where simple models, which are easy to understand, are a better choice than more complicated, yet more accurate, deeper models.

This talk will cover why it is important to understand model predictions and how those predictions impact the customer. We'll also cover a few techniques for interpreting machine learning models and learn how to integrate these techniques into our machine learning pipelines. Finally, we'll talk about simpler models, and even rule-based models, which might not be the best in terms of accuracy but are more explainable.
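For a taste of the kind of technique the talk covers, here is a minimal sketch using SHAP values, one widely used interpretability method. The `shap` library, the scikit-learn model, and the public dataset are illustrative assumptions, not necessarily what the talk itself uses:

```python
# A minimal sketch of one interpretability technique (SHAP values),
# assuming the open-source `shap` library and a scikit-learn tree model.
# The public diabetes dataset stands in for real insurance features.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a simple model on a public dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Each SHAP value estimates how much a feature pushed one prediction
# away from the average model output, giving a per-prediction explanation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summarize which features drive the model's predictions across the dataset.
shap.summary_plot(shap_values, X)
```

Explanations like these are what let you answer a stakeholder's "why did the model decide this?" for a single prediction, rather than only reporting aggregate accuracy.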

Tony Hung is a Data Engineer Lead at Progressive, where he builds systems that accelerate the use of data science by removing the burden of infrastructure provisioning, enabling the sharing of curated data among different teams, and ensuring that artificial intelligence is used responsibly and in compliance with upcoming AI regulations. In previous lives, he worked as a machine learning practitioner at both small and large companies, building models for NLP, audio, and financial time-series data.
