Léa Genuit

ML Fairness 2.0 - Intersectional Group Fairness

As more companies adopt AI, more people are questioning its impact on society, especially with respect to algorithmic fairness. Yet most metrics used to measure the fairness of AI algorithms today fail to capture the critical nuance of intersectionality; instead, they take a binary view of fairness, e.g., protected vs. unprotected groups. In this talk, we’ll discuss the latest research on intersectional group fairness using worst-case comparisons (sketched in code after the takeaways below).

Key Takeaways:

* The importance of fairness in AI

* Why AI fairness is even more critical today

* Why intersectional group fairness is critical to improving AI fairness
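
To make “worst-case comparisons” concrete, here is a minimal sketch in Python, assuming a demographic-parity-style metric (the positive-outcome rate per subgroup) and a min/max ratio taken across every intersection of the protected attributes. The `worst_case_disparity` helper, column names, and toy data are hypothetical illustrations, not Fiddler’s API or the exact metric from the talk.

```python
import pandas as pd

def worst_case_disparity(df, protected_cols, outcome_col):
    """Worst-case (min/max) ratio of positive-outcome rates across
    all intersectional subgroups; 1.0 means perfect parity."""
    # Group by every combination of the protected attributes, e.g.
    # each (gender, race) pair, not gender and race separately.
    rates = df.groupby(protected_cols)[outcome_col].mean()
    if rates.max() == 0:
        return 1.0  # no positive outcomes anywhere: nothing to compare
    return rates.min() / rates.max()

# Hypothetical toy data (column names are illustrative, not a real schema).
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"],
    "race":     ["A", "B", "A", "B", "A", "B", "B", "A"],
    "approved": [1,   1,   1,   1,   1,   1,   0,   1],
})

# Single-attribute audits look mild...
print(worst_case_disparity(df, ["gender"], "approved"))          # 0.75
print(worst_case_disparity(df, ["race"], "approved"))            # 0.75
# ...but the intersectional worst case is markedly worse.
print(worst_case_disparity(df, ["gender", "race"], "approved"))  # 0.5
```

On this toy data, auditing gender or race alone yields a worst-case ratio of 0.75, but auditing their intersections drops it to 0.5: the disadvantage concentrated in one (gender, race) subgroup is invisible when each attribute is checked on its own, which is exactly the gap intersectional group fairness addresses.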

Léa is a data scientist at Fiddler AI. Believing that all AI products should be ethical, she focuses her research on transparency in AI algorithms, including explainability, fairness, and bias. When she’s away from her laptop, she can be found running through a cool ocean breeze in the Presidio of San Francisco.
