Kyra Yee

Algorithmic Bias Bounties: A Community-Driven Approach to Surfacing Harms

Proactively detecting bias in machine learning models is difficult, and companies often don't learn about harms until they have already reached the public. We want to change that. We were inspired by how bug bounties in the security world have established best practices for identifying and mitigating vulnerabilities in order to protect the public. We hope bias bounties can similarly cultivate a community of people focused on ML ethics and help us identify a broader range of issues than we could on our own. This work is motivated by the belief that direct feedback from the communities affected by our algorithms helps us design products that better serve all people and communities. In this session, we will review some of the challenges of hosting a bias bounty and what we learned from participants' submissions.

Kyra is a research engineer on the machine learning ethics, transparency, and accountability team at Twitter, where she works on methods for detecting and mitigating algorithmic harms. Prior to Twitter, she was a resident at Meta (formerly Facebook) AI Research, working on machine translation. She is passionate about working toward the safe and equitable deployment of technology.
