Is Fair AI Possible? - Yes, and We Have All We Need For It!
As AI-based technology started the next industrial revolution, it exposed many social inequalities that were mostly hidden and have suddenly become obvious to everyone. Unintentional bias in data and AI solutions, along with the ability to scale it, makes the use of the new technology unequal and unfair to people and social groups. Is AI technology responsible for that? No. The technology itself is not fair or unfair; the way we use it makes it fair or unfair. We have to evolve AI technologies to comply with our social norms, and we have everything we need to achieve that.
*The new AI technology is a great amplifier: it scales up both its achievements and its deficiencies. Biased data and solutions make us pose the question: Is the new technology bad for us?
*The answer is no. The technology itself does not know the social norms that we want. We have to use the technology in a way that complies with our social norms.
*I will show how the problems of data bias and AI unfairness can be solved by using AI technology itself, giving a scalable way to automatically correct data biases and find fair solutions that we can accept and use.
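As one concrete illustration of what "automatically correcting data biases" can mean in practice, here is a minimal sketch of a standard preprocessing idea: reweighting training examples so that a sensitive group attribute and the label become statistically independent in the weighted data. This is a generic, well-known technique shown for illustration only, not necessarily the method presented in the talk; the function name `reweigh` and the toy data are assumptions.

```python
from collections import Counter

def reweigh(groups, labels):
    """Weight each example by P(group) * P(label) / P(group, label).

    In the reweighted dataset the group attribute and the label are
    statistically independent, so no group is tied to a favored label.
    """
    n = len(labels)
    p_group = Counter(groups)   # counts per group value
    p_label = Counter(labels)   # counts per label value
    p_joint = Counter(zip(groups, labels))  # counts per (group, label) pair
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy biased data: group "A" receives the positive label more often than "B".
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 0, 0, 1]
weights = reweigh(groups, labels)

def weighted_positive_rate(group):
    pos = sum(w for g, y, w in zip(groups, labels, weights) if g == group and y == 1)
    tot = sum(w for g, w in zip(groups, weights) if g == group)
    return pos / tot

# After reweighting, both groups have the same weighted positive-label rate.
print(weighted_positive_rate("A"), weighted_positive_rate("B"))
```

Training any off-the-shelf model with these per-example weights is then a scalable, fully automatic way to reduce this particular form of label bias, which is the spirit of the claim above.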
Michael Tetelman holds a PhD in Theoretical and Mathematical Physics. He works on developing Bayesian methods for Neural Networks, Deep Learning and Artificial Intelligence. He is currently researching Machine Learning methods for AI Fairness, specifically AI methods for removing biases in data, self-supervised learning for automatic data labeling, and correcting labeling errors. In the past he did research on Machine Learning methods for Optical Character Recognition, Image Processing and Data Compression. His work in Physics includes theoretical and applied studies of Phase Transitions and Quantum Field Theory.