Apostol Vassilev

Bridging the Ethics Gap Surrounding AI

This session will motivate the need for a comprehensive socio-technical approach to assessing the impact of AI on individuals and society. While there are many approaches for ensuring the technology we use every day is safe and secure, there are factors specific to AI that require new perspectives. AI systems are often placed in contexts where they can have the most consequential impact on people. Whether that impact is helpful or harmful is a fundamental question of the field of Trustworthy and Responsible AI. Trustworthy and Responsible AI is not just about whether a given AI system is biased, fair, or ethical, but whether it does what is claimed. Many practices exist for responsibly producing AI: transparency; test, evaluation, validation, and verification of AI systems and datasets; human factors such as participatory design techniques and multi-stakeholder approaches; and a human-in-the-loop. However, none of these practices, individually or in concert, is a panacea against bias, and each brings its own set of pitfalls. What is missing from current remedies is guidance from a broader socio-technical perspective that connects these practices to societal values. To successfully manage the risks of AI bias, we must operationalize these values and create new norms around how AI is built and deployed. This is the approach taken in the recent NIST SP 1270: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, https://doi.org/10.6028/NIST.SP.1270.

Apostol Vassilev leads a Research Team at NIST. His team focuses on a wide range of AI problems: AI bias identification and mitigation, meta-learning with large language models for various NLP tasks, robustness and resilience of AI systems, and applications of AI for mitigating cybersecurity attacks. Apostol’s scientific background is in mathematics (Ph.D.) and computer science (M.S.), but he is also interested in the social aspects of using AI technology and advocates for a comprehensive socio-technical approach to evaluating AI’s impact on individuals and society.
