15 - 16 June 2022

Trusted AI Summit Schedule

MLOps Summit San Francisco



  • 08:00

    Coffee & Registration

  • 09:00

    Trusted AI Stage: Chair Welcome

  • 09:15

    Building AI Responsibly - From the Ground Up

  • Alexandra Ross

    SPEAKER

    Alexandra Ross - Senior Director, Senior Data Protection, Use & Ethics Counsel - Autodesk

    Building AI Responsibly – From the Ground Up

    As the use of artificial intelligence, machine learning, and Big Data continues to develop across industries, companies face increasingly complex legal, ethical, and operational challenges. Companies that create or work with AI offerings to support or enhance products or business methods need guidance to succeed. In this presentation by Autodesk's legal and data ethics leads, learn how best to build and maintain an ethics-by-design program, leverage your existing privacy and security program framework, and manage stakeholders.

    Key Takeaways:

    • Understand current best practices for ensuring compliance with key regulations focused on AI.

    • Learn how to engage stakeholders, leverage resources and build, staff and maintain an ethics program.

    • Pick up tips on building an ethical data culture, governance models, and training and awareness programs.

    Alexandra Ross is Senior Director, Senior Data Protection, Use & Ethics Counsel at Autodesk, Inc. where she provides legal, strategic and governance support for Autodesk’s global privacy, security, data use and ethics programs. She is also an Advisor to BreachRx and an Innovators Evangelist for The Rise of Privacy Tech (TROPT). Previously she was Senior Counsel at Paragon Legal and Associate General Counsel for Wal-Mart Stores. She is a certified information privacy professional (CIPP/US, CIPP/E, CIPM, CIPT, FIP and PLS), holds a law degree from UC Hastings College of Law, and a B.S. in theater from Northwestern University. Alexandra is a recipient of the 2019 Bay Area Corporate Counsel Award – Privacy.

  • Alec Shuldiner

    SPEAKER

    Alec Shuldiner - Data Ethics Program Lead - Autodesk

    Alec Shuldiner, PhD, leads Autodesk’s Data Ethics Program, a key component of the company’s trusted data practices. He has a background in big data, compliance, and technological systems, and is an occasional IoT researcher and commentator.

  • 09:50

    Summit Presentation - Building a Trustworthy AI Framework

  • 10:15
    Nikon Rasumov

    Privacy and Fairness and MLX

    Nikon Rasumov - Product Manager - Meta

    Privacy and Fairness and MLX

    AI data and feature engineering carries important privacy, fairness, and experience requirements, including lineage tracking, purpose limitation, retention, data minimization, and protection against unauthorized access, as well as avoiding label imbalance, label bias, and model bias. I will talk about some of the techniques that address these requirements.
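
    The mechanics are easiest to see in code. Below is a minimal sketch, in Python, of two of the checks this abstract names: a purpose-limitation gate over a feature registry and a label-imbalance flag. Everything here (FEATURE_PURPOSES, check_purpose, label_imbalance, the threshold) is an illustrative assumption, not Meta's internal tooling.

    ```python
    # Minimal illustrative sketch; names are hypothetical, not Meta APIs.
    from collections import Counter

    # Hypothetical registry mapping each feature to its approved purposes.
    FEATURE_PURPOSES = {
        "user_age_bucket": {"ads_ranking"},
        "recent_searches": {"search_ranking"},
    }

    def check_purpose(features, purpose):
        """Purpose limitation: reject any feature not approved for `purpose`."""
        violations = [f for f in features
                      if purpose not in FEATURE_PURPOSES.get(f, set())]
        if violations:
            raise PermissionError(f"not approved for '{purpose}': {violations}")

    def label_imbalance(labels, threshold=0.8):
        """Flag a label set whose majority-class share exceeds `threshold`."""
        counts = Counter(labels)
        majority_share = max(counts.values()) / sum(counts.values())
        return majority_share > threshold, majority_share

    check_purpose(["user_age_bucket"], purpose="ads_ranking")   # passes silently
    flagged, share = label_imbalance([0] * 9 + [1])             # 90% negatives
    print(f"imbalanced={flagged}, majority share={share:.0%}")  # True, 90%
    ```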

    Nikon Rasumov has 10+ years of experience building B2C and B2B start-ups from the ground up. He holds a Ph.D. from Cambridge University in computational neuroscience as well as affiliations with MIT and Singularity University. As an expert in information-driven product design, his publications and patents deal with how to minimize vulnerabilities resulting from sharing too much information. Nikon’s product portfolio includes Symantec Cyber Resilience Readiness™, SecurityScorecard Automatic Vendor Detection™, Symantec CyberWriter™, and Cloudflare Bot Management, along with various other insurance and security analytics platforms. Currently, Nikon is responsible for Privacy and Developer Experience of AI Data and Feature Engineering at Facebook.

  • 10:45

    Morning Break

  • 11:00

    Training Your Models for Trusted Outcomes

  • 11:35

    Summit Presentation - Trusted AI in the Enterprise

  • 12:00

    Panel Discussion: Create Trusted Models with Explainable AI

  • Frankie Cancino

    PANELIST

    Frankie Cancino - Data Scientist - Mercedes-Benz Research & Development

    Frankie Cancino is a Data Scientist at Mercedes-Benz Research & Development, working on applied machine learning initiatives. Prior to joining Mercedes-Benz R&D, Frankie was a Senior AI Scientist at Target AI, focused on methods to improve demand forecasting and anomaly detection. He is also the founder and organizer of Data Science Minneapolis, a community that brings together professionals, researchers, data scientists, and AI enthusiasts.

  • 12:45
    Sonu Durgia

    PANELIST

    Sonu Durgia - Product Lead, Responsible AI - Facebook

  • Lunch

  • 13:45
    Kyra Yee

    Algorithmic Bias Bounties: A Community-Driven Approach to Surfacing Harms

    Kyra Yee - Machine Learning Research Engineer - Twitter

    Algorithmic Bias Bounties: A Community-Driven Approach to Surfacing Harms

    Proactively detecting bias in machine learning models is difficult, and companies often fail to find out about harms until they’ve already reached the public. We want to change that. We were inspired by how bug bounties have been used in the security world to establish best practices for identifying and mitigating vulnerabilities in order to protect the public. We hope bias bounties can be used similarly to cultivate a community of people focused on ML ethics to help us identify a broader range of issues than we would be able to on our own. This is motivated by the belief that direct feedback from the communities who are affected by our algorithms helps us design products to better serve all people and communities. In this session, we will review some of the challenges of hosting a bias bounty and what we learned from people’s submissions.

    Kyra is a research engineer on the machine learning ethics, transparency, and accountability team at Twitter, where she works on methods for detecting and mitigating algorithmic harms. Prior to Twitter, she was a resident at Meta (formerly Facebook) AI Research, working on machine translation. She is passionate about working towards the safe and equitable deployment of technology.

  • 14:20

    Round Table Discussions

  • Shilpi Agarwal

    Round Table Topic Leader: Data Ethics in Business - The Cornerstone of Customer Trust

    Shilpi Agarwal - Founder & Chief Data Ethics Officer - DataEthics4All

    Shilpi Agarwal is a Data Philanthropist, Adjunct Faculty at Stanford, and an MIT $100K Launch Mentor.

    Armed with the technical skills from her Bachelor of Engineering in Computer Science, the design-thinking skills from her Master’s in Design, and 20+ years of business and marketing know-how gained as a Marketing Consultant for brands large and small, Shilpi started DataEthics4All, troubled by the unethical use of data around her on social media, in business, and in political campaigns.

    DataEthics4All is a community that brings the STEAM in AIᵀᴹ Movement to youth and celebrates the Ethics 1stᵀᴹ Champions of today and tomorrow. It has pledged to help 5 million economically disadvantaged students in the next 5 years by breaking down barriers to entry in tech and raising awareness of the ethical use of data in data science and artificial intelligence in the enterprise, working toward a better data and AI world.

  • Subramanian "Subbu" Iyer

    Round Table Topic Leader: AI vs Simpler Solutions

    Subramanian "Subbu" Iyer - Sr. Director of AI - Target

  • Ban Kawas

    Round Table Topic Leader: Explainable AI (XAI) and Its Role in Building Trusted AI

    Ban Kawas - Senior Research Scientist - Reinforcement Learning - Meta

    Ban is a Senior AI Research Scientist at Meta. She is working on democratizing reinforcement learning and enabling its use in the real world, spanning several application areas from compiler optimization to embodied AI. Ban and her team are developing ReAgent, an end-to-end platform for applied RL; check out the open-source version at https://reagent.ai/

  • Naman Kohli

    Round Table Topic Leader: Causal Analysis

    Naman Kohli - Applied Scientist - Amazon

  • Lakshmi Ravi

    Round Table Topic Leader: Causal Analysis

    Lakshmi Ravi - Applied Scientist - Amazon

    Selecting ML Algorithms and Validating

    ML practitioners often face a dilemma in identifying the right ML model for their problem space. In this talk, I will go over common questions that help narrow down the right next step. A developed model must meet certain validation metrics, and the next common question is how the validation metrics proposed by scientists should be explained to business leaders so they can decide whether the model is eligible for deployment. The next step is to find mechanisms to develop and study online validation metrics; the online metrics of a launched ML model often require studying results in a treatment-control fashion. I will describe common development practices that help with A/B testing of experiments, the core significance check for which is sketched below.
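
    To make the treatment-control step concrete, here is a minimal sketch of a two-proportion z-test on an online conversion metric, the standard significance check behind such A/B comparisons. The function name and the example figures are illustrative assumptions, not Amazon tooling.

    ```python
    # Minimal illustrative two-proportion z-test for an A/B experiment.
    from math import sqrt, erf

    def two_proportion_ztest(conv_t, n_t, conv_c, n_c):
        """Return (z, two-sided p-value) comparing treatment vs. control rates."""
        p_t, p_c = conv_t / n_t, conv_c / n_c
        p_pool = (conv_t + conv_c) / (n_t + n_c)                # pooled rate under H0
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))  # standard error
        z = (p_t - p_c) / se
        phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))                 # normal CDF at |z|
        return z, 2 * (1 - phi)                                 # two-sided p-value

    # Hypothetical experiment: 530/10,000 conversions in treatment vs. 480/10,000
    # in control; ship only if the p-value clears the pre-agreed threshold.
    z, p = two_proportion_ztest(530, 10_000, 480, 10_000)
    print(f"z={z:.2f}, p={p:.3f}")  # z=1.61, p=0.106: not significant at 0.05
    ```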

    Lakshmi is an Applied Scientist at Amazon. She has been working with Amazon Machine Learning teams for the last 4.5 years, including Alexa’s NLP team, Behavior Analytics (a causal inference division in Amazon), and Amazon Music (improving the voice experience in Alexa).

  • 15:15

    Afternoon Networking Break

  • 15:45
    Aalok Shanbhag

    Overcoming 'Black Box' Model Challenges

    Aalok Shanbhag - Senior Machine Learning Engineer - Snap Inc.

  • 16:20

    Summit Presentation - Using AI for Good

  • 16:45
    Apostol Vassilev

    Bridging the Ethics Gap Surrounding AI

    Apostol Vassilev - Research Team Lead; AI & Cybersecurity Expert - National Institute of Standards and Technology (NIST)

    Bridging the Ethics Gap Surrounding AI

    This session will motivate the need for a comprehensive socio-technical approach to assessing the impact of AI on individuals and society. While there are many approaches for ensuring the technology we use every day is safe and secure, there are factors specific to AI that require new perspectives. AI systems are often placed in contexts where they can have the most consequential impact on people. Whether that impact is helpful or harmful is a fundamental question of the field of Trustworthy and Responsible AI, which is not just about whether a given AI system is biased, fair, or ethical, but whether it does what is claimed. Many practices exist for responsibly producing AI: transparency; test, evaluation, validation, and verification of AI systems and datasets; human factors such as participatory design techniques and multi-stakeholder approaches; and a human-in-the-loop. However, none of these practices, individually or in concert, is a panacea against bias, and each brings its own set of pitfalls. What is missing from current remedies is guidance from a broader socio-technical perspective that connects these practices to societal values. To successfully manage the risks of AI bias, we must operationalize these values and create new norms around how AI is built and deployed. This is the approach taken in the recent NIST SP 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, https://doi.org/10.6028/NIST.SP.1270.

    Apostol Vassilev leads a Research Team at NIST. His team focuses on a wide range of AI problems: AI bias identification and mitigation, meta learning with large language models for various NLP tasks, robustness and resilience of AI systems, applications of AI for mitigating cybersecurity attacks. Apostol’s scientific background is in mathematics (Ph.D.) and computer science (MS), but he is also interested in social aspects of using AI technology and advocates for a comprehensive socio-technical approach to evaluating AI’s impact on individuals and society.

  • 17:15

    Networking Reception

  • 18:15

    End of Day One

  • THIS EVENT STARTS AT 8:45 AM

  • 08:45

    Coffee & Registration

  • 09:45

    Trusted AI Stage: Chair Welcome

  • 10:00
    Kathy Baxter

    Our Role in Guiding Responsible AI Regulation

    Kathy Baxter - Principal Architect, Ethical AI Practice - Salesforce

    Our Role in Guiding Responsible AI Regulation

    Major technological advancements rarely begin as safe, inclusive, or focused on long-term societal impacts. As we have all seen, and some have painfully experienced, AI is no different in that regard. However, it is different in terms of the sheer scale, speed, and complexity of its impact, so it is unsurprising that there is significant effort to create standards, frameworks, and regulations. There are still many questions to be answered about how to standardize or regulate AI, but there are things, which Kathy will share, that every organization creating and implementing AI can do to prepare for upcoming regulations and create trustworthy technology.

    As a Principal Architect of Ethical AI Practice at Salesforce, Kathy develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. She collaborates and partners with external AI and ethics experts to continuously evolve Salesforce policies, practices, and products. Prior to Salesforce, she worked at Google, eBay, and Oracle in User Experience Research. She received her MS in Engineering Psychology and BS in Applied Psychology from the Georgia Institute of Technology. The second edition of her book, "Understanding Your Users," was published in May 2015. You can read about her current research at einstein.ai/ethics.

  • 10:35

    Summit Presentation - Best Practices for Data, Privacy and Security Management

  • 11:00

    Morning Break

  • 11:30
    John Lunsford

    Bringing Infrastructure into Present and Future Considerations of AI Mistrust

    John Lunsford - User Experience Researcher - Uber

    Bringing Infrastructure into Present and Future Considerations of AI Mistrust

    The ride-for-hire industry has been around for a long time. More than 800 years, in fact. Some of its earliest iterations incorporated rudimentary algorithmic decision-making into the activity of for-hire transit. Without discussions of fairness, these systems went on to structure modern society’s unequal transportation environment, allowing fairness to apply only to those already in power. As we develop AI solutions to address problems of inequality in access, we have to consider how the promise of fairness is mediated by the unfair systems that AI depends on to function. That interaction then becomes the foundation for trust, or mistrust, in AI’s deployment and its ability to address problems of fairness in social, political, economic, and material systems. John will share ways to approach tracking, documenting, and building AI fairness practices into landscapes that were not always designed to accommodate them.

    A User Experience Researcher in Safety, John earned his/their PhD in Communication from Cornell University in 2021, as well as an MS in Communication, an MA in Cultural Anthropology, and a BS in Political Science. A classical ethnographer by training, John has expanded an anthropological approach to encompass media studies, social psychology, political science, and urban design. It is from that mixed vantage that John considers how technology and social processes and structures shape one another, documenting for his PhD the legacy of for-hire transportation’s impact on the evolution of unequal access, its reflection of dominant societal priorities, and its impact on emerging rideshare and autonomous transportation systems. John’s current work in the realm of safety blends a passion for wicked problems with the demands of the real-world complexities impacting the transportation landscape.

  • 12:05

    Summit Presentation - Strengthening Customer Relations with AI

  • 12:30

    Lunch

  • 13:30
    Kinjal Basu

    Operationalizing Responsible AI in Large-Scale Organizations

    Kinjal Basu - Senior Staff Software Engineer - LinkedIn

    Operationalizing Responsible AI in Large-Scale Organizations

    Most large-scale organizations face challenges while scaling their infrastructure to support multiple teams across multiple product domains. More often than not, individual teams build systems and models to power their specific product areas, but because of the innate differences in the products and infrastructure support, the broad use of Responsible AI techniques poses a serious challenge for organizations.

    Each product can potentially have a different definition of “fairness” across different dimensions and hence require very different measurement and mitigation solutions. In this talk, we will focus on how we are building a scalable system on our machine learning platform that can not only measure but also mitigate unintended consequences of AI models across most products at LinkedIn.

    We will discuss how this system aims to seamlessly integrate into each and every AI pipeline and measure unfairness across different protected attributes. The system is flexible enough to incorporate different definitions of fairness as required by the product. Moreover, if and when algorithmic bias is detected, we also have a system to remove such bias through state-of-the-art AI algorithms across different notions of fairness. That being said, we are just starting; there is much more work to be done, and we don’t have all the answers yet.

    Finally, all of the above points to having good intent towards ethical practices, but the real win comes from the actual member impact after launching such bias-mitigated models in production. We will also discuss how we A/B test our models and systems once they are launched in production and incorporate those learnings to improve the overall member experience, thus connecting the overall intent and impact cycle. A sketch of the measurement step follows below.
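
    As a concrete illustration of the measurement step, here is a minimal sketch that computes per-group positive-prediction rates and a demographic-parity gap for one protected attribute. Demographic parity is only one of the fairness definitions the talk alludes to, and parity_gap is an illustrative name, not part of LinkedIn's platform.

    ```python
    # Minimal illustrative demographic-parity measurement for one attribute.
    from collections import defaultdict

    def parity_gap(predictions, groups):
        """Return (max rate gap across groups, per-group rates); 0.0 = parity."""
        pos, tot = defaultdict(int), defaultdict(int)
        for y_hat, g in zip(predictions, groups):
            tot[g] += 1
            pos[g] += int(y_hat == 1)
        rates = {g: pos[g] / tot[g] for g in tot}
        return max(rates.values()) - min(rates.values()), rates

    gap, rates = parity_gap(
        predictions=[1, 0, 1, 1, 0, 0, 1, 0],
        groups=["a", "a", "a", "a", "b", "b", "b", "b"],
    )
    print(rates, f"gap={gap:.2f}")  # {'a': 0.75, 'b': 0.25} gap=0.50
    ```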

    Kinjal is currently a Sr. Staff Software Engineer and the tech lead for Responsible AI at LinkedIn, focusing on challenging problems in fairness, explainability, and privacy. He leads several initiatives across different product applications towards making LinkedIn a responsible and equitable platform. He received his Ph.D. in Statistics from Stanford University with a best thesis award and has published papers in many top journals and conferences. He serves as a reviewer and program committee member at multiple top venues, such as NeurIPS, ICML, KDD, FAccT, and WWW.

  • 14:05

    Trusted AI at Scale

  • 14:40
    Supreet Kaur

    Closing General Session: Complexity vs Simplicity in ML and AI Projects

    Supreet Kaur - Assistant Vice President - Morgan Stanley

    MLOps/Trusted AI Summit: Closing General Session: Complexity vs Simplicity in ML and AI Projects

    Women in AI Reception: Pivoting into AI

    Supreet is an AVP at Morgan Stanley. Prior to Morgan Stanley, she was a management consultant at ZS Associates, where she automated different workflows and built data-driven solutions for Fortune 500 clients. She is extremely passionate about technology and AI, and hence started her own community, DataBuzz, where she engages the audience by sharing the latest AI and tech trends and mentors people who want to pivot into the field.

  • 15:00

    End of Summit
