09 - 10 November 2022

Deep Learning Summit Schedule

Toronto AI Summit



  • 08:00

    COFFEE & REGISTRATION

  • 09:00

    WELCOME NOTE

  • 09:15
    Rishab Goel

    Deep Learning and Graphs

    Rishab Goel - Machine Learning Engineer - Twitter

  • 09:45

    Deep Learning Infrastructure

  • 10:10
    Kevin F. Li

    Deep Learning and Cognition Modelling

    Kevin F. Li - Machine Learning Engineer - Princess Margaret Cancer Centre

    Existing models for classifying and interpreting cognitive intelligence based on pediatric brain images are usually derived from low-dimensional statistical analysis. While such models are computationally efficient, they use oversimplified representations of a brain's features. They neglect essential brain structure information, such as regions of interest (ROI) and high-density segmentation features. Therefore, we develop a deep learning framework to understand and model cognitive intelligence using CT brain images.

    Our data pipeline provides over 600 billion parameters. Such high-density data requires a novel parallel computing framework for tuning and training tasks. Our framework can tractably handle these computational requirements by utilizing 1) an extensive grid-search fitting-training scheme, 2) automated learning that optimizes deep neural network structure, 3) Bayesian variational inference that interprets uncertainty during the learning process, and 4) hardware configurations for both CPU and GPU environments. This framework is adaptable and particularly useful for the high- and low-dimensional datasets found in many cognitive modelling tasks.

    We will demonstrate the predictive and descriptive capability of such a deep learning framework. One highlight of this research is that we have successfully modelled the uncertainty of the latent intelligence features using ELBO optimization, which transforms the integral of a joint distribution into an expectation.
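    For readers unfamiliar with the trick, the evidence lower bound (ELBO) replaces the intractable integral over the latent features with an expectation under an approximate posterior q(z). A generic sketch of the identity (not necessarily the authors' exact formulation):

```latex
\log p(x) = \log \int p(x, z)\, dz
          = \log \mathbb{E}_{q(z)}\!\left[ \frac{p(x, z)}{q(z)} \right]
          \ge \mathbb{E}_{q(z)}\!\left[ \log p(x, z) - \log q(z) \right]
          = \mathrm{ELBO}(q)
```

    The inequality follows from Jensen's inequality; maximizing the ELBO over q tightens the bound while quantifying uncertainty in the latent features.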

  • 10:40

    MORNING NETWORKING BREAK

  • 11:10

    Panel Discussion: ML System Design for Constant Experimentation

  • Sharon Shahrokhi Tehrani

    Panelist

    Sharon Shahrokhi Tehrani - Product Manager, Machine Intelligence Retention - CBC

  • Olga Tsubiks

    Panelist

    Olga Tsubiks - Director, Strategic Analytics and Data Science - RBC

  • 11:50
    Billy Porter

    How NLP Shapes Social Dynamics

    Billy Porter - Software Engineer - Google

    Artificial intelligence is already affecting human behavior. People are worried about targeted advertising and social media addiction, but that's not what they should be worried about. Currently, AI is affecting how we think and driving society apart, both culturally and politically.

    Free speech versus hate speech: it's very hard to define. I might look at one thing and say "that's clearly sarcasm" whereas you might find it very offensive. Since the line is so hard to draw, people naturally take their definition from what is allowed to be posted online. AI models determine what's allowed to be posted online, and thus shape how we think.

    But it doesn't end there: social media sites, search engines, and the like show you content you are more likely to engage with. When investigating news topics, you're likely to encounter only information that aligns with you politically, as these AI models take advantage of confirmation bias.

    Now, armed with this information that backs up your current opinion, you move to social media. On social media, the content you are more likely to engage with is information you disagree with (it feels like social media is only arguments these days).

    This constant echo chamber and argumentation drives us further apart. The country is more divided than ever, not because of politicians' harmful rhetoric, but because of AI.

  • 12:20
    Amir Raza

    Deep Learning in Cyber Security

    Amir Raza - Applied Data Scientist - Elpha Secure

  • 12:50

    LUNCH

  • 14:00
    Vinay Narayana

    The MLOps Journey: Machine Learning as an Engineering Discipline

    Vinay Narayana - Sr. Director of AI/ML Engineering - Levi Strauss & Co.

    The current situation at most companies can be summarized as follows:

    · Every team has its own unique way of testing and productionizing a model

    · Lack of a centralized feature store

    · Severe data quality issues

    · Limited to no data or model monitoring in production (or test)

    · Limited to no operational readiness

    · Fragmented collaboration with partner teams

    This presentation takes the use case of a typical data science org and shows how applying software engineering principles can improve and resolve all of the typical scenarios above.

    A vision that all data science teams could aspire to involves the following:

    · access to reliable data (with SLOs),

    · automated data processing, model training, evaluation and validation,

    · productionize the model either for batch or online serving,

    · continuously monitor data and model in production,

    · use a trigger-based mechanism to automatically train, deliver and deploy in production

    Achieving this vision requires putting multiple goals in place, including the following:

    · Transform and standardize on how we do MLOps across all teams

    · Leverage a centralized feature store and remove any training or serving skew

    · All data produced must be treated as a product

    · Enable comprehensive data and model monitoring capabilities

    · Follow standard tiered approach model for implementing operations readiness

    · Lastly, nurture relationships and collaborate with data engineering, central infrastructure teams, and other partner teams

    The rest of the presentation will go into detail on how to implement each of the above goals, along with a few high-level architectural patterns.
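    To make the trigger-based idea concrete, here is a minimal, hypothetical sketch (all names and the threshold are illustrative, not the speaker's actual system) of a monitoring check that fires a retrain when a live feature drifts from its training baseline:

```python
import statistics

# Assumed tolerance: how far the live mean may move, in baseline std units.
DRIFT_THRESHOLD = 0.3

def drift_score(baseline: list[float], live: list[float]) -> float:
    """Shift of the live mean from the baseline mean, in baseline std units."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    return abs(statistics.mean(live) - base_mean) / base_std

def should_retrain(baseline: list[float], live: list[float],
                   threshold: float = DRIFT_THRESHOLD) -> bool:
    """Trigger condition: retrain when the monitored feature has drifted."""
    return drift_score(baseline, live) > threshold

# Training-time statistics for one monitored feature vs. two live windows.
baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
stable_live = [10.1, 9.9, 10.3]
drifted_live = [14.0, 15.2, 14.8]

print(should_retrain(baseline, stable_live))   # keep the current model
print(should_retrain(baseline, drifted_live))  # fire the retraining pipeline
```

    In a real system the drift test would be richer (e.g., a population stability index per feature) and the trigger would enqueue a training pipeline rather than return a boolean, but the control flow is the same.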

  • 14:30
    Hakimeh Purmehdi

    Deep Learning in Telecommunications

    Hakimeh Purmehdi - Senior Data Scientist - Ericsson

    Hakimeh Purmehdi is a senior data scientist at the Ericsson Global Artificial Intelligence Accelerator, where she leads innovative AI/ML solutions for future wireless communication networks. She received her Ph.D. in electrical engineering from the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada, and completed a postdoc in AI and image processing at the Radiology Department, University of Alberta. Before joining Ericsson, she co-founded Corowave, a startup developing non-contact bio-signal monitoring, and was a research engineer at Microsoft Research (MSR). Her research focuses on the intersection of wireless communication (5G and beyond), AI solutions (such as online learning, federated learning, reinforcement learning, and deep learning), and biotech.

  • 15:00

    AFTERNOON NETWORKING BREAK

  • 15:45
    Mary Jane Dykeman

    AI/ML Risk Assessment

    Mary Jane Dykeman - Managing Partner - INQ Law

    Mary Jane Dykeman is a managing partner at INQ Law. In addition to data law, she is a long-standing health lawyer. Her data practice focuses on privacy, artificial intelligence (AI), cyber preparedness and response, and data governance. She regularly advises on use and disclosure of identifiable and de-identified data. Mary Jane applies a strategic, risk and innovation lens to data and emerging technologies. She helps clients identify the data they hold, understand how to use it within the law, and how to innovate responsibly to improve patient care and health system efficiencies. In her health law practice, Mary Jane focuses on clinical and enterprise risk, privacy and information management, health research, governance and more. She currently acts as VP Legal, Chief Legal/Risk to the Centre for Addiction and Mental Health, home of the Krembil Centre for Neuroinformatics, and was instrumental in the development of Ontario’s health privacy legislation.

    Mary Jane regularly consults on large data initiatives and use of data for health research, quality, and health system planning. Her consulting work extends to modernizing privacy legislation and digital societies, and she works with Boards, CEOs and CISOs, as well as innovation teams on the emerging risks, trends and opportunities in data. Mary Jane regularly speaks on AI, cyber risk and how to better engage and build trust with clients and customers whose data is at play. She is also a frequent speaker and writer on health law and data law. Mary Jane is co-founder of Canari AI, an AI risk impact solution.

  • 16:10
    Rohit Saha

    Computer Vision and Deep Learning

    Rohit Saha - Applied Research Scientist - Georgian

    In recent years, we have seen amazing results in artificial intelligence and machine learning owing to the emergence of models such as transformers and pretrained language models. Despite the astounding results published in academic papers, there is still a lot of ambiguity and many challenges when it comes to deploying these models in industry, because: 1) troubleshooting, training, and maintaining these models is very time- and cost-intensive due to their large size and complexity; and 2) there is not yet enough clarity about when the advantages of these models outweigh their challenges and when they should be preferred over classical ML models. These challenges are even more severe for small and mid-size companies that do not have access to huge compute resources and infrastructure. In this talk, we discuss these challenges and share our findings and recommendations from working on real-world examples at SPINS, a company that provides industry-leading CPG Product Intelligence. In particular, we describe how we leveraged state-of-the-art language models to seamlessly automate part of SPINS' workflow and drive substantial business outcomes. We share findings from our experimentation and provide insights on when one should use these gigantic models instead of classic ML models. Considering that our use cases present all sorts of challenges, from an ill-defined label space to a huge number of classes (~86,000) and massive data imbalance, we believe our findings and recommendations can be applied to most real-world settings. We hope the learnings from this talk help you solve your own problems more effectively and efficiently!

  • 16:40
    Sharon Shahrokhi Tehrani

    Overcoming 'Black Box' Model Challenges

    Sharon Shahrokhi Tehrani - Product Manager, Machine Intelligence Retention - CBC

    In machine learning and AI, black box models are built from input data by algorithms (e.g., deep learning algorithms). Although the input variables are known, the complexity of the functions and the joint relationships between variables make it challenging for data scientists and ML developers to interpret the processes inside the box and explain the ultimate decision. This lack of interpretability makes it hard to trust black box models and creates barriers to adopting ML and AI in numerous domains. The short answer to the question "How do we overcome black box model challenges?" is explainable AI. Designing AI and ML models with explainability techniques improves understanding, increases trust and transparency, and helps avoid bias and discrimination arising from data quality issues.
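    As one concrete illustration of an explainability technique, the sketch below hand-rolls permutation importance on a toy linear model: shuffle one feature at a time and measure how much the prediction error grows. This is a generic example, not the speaker's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on x0, weakly on x1, not at all on x2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit an ordinary least-squares model as the stand-in "black box".
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

baseline = mse(X, y, w)

# Permutation importance: how much does the error grow when one feature
# column is shuffled, breaking its relationship with the target?
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(mse(Xp, y, w) - baseline)

print(importances)  # x0 should dominate, x2 should be near zero
```

    Features whose shuffling barely changes the error contribute little to the model's decisions, which is exactly the kind of evidence that builds trust in an otherwise opaque model.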

  • 17:10

    CLOSING REMARKS

  • 17:20

    NETWORKING DRINKS RECEPTION

  • 08:00

    COFFEE & REGISTRATION

  • 09:00

    WELCOME NOTE

  • 09:15
    Jekaterina Novikova

    Deep Learning Methods for Mental Health Prediction

    Jekaterina Novikova - Director of Machine Learning - WinterLight Labs

    Language and Speech Processing: From Human-Robot Interaction to Alzheimer’s Prediction

    Natural language and speech processing is a thriving area of AI that is becoming more and more important. Almost everyone has been exposed in one way or another to technology that employs natural language processing, be it the virtual assistant Siri or a simple automated phone answering system. The range of possible applications able to create value from natural language processing is much broader, however, and includes such seemingly unrelated areas as interaction with humanoid robots and the detection of dementia. In this talk, Jekaterina Novikova, Director of Machine Learning at Winterlight Labs, will discuss how AI researchers use natural language processing in these two fields.

    Jekaterina Novikova is Director of Machine Learning at Winterlight Labs, a Toronto-based company developing a novel AI-based diagnostic platform that can objectively assess and monitor cognitive health. Jekaterina's work explores artificial intelligence in the context of language understanding, characterising a speaker's cognitive, acoustic and linguistic state, as well as human-machine interaction. She received a PhD in Computer Science from the University of Bath, UK, in 2015. More information on her research can be found at: http://jeknov.tumblr.com

  • 09:45

    Generative Adversarial Networks

  • 10:10
    Stephen O'Farrell

    BuzzWords - How Bumble Does Multilingual Topic Modelling at Scale

    Stephen O'Farrell - Machine Learning Scientist - Bumble

    With the abundance of free-form text data available nowadays, topic modelling has become a fundamental tool for understanding the key issues being discussed online. We found the state-of-the-art topic modelling libraries either too naive or too slow for the amount of data a company like Bumble deals with, so we decided to develop our own solution. BuzzWords runs entirely on GPU using BERT-based models, meaning it can perform topic modelling on multilingual datasets of millions of data points, giving us significantly faster training times compared to other prominent topic modelling libraries.
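    BuzzWords itself runs on GPU over BERT embeddings; as a rough CPU-only illustration of the keyword-extraction step used in BERTopic-style pipelines, the sketch below implements class-based TF-IDF over hand-assigned clusters (in a real pipeline the clusters would come from clustering document embeddings, and this is not BuzzWords' actual code):

```python
import math
from collections import Counter

# Toy corpus with clusters assigned by hand; in practice these come from
# clustering BERT embeddings of the documents.
clusters = {
    "pets": ["my dog loves the park", "cats sleep all day", "dog food prices"],
    "finance": ["stocks fell sharply", "bond yields rose", "stocks rally again"],
}

# Concatenate each cluster into one pseudo-document and count its terms.
counts = {c: Counter(" ".join(docs).split()) for c, docs in clusters.items()}
total_words = sum(sum(c.values()) for c in counts.values())
avg_words = total_words / len(counts)

def ctfidf(cluster: str, word: str) -> float:
    """Class-based TF-IDF: term frequency within the cluster, discounted by
    how common the word is across all clusters."""
    tf = counts[cluster][word] / sum(counts[cluster].values())
    cross_count = sum(c[word] for c in counts.values())
    return tf * math.log(1 + avg_words / cross_count)

def top_words(cluster: str, k: int = 3) -> list[str]:
    return sorted(counts[cluster], key=lambda w: ctfidf(cluster, w),
                  reverse=True)[:k]

print(top_words("pets"))
print(top_words("finance"))
```

    Scoring words per cluster rather than per document is what lets this step run over one pseudo-document per topic, which scales well once the expensive embedding and clustering work is done on GPU.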

    Stephen O’Farrell is a machine learning scientist at Bumble, where, as a member of the Integrity & Safety team, he works to ensure user safety across all of Bumble’s platforms. His work generally deals with NLP and computer vision tasks, deploying deep learning models at scale across the organisation. He graduated with an MSc in Data Science and a BSc in Computational Thinking, both from Maynooth University, Ireland.

  • 10:40

    MORNING NETWORKING BREAK

  • 11:10

    Discussion Group: Data-Driven Modeling Approaches

  • Akash Shetty

    Facilitator

    Akash Shetty - Data Scientist - ApplyBoard

  • Richard Boire

    Facilitator

    Richard Boire - Professor - Seneca College

  • 11:50
    Shreshth Gandhi

    AI Workbench: Predicting Testable Biological Hypotheses with Deep Learning

    Shreshth Gandhi - Machine Learning Lead and Lead Scientist - Deep Genomics

    Deep Genomics combines artificial intelligence (AI) and RNA biology to program and prioritize transformational AI-enabled therapies for almost any gene in any genetic condition. Our proprietary platform, called the AI Workbench, allows Deep Genomics to decode vast amounts of data on RNA biology, identify novel targets for genetic diseases, and produce therapeutic programs with a high success rate. In this talk, I'll outline our end-to-end drug development process with AI at its core, and give examples of some recent breakthroughs that have allowed us to make accurate predictions of variant effects and rapid identification of the active and potent therapeutic compounds.

    Shreshth Gandhi leads the Machine Learning group at Deep Genomics, a biotechnology company that uses ML to program and prioritize transformational RNA therapeutics for genetic diseases. He received his master's degree from the University of Toronto, where his research focused on developing deep learning predictors of RNA-protein binding. At Deep Genomics he continued this work at the intersection of deep learning and genomics and co-developed the splicing predictor used to identify that the ATP7B variant c.1934T>G (p.Met645Arg) causes Wilson disease by altering splicing.

  • 12:30

    LUNCH

  • 13:35
    Akash Shetty

    Deep Learning and Explainable AI

    Akash Shetty - Data Scientist - ApplyBoard

  • 14:00

    Closing Panel Discussion: Bill C-27 Potential Impact on AI and Implications of the EU AI Act

  • Mary Jane Dykeman

    Panelist

    Mary Jane Dykeman - Managing Partner - INQ Law

  • 14:40

    CLOSING REMARKS

  • 14:50

    END OF SUMMIT
