Schedule

08:15

REGISTRATION

09:00

WELCOME

DEEP LEARNING & HEALTHCARE IN PRACTICE

09:15

Michael Kuo, UCLA

Towards the Development of Clinically Relevant Applications of Deep Learning in Healthcare

Medicine, by definition, is an information science: it requires the capacity to actively acquire individualized, context-specific data, and to then iteratively evaluate, assimilate and refine this information against a vast database of medical knowledge in order to arrive at a small solution space and a corresponding set of implementable policies. Deep Learning, as a transformational tool, is thus extremely well suited to medical applications; unfortunately, fundamental understanding of the domain, and of where and how Deep Learning can be applied in a clinically relevant manner, still lags. In this talk I will share my group’s experience over the past decade developing more advanced and clinically relevant computational approaches in cancer that integrate large and diverse multi-scale biological data sets, including medical imaging, tissue, genomics, and clinical data, in order to predict an individual patient’s cancer genomics and likelihood of response to a particular therapy using only their medical imaging data. I will discuss how we are incorporating Deep Learning in our approaches and highlight other areas of future growth and opportunity in healthcare where Deep Learning can potentially have great impact.

Dr. Kuo received his Medical Degree from Baylor College of Medicine and did his clinical training in Diagnostic Radiology at Stanford University, where he also completed a clinical fellowship in Cardiovascular and Interventional Radiology. He served as Assistant Professor in the Department of Radiology at the University of California-San Diego from 2003-2009. In 2009 he moved to the University of California-Los Angeles, where he is an Associate Professor in the Departments of Radiology, Pathology and Bioengineering and serves as Director of both the Radiogenomics and Radiology-Pathology Programs. Dr. Kuo is an international leader in the field of Radiogenomics, where he has published seminal foundational papers. His principal area of research focus is in the field of radiogenomics, where his group applies integrative computational and biological approaches in order to derive actionable clinical insights and tools centered around patient stratification and therapeutic response prediction by leveraging large multi-scale relational data sets including clinical outcomes, clinical imaging, tissue, cellular and subcellular biological data.

09:35

Neil Lawrence, University of Sheffield

Challenges for Delivering Machine Learning in Health

The wealth of data availability presents new opportunities in health, but also challenges. In this talk we will focus on three challenges for machine learning in health: 1. Paradoxes of the Data Society, 2. Quantifying the Value of Data, 3. Privacy, loss of control, marginalization. Each of these challenges has particular implications for machine learning. The paradoxes relate to our evolving relationship with data and our changing expectations. Quantifying value is vital for accounting for the influence of data in our new digital economies, and issues of privacy and loss of control are fundamental to how our pre-existing rights evolve as the digital world encroaches more closely on the physical. One of the goals of the research community should be to provide the technological tooling to address these challenges and ensure that we are empowered to avoid the pitfalls of the data-driven society, allowing us to reap the benefits of machine learning in applications from personalized health to health in the developing world.

Neil Lawrence is a Professor of Machine Learning at the University of Sheffield, currently on leave of absence at Amazon, Cambridge. His main technical research interest is machine learning through probabilistic models. He focuses on both the algorithmic side of these models and their application. He has a particular interest in applications in personalized health and in the developing world. Neil is well known for his work with Gaussian processes, and has proposed Gaussian process variants of many of the successful deep learning architectures. He is also an advocate of the ideas behind “Open Data Science” and active in public awareness (see https://www.theguardian.com/profile/neil-lawrence) and community organization. He has been both program chair and general chair of the NIPS Conference.

AI IN DRUG DISCOVERY & DEVELOPMENT

10:05

Oladimeji Farri, Philips Research

Deep Learning-based Diagnostic Inferencing and Clinical Paraphrasing

Deep learning has emerged as the preferred approach for machine learning when annotated corpora and hand-crafted features are limited or unavailable. Beyond the success of convolutional and recurrent neural networks in addressing typical NLP tasks, e.g. syntactic parsing or sentiment analysis, integrating “memory” and “attention” in neural networks while leveraging relevant knowledge sources yields results that match or outperform the state of the art for analysis of semantically rich narratives in the clinical domain. My presentation highlights some of our work at the AI Lab in Philips Research NA, in which we implement attention-based and memory networks to perform clinical paraphrasing and diagnostic reasoning.

Oladimeji (Dimeji) Farri received his PhD in Health Informatics from the University of Minnesota, and MBBS (Medicine and Surgery) from the University of Ibadan, Nigeria, in 2012 and 2005 respectively. He is currently a Senior Research Scientist at Philips Research – North America (PRNA) in Cambridge, Massachusetts, where he leads the Artificial Intelligence (AI) Lab. His interests are in clinical NLP, text analysis, question answering and dialog systems to address medical dilemmas experienced by patients/consumers and healthcare providers. His recent work includes the use of deep learning in offering solutions for clinical decision support and patient engagement.

10:25

COFFEE

11:10

Polina Mamoshina, Insilico Medicine

Application of Deep Neural Networks to Biomarker Development

With the almost exponential growth of transcriptomics data, it is now possible and even necessary to apply sophisticated machine learning techniques to the field. Applications of deep neural networks combined with domain expertise can help optimize the biomarker development process through intelligent analysis of high-throughput screening experiments and large repositories of biomedical data. This presentation will cover aspects of creating multi-modal biomarkers of human age trained on human blood biochemistry and transcriptomics data.

Polina Mamoshina is a senior research scientist at Insilico Medicine, Inc, a Baltimore-based bioinformatics and deep learning company focused on reinventing drug discovery and biomarker development, and a member of the computational biology team of the Oxford University Computer Science Department. Polina graduated from the Department of Genetics of Moscow State University. She was one of the winners of GeneHack, a Russian nationwide 48-hour hackathon on bioinformatics at the Moscow Institute of Physics and Technology attended by hundreds of young bioinformaticians. Polina is involved in multiple deep learning projects at the Pharmaceutical Artificial Intelligence division of Insilico Medicine, working on the drug discovery engine and developing biochemistry, transcriptome, and cell-free nucleic acid-based biomarkers of aging and disease. She recently co-authored seven academic papers in peer-reviewed journals.

DEEP LEARNING IN MEDICAL IMAGING

11:35

Ben Glocker, Imperial College London

Deep Learning in Medical Imaging - Successes and Challenges

Machines capable of analysing and interpreting medical scans with super-human performance are within reach. Deep learning, in particular, has emerged as a promising tool in our work on automatically detecting brain damage. But getting from the lab into clinical practice comes with great challenges. How do we know when the machine gets it wrong? Can we predict failure, and can we make the machine robust to changes in the clinical data? We will discuss some of our most recent work that aims to address these critical issues and demonstrate our latest results on deep learning for analysing medical scans.

Ben Glocker is a Lecturer in Medical Image Computing at the Department of Computing, Imperial College London. He holds a PhD from TU Munich, and was a post-doc at Microsoft Research Cambridge and a research fellow at the University of Cambridge. He received several awards for his work on medical image analysis including the Francois Erbsman Prize, the Werner von Siemens Excellence Award, and an honorary mention for the Cor Baayen Award. Ben is the deputy head of the BioMedIA group and his research focuses on applying machine learning techniques for advanced biomedical image computing and medical computer vision.

12:00

Kyung Hyun Sung, UCLA

Quantitative MRI-Driven Deep Learning

Deep Learning (DL) has recently garnered great attention because of its superior performance in image recognition and classification. One of the main promises of DL is to replace handcrafted imaging features with efficient algorithms for hierarchical feature extraction. Many studies have shown DL is a powerful engine for producing “actionable results” in unstructured big data. We present deep learning methods to effectively distinguish between indolent and clinically significant prostatic carcinoma using multi-parametric MRI (mp-MRI). The main contributions include i) constructing DL frameworks that avoid massive training-data requirements through pre-trained convolutional neural network (CNN) models and ii) applying the proposed DL framework to the computerized analysis of prostate multi-parametric MRI for improved cancer classification.

Dr. Sung received the M.S. and Ph.D. degrees in Electrical Engineering from the University of Southern California, Los Angeles, in 2005 and 2008, respectively. From 2008 to 2012, he completed his postdoctoral training at Stanford in the Department of Radiology, and in 2012 he joined the University of California, Los Angeles (UCLA) Department of Radiological Sciences as an Assistant Professor. His research interest is in developing fast and reliable MRI methods that can provide improved diagnostic contrast and useful information. In particular, his group (http://mrrl.ucla.edu/meet-our-team/sung-lab/) is currently focused on developing advanced quantitative MRI techniques for early diagnosis, treatment guidance, and therapeutic response assessment for oncologic and cardiac applications.

12:20

LUNCH

13:30

PANEL: How to Overcome Challenges Faced in Medical Imaging Databases

14:10

Anastasia Georgievskaya, Beauty.AI

MODERATOR

Deep Learning for Analyzing Perception of Human Appearance in Healthcare and Beauty

Deep learning techniques can be used to extract facial imaging biomarkers of human health status and to track the effects of cosmetic interventions. Here we present a set of tools for analysis of perception of human age and health status. We also demonstrate that when certain population groups are under-represented in the training sets, these populations are left out or may be subject to higher error rates. This is why Youth Laboratories launched Diversity.AI, a think tank for anti-discrimination by the deep-learned systems. The presentation describes the strategies for evaluating human appearance for machine-human interaction and reveals the risks and dangers of deep-learned biomarkers.

Anastasia Georgievskaya is the co-founder of and a research scientist at Youth Laboratories, a company developing tools to study aging and discover effective anti-aging interventions using advances in machine vision and artificial intelligence. She helped organize Beauty.AI, the first beauty competition judged by a robot jury, and developed RYNKL, an app for tracking age-related facial changes and testing the effectiveness of various treatments. Anastasia has a degree in bioengineering and bioinformatics from Moscow State University. She won numerous math and bioinformatics competitions and volunteered for some of the most prestigious companies in aging research, including Insilico Medicine.

Laurens Hogeweg, COSMONiO

PANELLIST

Dr. Laurens Hogeweg completed his PhD on medical image processing using machine learning in 2013 at the Radboud University Nijmegen, after acquiring MSc degrees in both medicine and biomedical technology from the Rijksuniversiteit Groningen. After his PhD he moved to industry and developed cloud-based solutions for processing large image datasets. In 2016 he joined COSMONiO as a research scientist working on deep learning for image processing. His research interest is in the area of learning from small data.

Jorge Cardoso, UCL

PANELLIST

Jorge has a BSc in Biomedical Engineering (2006) and an MSc in Medical Electronics and Signal Processing for Biomedical Engineering (2008) from the Universidade do Minho, Portugal, followed by a PhD (2008-2012) and PostDoc (2012-2015) in medical image analysis and biomarker development between CMIC and the Dementia Research Centre at UCL. In June 2015 he was appointed Lecturer in Quantitative Neuroradiology at the Translational Imaging Group, part of CMIC, in collaboration with the National Hospital for Neurology and Neurosurgery, working on translating and integrating quantitative biomarkers and automated image analysis techniques within the clinical environment. His research explores novel highly accurate and robust machine learning techniques to segment, parcellate and localize different types of tissues using anatomical, microstructural and functional images.

14:10

Reza Khorshidi, AIG

PANELLIST

Reza is currently the Chief Scientist at AIG (leading the company's AI research and InsurTech innovations globally), as well as one of the co-leads of the Deep Medicine program at the University of Oxford's Martin School (focused on healthcare innovation and the use of AI in digital health ecosystems). He obtained his DPhil (i.e., PhD) in computational neuroscience and machine learning from the University of Oxford in 2010, and has since been leading teams, research, and disruptive innovation projects in both academia and industry.

DEEP LEARNING FOR DIAGNOSTICS

14:10

Max Little, Aston University

Machine Learning in Healthcare: Why We Are Not Quite There Yet

Machine learning promises to revolutionise medical applications such as diagnostics and clinimetrics. Recent progress in algorithms such as deep learning has pushed performance to human-level competence in some applications. However, these algorithms can give meaningless predictions for some kinds of data where humans would not. These confounded predictions could be perilous in mission-critical applications such as healthcare. I will argue that we will have to address difficult issues such as the nature of sampling and data collection from an imperfect world, the accountability of complex predictors, and the need for explanatory rather than just predictive power.

Prof. Max Little is an applied mathematician and statistician. He is a leading expert on clinical signal processing and machine learning algorithms that use consumer technologies such as telephones and smartphones to detect the symptoms of Parkinson's disease remotely. Along with being an Associate Professor of Mathematics at Aston University, he is also a Senior Research Fellow at the University of Oxford and a Visiting Associate Professor at MIT.

14:30

Pearse Keane, Moorfields Eye Hospital

Artificial Intelligence and Optical Coherence Tomography - Reinventing the Eye Exam?

Ophthalmology is among the most technology-driven of all the medical specialties, with treatments utilizing high-spec medical lasers and advanced microsurgical techniques, and diagnostics involving ultra-high resolution imaging. Ophthalmology is also at the forefront of many trailblazing research areas in healthcare, such as stem cell and gene therapies.

Moorfields Eye Hospital in London is the oldest eye hospital in the world. Every year, >600,000 patients attend Moorfields - more than double the number of the largest eye hospitals in North America. Together with the adjacent UCL Institute of Ophthalmology, Moorfields is among the largest centres for vision science research in the world. In July 2016, Moorfields announced a formal collaboration and data sharing agreement with DeepMind Health. This collaboration involves the sharing of >1,000,000 anonymised retinal scans with DeepMind to allow for the automated diagnosis of diseases such as age-related macular degeneration (AMD) and diabetic retinopathy (DR).

In my presentation, I will describe the motivation - and urgent need - to apply deep learning to ophthalmology, the processes required to establish a research collaboration between the NHS and a company like DeepMind, the goals of our research, and finally, why I believe that ophthalmology could be the first branch of medicine to be fundamentally reinvented through the application of deep learning.

Pearse A. Keane, MD, FRCOphth, is a consultant ophthalmologist at Moorfields Eye Hospital, London and an NIHR Clinician Scientist, based at the Institute of Ophthalmology, University College London (UCL). Dr Keane specialises in applied ophthalmic research, with a particular interest in ocular imaging. He joined Moorfields in 2010; prior to this, he carried out retinal imaging research at the Doheny Eye Institute in Los Angeles. He is originally from Ireland and received his medical degree from University College Dublin (UCD).

In January 2015, he was awarded a prestigious "Clinician Scientist” award from the National Institute of Health Research (NIHR) in the United Kingdom (UK) - the first ophthalmologist in the UK to receive such an award. His remit from this award is to explore the potential of new medical technologies and innovation in ophthalmology, ranging from advanced imaging to artificial intelligence to virtual and augmented realities. With this remit in mind, he recently established a collaboration between Moorfields Eye Hospital and Google DeepMind to apply deep learning algorithms to ocular imaging. This collaboration will initially involve the application of deep learning to approximately 1 million retinal optical coherence tomography (OCT) images and fundus photographs.

WEARABLES IN HEALTHCARE

14:50

Johanna Ernst, University of Oxford

10,000 Steps; So What? Are Wearable Technologies the Future of Clinical Trials?

Wearable technologies such as activity trackers have the potential to speed up the evaluation of medical treatments and reduce the costs associated with their development. Supporters of the use of wearable technologies in clinical trial monitoring argue that access to continuous, objective data may allow for a faster, more detailed understanding of the impact of a treatment, even in situations that are currently difficult to assess (e.g. rigidity in Parkinson’s patients). There is, however, also an appreciation of the challenges associated with the use of unregulated devices, including, amongst others, reliability, interchangeability and data security. Even if the latter challenges were overcome, there is a need to understand how best to utilise data such as step counts or energy expenditure in a meaningful manner. This session aims to assess the current use of general wellness tools within clinical trials, potential benefits and limitations of their use, as well as the role that ‘deep medicine’ can play in overcoming these limitations.

Johanna Ernst is a DPhil student at the University of Oxford working in affiliation with the Institute for Biomedical Engineering and the George Institute for Global Health, where she is involved with the center’s Program on Deep Medicine. As part of her research, Johanna explores the use of wearable technologies for heart failure risk-stratification. She previously worked as a visiting researcher at Misfit Inc., a world-leading wearable technology developer, where she investigated the use of commercially available physical activity monitors for clinical trial monitoring.

15:10

COFFEE

DEEP LEARNING IN NEUROSCIENCE

15:55

Bashar Awwad Shiekh Hasan, University of Newcastle

Micro EMG: Imaging the Inner Structure of the Human Muscle Guided by a Deep Learning Approach to Muscle Fiber Localization

The presentation will discuss our latest development of a multi-channel electromyography needle. Using flexible electrode technology, 64 electrodes are placed in a custom-designed pattern to maximize the information available for the localization of muscle fibres in humans. After the motor units are isolated, an unsupervised stacked denoising auto-encoder is employed to further decompose each motor unit into its constituent muscle fibres, localizing over 50 fibres simultaneously with 100-micrometre accuracy. This will potentially revolutionize neurophysiology and the diagnosis of neuromuscular disease.

Dr. Awwad Shiekh Hasan is a senior research associate in computational neuroscience at Newcastle University. His research is focused on the use of computational modelling to expand our understanding of the fundamental neural mechanisms of cognition and perception, and how that understanding can be translated into action. He has worked in several interdisciplinary areas including brain-computer interfaces, neural imaging, and most recently the development of medical devices. He holds a British patent and has published extensively in leading scientific outlets in neuroscience and machine learning.

TRANSFER LEARNING IN HEALTHCARE DATA

16:20

Gilles Wainrib, Owkin

Collaborative Artificial Intelligence for Healthcare Data

Every day, new deep learning algorithms are trained to solve specific tasks, such as medical image classification. What if we could share and connect those algorithms and create the conditions for cross-fertilization between these powerful artificial intelligence systems? In this talk, we will discuss the fundamental role of transfer learning in fostering the emergence of collaborative artificial intelligence, and show how it can bring the power of big-data-trained deep learning algorithms into the world of medical not-so-big data. As an illustration, we will present a new platform for medical image recognition based on deep transfer learning and collaborative AI.

Gilles Wainrib is Chief Scientific Officer and co-founder at Owkin, where he leads the data science team. He holds a PhD in applied mathematics from Ecole Polytechnique and was a former researcher at Stanford University and Ecole Normale Supérieure in Paris, working on machine learning algorithms and their applications in biology and medicine. He is the author of 30+ scientific publications in mathematics, physics, biology, medicine and computer science.

PRECISION AND PERSONALISED MEDICINE

16:40

Michel Vandenberghe, AstraZeneca

Clinical Relevance of Deep Learning to Facilitate the Diagnosis of Cancer Tissue Biomarkers

Tissue biomarker scoring by pathologists is central to defining the appropriate therapy for patients with cancer. However, inter-pathologist variability in the interpretation of ambiguous cases can affect diagnostic accuracy. Modern artificial intelligence methods such as deep learning have the potential to supplement pathologist expertise to ensure consistent diagnostic accuracy. We developed a computational approach based on convolutional neural networks that automatically scores HER2, an immunohistochemistry biomarker that defines patient eligibility for anti-HER2 targeted therapies in breast cancer. Our results show that convolutional neural networks substantially agree with pathologist-based diagnosis. Furthermore, we found that convolutional neural networks highlighted cases at risk of misdiagnosis, providing preliminary evidence for the clinical utility of deep learning-aided diagnosis. More studies are needed to show not only the validity of deep learning, but also its utility in clinical practice to improve diagnostic accuracy. Beyond correlating artificial intelligence with human-made diagnosis, new study designs should be investigated to demonstrate that deep learning can improve clinical decision making.

Michel Vandenberghe is working at AstraZeneca, developing deep learning algorithms to analyse immunohistochemistry biomarkers and evaluating the potential uses of deep learning to support biomarker development and clinical decision making. Prior to that, he gained a PhD in Computer Science at University Pierre and Marie Curie, and a Doctorate in Pharmacy, at the University Paris Sud XI.

17:00

Conversation & Drinks

08:15

REGISTRATION

09:00

WELCOME

STARTUP SESSION

09:15

Fangde Liu, Imperial College London

SurgicalAI: Can We Make Surgeries Autonomous?

The Lancet reports that 5 billion people worldwide have no access to surgical services, with which one third of global death and disability could be averted. Recent technology has enabled fully autonomous surgery; however, fully autonomous surgical systems face many ethical and safety issues and bring many challenges for regulation. SurgicalAI is an autonomous surgical planning system living in the cloud, built in the hope that autonomous technology can improve surgery. We present our thinking on how to break down the barriers to technology deployment, and our work using AI to build patient-specific medical devices that make surgery safer, easier and more efficient.

Dr Fangde Liu is a Research Associate at Imperial College London, currently head of Imaging Informatics at the Data Science Institute. His work focuses on bringing autonomy technology into daily clinical practice, such as surgical robots and pharmacovigilance systems. He is the architect of the surgical navigation system of EDEN2020, the largest surgical robotics project in the EU, and currently manages several medical imaging big data projects for cardiac disease quantification and neurology pharmacovigilance. He is an expert on GPU and parallel computing and has contributed many technologies for medical image processing and surgery planning on GPUs. SurgicalAI is a new startup providing autonomous surgery planning and patient-specific medical device design on the GPU cloud.

09:35

Václav Potesil, Optellum

Helping Clinicians Cure Cancer Using Artificial Intelligence and Big Image Datasets

Optellum’s vision is to enable earlier, better cancer diagnosis and treatment, by using Machine Learning to unlock deep insights in huge image databases. Our platform, the Digital Image Biomarker, links scans with data and ground-truth outcomes mined from the Electronic Medical Records. By pooling the collective experience of thousands of clinicians to uncover patterns not obvious to the human eye, it will give every clinician expert-level decision support. In this talk, we will share our journey so far, in transforming a proof of concept towards an intelligent decision support system to be deployed in the clinic. We will discuss challenges in collecting and curating vast datasets in partnerships with leading hospitals, and addressing key technical, regulatory and business challenges.

Vaclav is a co-founder of Optellum, a startup formed by a team of AI, medical imaging and clinical experts who met at the University of Oxford. Optellum's vision is to enable earlier and better cancer diagnosis and treatment by using Machine Learning to unlock new insights in huge image databases. Vaclav holds an Oxford PhD in Computer Vision (lung cancer therapy planning), completed in collaboration with Siemens Molecular Imaging and Mirada Medical. He developed and launched pioneering medical robotics devices as Global Product Manager at Hocoma, the global market leader in neuro-rehabilitation exoskeletons. He has worked in 10 countries and speaks 7 languages.

09:55

Viktor Kazakov, SkinScanner

Disrupting Dermatology with Deep Learning

Viktor will go through the current challenges of dermatology and how deep learning could contribute to their solution. He will introduce SkinScanner, a deep learning algorithm for universal skin disease image classification, and demonstrate how transfer learning can be used to train a skin disease classification algorithm whose cross-validation accuracy approaches that of a trained human eye. He will also share a long-term vision of how deep learning technologies could be used to automate the diagnosis of skin diseases.

Viktor is co-founder of SkinScanner, a London-based startup specialized in using deep learning algorithms for skin condition image classification. SkinScanner’s ambition is to disrupt dermatology by making the early diagnosis of skin conditions quicker, cheaper and more accurate. Viktor holds a Master’s Degree from SciencesPo, Paris and is a member of the CFA Institute. Viktor is a full stack developer with several years of med-tech experience, having launched a number of mobile and desktop-based applications in the healthcare space.

10:15

Natalia Simanovsky, CVEDIA

Creating Training Sets Quickly and Easily for Computer Vision Applications for the Healthcare Sector

Computer vision sits at the forefront of improving the healthcare sector, with neural network models already identifying and analyzing medical images from many sources. To develop these models, scientists and researchers must train these systems on a large quantity of medical data, and the scale of data they need to work with is enormous and ever-growing. Holding back the development of computer vision applications is the tedious and cumbersome process of creating training sets, that is, collecting, preparing and managing large image datasets. In many cases, data management takes up more time than training; one of the biggest challenges in computer vision, therefore, is the current inefficiency of data collection, preparation and management. Imagine a platform that provides you with standardized versions of large, annotated medical datasets, so you no longer have to waste time converting large files into one single format. A platform that provides you with a series of flexible tools that normalize the data and allow you to search, filter and browse with no requirement for serious hardware capabilities. A platform that enables you to transform your raw datasets into augmented and preprocessed training sets quickly and easily. I will discuss the current challenges facing researchers and scientists in managing large image datasets, and the ways in which CVEDIA is helping data scientists simplify the data management process.

A graduate of the London School of Economics, Natalia's professional experience includes over 10 years of writing and research for a variety of global clients, including think tanks in the US, Canada and Israel; intergovernmental organizations, including the United Nations Office for the Coordination of Humanitarian Affairs; PR and advertising firms; financial services firms; and hi-tech startups. Having been invited to join CVEDIA, she is on a steep learning curve and is humbled to work alongside a team of incredibly forward-thinking technical geniuses.

10:35

Stephen Hicks

Stephen Hicks, OxSight

Enhancing Sight with Machine Learning and Augmented Reality

Enhancing Sight with Machine Learning and Augmented Reality

"Becoming blind" is commonly ranked in the top three fears that people have (the others are paralysis and cancer), and it's no wonder - our world is predominantly visual and blindness robs people of a great deal of independence. 40 million people are living with legal blindness which often prevents them from seeing the face of loved ones, reading to themselves, driving, and walking in crowded or dimly lit spaces. We have developed a sophisticated array of software to detect a wide range of objects and scenarios which we have built into a pair of almost regular looking glasses. Our aim is to boost any residual vision that a person might have to the point in which they can use their own memory of vision to see and function more effectively in everyday life. Machine learning and Computer Vision are at the heart of our software as they offer a wide range of abilities: from detecting objects and scenes to learning and tracking any arbitrary object, face or person.

Dr Stephen Hicks is a Lecturer in Neuroscience and Visual Prosthetics at the University of Oxford and founder of OxSight Ltd, a startup developing augmented reality systems to enhance daily vision for partially sighted people. Stephen holds a PhD from the University of Sydney and was the recipient of a number of awards including the Royal Society Award for Innovation in 2013 and the Google Global Impact Challenge Award in 2015.

Luca Bertinetto

Luca Bertinetto, University of Oxford

Enhancing Sight with Machine Learning and Augmented Reality

Enhancing Sight with Machine Learning and Augmented Reality

"Becoming blind" is commonly ranked in the top three fears that people have (the others are paralysis and cancer), and it's no wonder - our world is predominantly visual and blindness robs people of a great deal of independence. 40 million people are living with legal blindness which often prevents them from seeing the face of loved ones, reading to themselves, driving, and walking in crowded or dimly lit spaces. We have developed a sophisticated array of software to detect a wide range of objects and scenarios which we have built into a pair of almost regular looking glasses. Our aim is to boost any residual vision that a person might have to the point in which they can use their own memory of vision to see and function more effectively in everyday life. Machine learning and Computer Vision are at the heart of our software as they offer a wide range of abilities: from detecting objects and scenes to learning and tracking any arbitrary object, face or person.

Luca obtained a joint MSc in Computer Engineering from the Polytechnic University of Turin and Télécom ParisTech. He is currently in the third year of his PhD within the Torr Vision Group at the University of Oxford. The focus of his doctorate is learning representations from video when very little supervision is present, the so-called one-shot learning scenario. He is interested in applying these techniques to the problem of arbitrary object tracking, a key component of many AI-equipped video processing systems.

10:55

COFFEE

APPLICATIONS OF DEEP LEARNING IN HEALTHCARE

11:35

Anastasia Georgievskaya

Anastasia Georgievskaya, Beauty.AI

Deep Learning for Analyzing Perception of Human Appearance in Healthcare and Beauty

Deep Learning for Analyzing Perception of Human Appearance in Healthcare and Beauty

Deep learning techniques can be used to extract facial imaging biomarkers of human health status and to track the effects of cosmetic interventions. Here we present a set of tools for analyzing the perception of human age and health status. We also demonstrate that when certain population groups are under-represented in the training sets, these populations are left out or may be subject to higher error rates. This is why Youth Laboratories launched Diversity.AI, a think tank for preventing discrimination by deep-learned systems. The presentation describes strategies for evaluating human appearance for machine-human interaction and reveals the risks and dangers of deep-learned biomarkers.

Anastasia Georgievskaya is a co-founder and research scientist at Youth Laboratories, a company developing tools to study aging and discover effective anti-aging interventions using advances in machine vision and artificial intelligence. She helped organize Beauty.AI, the first beauty competition judged by a robot jury, and developed RYNKL, an app for tracking age-related facial changes and testing the effectiveness of various treatments. Anastasia has a degree in bioengineering and bioinformatics from Moscow State University. She has won numerous math and bioinformatics competitions and volunteered at some of the most prestigious companies in aging research, including Insilico Medicine.

12:00

Nils Hammerla

Nils Hammerla, Babylon Health

Deep Learning in Health - It's Not All Diagnostics

Deep Learning in Health - It's Not All Diagnostics

Diagnostics is not the only challenge that we need to solve in order to provide accessible and affordable healthcare to everyone on Earth. Your GP has many skills that we each take for granted but are still a major challenge for machines. This includes the ability to see, to understand language and medical concepts, and the ability to hold a goal-oriented conversation. Translating breakthrough findings from other application domains of machine learning to healthcare is key to achieving our vision of universal healthcare.

Nils Hammerla leads the machine learning team at Babylon, the UK's leading digital healthcare service. Its purpose is to democratise healthcare by putting an accessible and affordable health service into the hands of every person on earth. In order to achieve this, the company is bringing together one of the largest teams of scientists, clinicians, mathematicians and engineers to focus on combining the ever-growing computing power of machines with the best medical expertise of humans, to create a comprehensive, immediate and personalised health service and make it universally available. Nils holds a PhD in Computer Science from Newcastle University and has published extensively on the application of machine learning to a variety of challenges in healthcare, including automated assessment in Parkinson's disease, autism, rehabilitation and sports.

12:20

Daniel Nathrath

Daniel Nathrath, Ada Health

The AI Will See You Now: Will Your Doctor Be Replaced by an Algorithm?

The AI Will See You Now: Will Your Doctor Be Replaced by an Algorithm?

Daniel has lived and worked in Germany, Denmark, the UK and the USA as Founder, Managing Director and General Counsel at several internet startups. He also spent some years as a Consultant at the Boston Consulting Group. He trained as a lawyer in Germany and the USA, where he was a Fulbright Scholar, and earned his MBA from the University of Chicago.

12:40

LUNCH

13:50

Marzieh Nabi

Marzieh Nabi, Xerox PARC

Applications of AI and Machine Learning in Healthcare: Focus on Comorbidities

Applications of AI and Machine Learning in Healthcare: Focus on Comorbidities (Co-occurrence of Multiple Chronic Conditions)

Patients suffering from chronic conditions often have multiple heterogeneous disease processes of varying severities. These conditions and their comorbidities interact with each other to affect the physical state of the patient. We focus on patients suffering from congestive heart failure, a common condition affecting a large percentage of chronic patients. Several systems have been created to use physiologic data from measurements and tests to predict patient outcomes for such conditions. More recently, data scientists working with clinicians have created numerous machine learning models that use a patient's electronic healthcare records to predict mortality as well. These records have the advantage of showing a patient's disease progression and interactions over time, which adds a new dimension to mortality prediction. We use these records as input to a stacked deep Long Short-Term Memory (LSTM) network and achieve predictive accuracy better than the standard machine learning models used for this task. In addition, we endeavour to extract an explanation for these predictions from these hitherto opaque architectures, in order to enable their greater adoption as a diagnostic tool by clinicians.
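
The stacked-LSTM idea in the abstract can be sketched in miniature. This is an illustrative numpy-only toy, not the speakers' actual model: untrained random weights stand in for fitted parameters, and a synthetic visit sequence stands in for real electronic healthcare records. Each patient is a sequence of per-visit feature vectors; two LSTM layers are stacked, and the top layer's final hidden state is squashed to a mortality probability.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell: one weight matrix covering all four gates."""
    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(0, 0.1, size=(4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)           # input, forget, cell, output gates
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c_new = f * c + i * np.tanh(g)        # update cell memory
        h_new = o * np.tanh(c_new)            # expose gated hidden state
        return h_new, c_new

def predict_mortality(visits, layers, w_out, b_out):
    """Run a stack of LSTM cells over a visit sequence; squash the top
    layer's final hidden state to a probability."""
    states = [(np.zeros(l.n_hidden), np.zeros(l.n_hidden)) for l in layers]
    for x in visits:                          # one feature vector per visit
        inp = x
        for k, layer in enumerate(layers):
            h, c = layer.step(inp, *states[k])
            states[k] = (h, c)
            inp = h                           # feed hidden state to next layer
    return sigmoid(w_out @ states[-1][0] + b_out)

# Toy patient: 5 visits, 8 features each (labs, vitals, codes, ...).
n_features, n_hidden = 8, 16
stack = [LSTMCell(n_features, n_hidden), LSTMCell(n_hidden, n_hidden)]
w_out, b_out = rng.normal(0, 0.1, n_hidden), 0.0
visits = rng.normal(size=(5, n_features))
p = predict_mortality(visits, stack, w_out, b_out)
print(f"predicted mortality risk: {p:.3f}")
```

A production system would train these weights by backpropagation in a framework such as PyTorch or TensorFlow; the sketch only shows why a recurrent stack suits the data: the hidden state carries disease progression forward across visits.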

Marzieh is a scientist by profession and an entrepreneur at heart. Her research lies at the intersection of systems science, AI, and machine learning and their wide range of applications, from energy to transportation, aerospace, multi-agent and autonomous systems, and more recently healthcare. She graduated with a PhD in Aeronautics and Astronautics and an M.Sc. in Mathematics from the University of Washington at the end of 2012, with a focus on mathematical modelling, probabilistic analysis, distributed control and optimization, networked dynamic systems, and cyber-physical systems, and also obtained an executive MBA from Stanford's Graduate School of Business (Ignite, Summer 2015). Marzieh holds an Analyst in Residence (AIR) position at HealthTech Capital, an investment firm focusing on healthcare-related startups. She is also an Associate at Sand Hill Angels, helping with business analysis, technical analysis, and due diligence.

14:10

Valentin Tablan

Valentin Tablan, Ieso Digital Health

Artificial Neural Networks Giving Back - Applications of Deep Learning to Mental Health Therapy Provision

Artificial Neural Networks Giving Back - Applications of Deep Learning to Mental Health Therapy Provision

Ieso aims to revolutionise mental health by significantly improving outcomes, reducing costs, and dramatically lowering the barriers to accessing therapy. We do that by developing and enforcing a structured therapy process that codifies decades of experience and is continuously updated with the latest research. Since 2011 we have pioneered the approach of providing Cognitive Behavioural Therapy via an online channel. The NHS England IAPT programme ('Improving Access to Psychological Therapies') provides a national framework for outcomes measurement, and our results consistently surpass the national average. Having accumulated a sizeable dataset of therapy sessions and other types of patient communication, we are now able to start building models that introduce AI capabilities into our processes. These make our work more efficient and effective, and apply at all stages of the patient's journey, from diagnosis to recovery and relapse prevention. This talk will introduce Ieso's current work and its vision for mental health therapy, available to all who need it, in an AI-pervaded future.

Valentin is a principal scientist at Ieso, and heads their AI initiatives. He has worked on Natural Language Processing, Knowledge Representation, and Artificial Intelligence, spanning both symbolic methods, and machine learning, including deep learning. Prior to joining Ieso, he was the lead scientist on the question answering service that powers Amazon's Alexa smart assistant. Valentin has a PhD from the University of Sheffield, UK, where he also worked as a senior researcher on the popular 'GATE' open-source framework for text mining. He has authored more than 70 academic publications in journals and peer-reviewed conferences.

14:30

PANEL: What Trends and Opportunities Can be Expected for the Future of Healthcare?

Rowland Manthorpe

Rowland Manthorpe, WIRED

MODERATOR

Rowland Manthorpe is Associate Editor of WIRED, where he writes about technology and its impact on society. His award-winning writing has been published in the Guardian, the Economist and The Atlantic. He is also co-author of the philosophical novel Confidence, published by Bloomsbury in 2016.

Claire Novorol

Claire Novorol, Doctorpreneurs

PANELLIST

Claire is Co-founder and Chief Medical Officer of Ada, the first and only closed feedback loop, AI-powered, global consumer healthcare platform. She is also the founder of Doctorpreneurs, a professional network for doctors involved in startups and healthcare technology. She is based in London and travels frequently to Berlin. Previously Claire spent 10 years working in the NHS and as a Wellcome Trust funded academic clinician. She trained as a paediatrician at London teaching hospitals including Chelsea and Westminster and Great Ormond Street Hospital, before specialising in Clinical Genetics. She has a PhD in Neurobiology from the University of Cambridge.

Alex Matei

Alex Matei, Nuffield Health

PANELLIST

Alex is a Digital Health Manager at Nuffield Health. With an academic background in Software Engineering at University College London, he is now working on embedding machine learning into preventative health and wellbeing services. Within Nuffield, he advocates personalisation and tailoring across the customer journey. To improve health outcomes, Alex is investigating how behaviour change techniques can be amplified through artificial intelligence.

Aureli Soria-Frisch

Aureli Soria-Frisch, Starlab

PANELLIST

Dr.-Ing. Aureli Soria-Frisch obtained an MSc from the Polytechnic University of Catalonia (UPC) in 1995 and a PhD from the Technical University of Berlin. He is R&D Manager of the Neuroscience Business Unit at Starlab Barcelona, working on Deep Learning for Digital Health. His research interests focus on data and multi-sensory fusion, computational intelligence for data analysis, and machine learning for electrophysiological signal analysis. He was Project Manager of the FP7 HIVE project, is PI of an MJFF grant for machine-learning-based Parkinson's disease biomarker discovery, and is coordinator of the H2020 FET Open project LUMINOUS on the characterization of consciousness.

15:00

END OF SUMMIT
