
REGISTRATION

WELCOME

Neil Lawrence - Recently Elected DeepMind Professor of Machine Learning - University of Cambridge/University of Sheffield
Welcome


THE DEEP LEARNING LANDSCAPE


Neil Lawrence - Recently Elected DeepMind Professor of Machine Learning - University of Cambridge/University of Sheffield
The Data Delusion: Challenges for Democratising Deep Learning
Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda. In this talk I will give an overview of where we are now with machine learning solutions, and what challenges we face both in the near and far future. These include the practical application of existing algorithms in the face of the need to explain decision-making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.
Neil Lawrence is a Professor of Machine Learning at the University of Sheffield. His main technical research interest is machine learning through probabilistic models. He focuses on both the algorithmic side of these models and their application. He has a particular interest in applications in personalized health and in the developing world. Neil is well known for his work with Gaussian processes, and has proposed Gaussian process variants of many of the successful deep learning architectures. He is also an advocate of the ideas behind “Open Data Science” and active in public awareness (see https://www.theguardian.com/profile/neil-lawrence) and community organization. He has been both program chair and general chair of the NIPS Conference.


Raia Hadsell - DeepMind
Deep Reinforcement Learning in Complex Environments
Where am I, and where am I going, and where have I been before? Answering these questions requires cognitive navigation skills--fundamental skills which are employed by every intelligent biological species to find food, evade predators, and return home. Mammalian species, in particular, solve navigation tasks through integration of several core cognitive abilities: spatial representation, memory, and planning and control. I will present current research which demonstrates how artificial agents can learn to solve navigation tasks through end-to-end deep reinforcement learning algorithms which are inspired by biological models. Further, I will show how these agents can learn to traverse entire cities by using Google Street View, without ever using a map.
Raia Hadsell, a senior research scientist at DeepMind, has worked on deep learning and robotics problems for over 10 years. Her thesis on Vision for Mobile Robots won the Best Dissertation award from New York University, and was followed by a post-doc at Carnegie Mellon's Robotics Institute. Raia then worked as a senior scientist and tech manager at SRI International. Raia joined DeepMind in 2014, where she leads a research team studying robot navigation and lifelong learning.

Ben Medlock - SwiftKey
As co-founder and CTO of SwiftKey, Ben Medlock invented the intelligent keyboard for smartphones and tablets that has transformed typing on touchscreens. The company’s mission is to make it easy for everyone to create and communicate on mobile.
SwiftKey is best known for its smart typing technology which learns from each user to accurately autocorrect and predict their most-likely next word, and features on more than 250 million devices to date. SwiftKey Keyboard for Android is used by millions around the world and recently went free on Google Play after two years as the global best-selling paid app. SwiftKey Keyboard for iPhone and iPad launched in September 2014, following the success of iOS note-taking app SwiftKey Note. SwiftKey has been named the No 1 hottest startup in London by Wired magazine, ranked top 5 in Fast Company’s list of the most innovative productivity companies in the world and has won a clutch of awards for its innovative products and workplace. Ben has a First Class degree in Computer Science from Durham University and a PhD in Natural Language and Information Processing from the University of Cambridge.



COFFEE


Irina Higgins - Research Scientist - DeepMind
Early Visual Concept Learning with Unsupervised Deep Learning
Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of "objectness".
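
To make the learning pressures above concrete, here is a minimal sketch, entirely our own and with made-up module names and sizes, of a VAE objective whose KL term is up-weighted (beta > 1), in the spirit of the beta-VAE formulation associated with this line of work:

```python
# Minimal sketch (our assumption, not the speaker's code) of a VAE loss with
# an up-weighted KL term, one way to impose the "redundancy reduction" and
# statistical-independence pressures the abstract describes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=4096, z_dim=10):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        return self.dec(z), mu, logvar

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    recon = F.mse_loss(x_hat, x, reduction="sum")                 # reconstruction
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
    return recon + beta * kl  # beta > 1 pressures factors towards independence

x = torch.rand(8, 4096)              # a batch of flattened toy images
x_hat, mu, logvar = TinyVAE()(x)
loss = beta_vae_loss(x, x_hat, mu, logvar)
```
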
Irina Higgins is a Research Scientist at Google DeepMind, where she works in the Neuroscience team. Her work aims to bring together insights from the fields of machine learning and neuroscience to advance artificial intelligence. Before joining DeepMind, Irina was a British Psychological Society Undergraduate Award winner for her achievements as an undergraduate student in Experimental Psychology at Westminster University, followed by a DPhil at the Oxford Centre for Computational Neuroscience and Artificial Intelligence, where she focused on understanding the computational principles underlying speech processing in the auditory brain. During her DPhil, Irina also worked on developing poker AI, applying machine learning in the finance sector, and working on speech recognition at Google Research.

REINFORCEMENT LEARNING


Murray Shanahan - Professor of Cognitive Robotics - Imperial College London
Enhancing Deep Reinforcement Learning with Symbolic Reasoning
Despite their dramatic successes, contemporary deep reinforcement learning methods have certain shortcomings. Because they rely on the statistics of large datasets, they tend to learn very slowly. We see this, for example, in DeepMind's DQN, which attains superhuman performance at (certain) Atari games after a very large number of plays, but takes much longer than a human would to reach beginner level. Humans, by contrast, are able to generalise much more quickly. Here I will discuss ongoing work that aims to supplement deep learning with a symbolic component in order to achieve rapid generalisation at a high level of abstraction.
Murray Shanahan is Professor of Cognitive Robotics in the Dept. of Computing at Imperial College London, where he heads the Neurodynamics Group. Educated at Imperial College and Cambridge University (King’s College), he became a full professor in 2006. His publications span artificial intelligence, robotics, logic, dynamical systems, computational neuroscience, and philosophy of mind. He was scientific advisor to the film Ex Machina, and regularly appears in the media to comment on artificial intelligence and robotics. His book “Embodiment and the Inner Life” was published by Oxford University Press in 2010, and his latest book “The Technological Singularity” was published by MIT Press in August 2015.

COMPUTER VISION


Miriam Redi - Research Scientist - Bell Labs Cambridge
Can Machines See The Invisible?
In this talk we will explore the invisible side of visual data, investigating how machine learning can detect subjective properties of images and videos, such as beauty, creativity, sentiment, style, and more curious characteristics. We will see the impact of such detectors in the context of web and social media, and we will analyse the valuable contribution of computer vision to understanding how people and cultures perceive visual properties, underlining the importance of feature interpretability for this task.
Miriam Redi is a Research Scientist in the Social Dynamics team at Bell Labs Cambridge. Her research focuses on content-based social multimedia understanding and culture analytics. In particular, she explores ways to automatically assess visual aesthetics, sentiment and creativity, and exploit the power of computer vision in the context of web, social media, and online communities. Miriam got her Ph.D. at the Multimedia group in EURECOM, Sophia Antipolis. After obtaining her PhD, she was a Postdoc in the Social Media group at Yahoo Labs Barcelona and a Research Scientist at Yahoo London.



LUNCH


Dmitry Ulyanov - PhD Student - Skolkovo Institute of Science & Technology
Image Artistic Style Transfer, Neural Doodles & Texture Synthesis
Recent advances in image style transfer have enabled incredible end-user applications. First, Gatys et al. demonstrated that deep neural networks can generate beautiful textures and stylized images from a single example. The core idea of the method was then used to create so-called neural doodles. While the visual quality of both style transfer and neural doodles was astonishing, the methods required a slow and memory-consuming optimization process, which limited their usage. We have recently improved the speed of both algorithms significantly, while preserving the quality. This allows almost real-time stylization on a GPU and has been used as a core technology in several successful applications. In this talk we review and discuss these methods.
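
As background for the talk, here is a hedged sketch of the Gram-matrix style loss at the core of Gatys et al.'s original method; the fast feed-forward variants the speaker developed train a generator network against essentially this loss. The normalisation and layer handling here are our illustrative choices:

```python
# Sketch of the Gram-matrix style loss from Gatys et al.; illustrative only.
import torch

def gram_matrix(feats):
    # feats: (batch, channels, height, width) activations from one conv layer
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)  # channel co-occurrence statistics

def style_loss(gen_feats, style_feats):
    # Sum of squared Gram differences across a list of layers.
    return sum(((gram_matrix(g) - gram_matrix(s)) ** 2).sum()
               for g, s in zip(gen_feats, style_feats))

# Dummy activations standing in for two conv layers of generated/style images.
gen = [torch.rand(1, 64, 64, 64), torch.rand(1, 128, 32, 32)]
style = [torch.rand(1, 64, 64, 64), torch.rand(1, 128, 32, 32)]
loss = style_loss(gen, style)
```
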
Dmitry pursued his Master's degree at Moscow State University and is currently enrolled in PhD studies at the Skolkovo Institute of Science and Technology. His research supervisors are Victor Lempitsky (Skolkovo) and Andrea Vedaldi (Oxford). He is also employed at Yandex Research (Yandex is the #1 search engine in Russia). His research focuses mostly on style transfer and super-resolution. He is also interested in more general research on improving neural network performance.


AFFECTIVE COMPUTING & FACIAL RECOGNITION


Nadia Berthouze - Professor in Affective Computing - UCL
Bringing the Affective Body Outside the Controlled World: A Challenge for Deep Learning
Affect-aware technologies are becoming popular, but they remain confined to very controlled situations using a limited set of channels. These restrictions are due in part to the complexity of the affective phenomena themselves, and in part to the current lack of embedded wearable sensors. Progress is being made on the latter, with sensors now being integrated into clothes, making it possible to capture with greater bandwidth the contexts in which emotional experiences take place. Being able to infer affective states in ubiquitous and uncontrolled scenarios requires new modelling paradigms capable of dealing with high-dimensional unlabelled data. In my talk, I will present what we have learned from our studies on how body movement and touch express how we feel, and the need and opportunity to extend these studies into everyday life.
Nadia Bianchi-Berthouze is a Full Professor in Affective Computing and Interaction at the Interaction Centre of the University College London (UCL). Her research focuses on designing technology that can sense the affective state of its users and use that information to tailor the interaction process. She has pioneered the field of Affective Computing, first investigating body movement and more recently touch behaviour as means to recognize, measure and steer the quality of the user experience in full-body computer games, physical rehabilitation and textile design.




Hongying Meng - Assistant Professor - Brunel University
Deep Learning for Facial Expression Analysis
Facial expression analysis has become a popular research topic in recent years thanks to collective multidisciplinary efforts from researchers in computer science, psychology, and cognitive science. Artificial intelligence has made significant contributions to facial expression analysis, which can be used in the design of advanced human-machine interaction systems, intelligent robots and computer games. It can also be used for mental health analysis, for conditions such as dementia and autism, and for clinical diagnosis applications such as shoulder pain and low back pain. In my talk, I will present what we have developed on building automatic emotion analysis systems and how deep learning has been applied in these systems with improved performance.
Dr Hongying Meng is a lecturer (assistant professor) in the Department of Electronic and Computer Engineering at Brunel University London, UK. He is also a member of the Institute of Environment, Health and Societies, and of the Human Centred Design Institute (HCDI) there. He has wide research interests, including digital signal processing, machine learning, human-computer interaction, image processing and embedded systems. His present research focuses on image processing and machine learning (deep learning) with applications such as facial expression analysis. He has developed two facial expression analysis systems that won the international challenge competitions AVEC2011 (http://sspnet.eu/avec2011/) and AVEC2013 (http://sspnet.eu/avec2013/) respectively.


DEEP LEARNING SYSTEMS


Arjun Bansal - VP of Algorithms & Co-Founder - Nervana Systems
Catalyzing Deep Learning’s Impact in the Enterprise
Deep learning is in the early stages of unlocking tremendous economic value beyond its impact in the large technology companies. While the algorithms have revolutionized consumer experiences in domains as varied as speech interfaces, image search, language translation, and game AI, enterprises are relatively early in their efforts to apply these algorithms to domains such as improving automotive speech interfaces, agricultural robotics and genomics, financial document summarization, and finding anomalies in IoT data. Individual data scientists can draw on several open source frameworks and basic hardware resources during the initial investigative phases, but quickly require significant hardware and software resources to build and deploy production models. Nervana has built a deep learning platform to make it easy for data scientists to start from the iterative, investigatory phase and take models all the way to deployment. Nervana's platform is designed for speed and scale, and serves as a catalyst for all types of organizations to benefit from the full potential of deep learning.
Arjun is a co-founder and VP of Algorithms (heading ML/DL & Data Science) at Nervana. His prior work has spanned neurophysiology and large-scale machine learning. His interests are artificial intelligence, virtual reality, brain-machine interfaces, entrepreneurship, and tennis.

COFFEE

PANEL: What can be Done to Make Deep Learning as Impactful as Possible in the Near-Term?
Shaona Ghosh - University of Cambridge
Despite the ubiquity of mobile and wearable text messaging applications, the problem of keyboard text decoding has not been tackled sufficiently, in light of the enormous success of deep learning recurrent neural networks (RNNs) and convolutional neural networks (CNNs) for natural language understanding. In particular, keyboard decoders must operate on devices with memory and processor resource constraints, which makes it challenging to deploy industrial-scale deep neural network (DNN) models. In this talk, we will cover a sequence-to-sequence neural attention network system for automatic text correction and completion. Given an erroneous sequence, our model encodes character-level hidden representations and then decodes the revised sequence, thus enabling auto-correction and completion. Unlike traditional language models that learn from billions of words, our corpus size is only 12 million words, orders of magnitude smaller. The memory footprint of our learnt model for inference and prediction is also an order of magnitude smaller than that of conventional language-model-based text decoders. We report baseline performance for neural keyboard decoders in this resource-constrained domain.
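
As a rough illustration of the architecture the abstract describes, here is a compact character-level sequence-to-sequence corrector with dot-product attention; this is our sketch under assumed sizes and vocabulary, not the speaker's system:

```python
# Illustrative character-level seq2seq corrector with dot-product attention.
# Vocabulary, dimensions and the missing training loop are all assumptions.
import torch
import torch.nn as nn

class CharCorrector(nn.Module):
    def __init__(self, vocab=128, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(2 * dim, vocab)

    def forward(self, noisy, target_in):
        enc, h = self.encoder(self.emb(noisy))         # encode erroneous characters
        dec, _ = self.decoder(self.emb(target_in), h)  # decode the revised sequence
        attn = torch.softmax(dec @ enc.transpose(1, 2), dim=-1)  # dot-product scores
        context = attn @ enc                           # attended encoder summary
        return self.out(torch.cat([dec, context], dim=-1))  # per-step char logits

model = CharCorrector()
noisy = torch.randint(0, 128, (2, 20))      # e.g. "helko worjd" as byte ids
logits = model(noisy, torch.randint(0, 128, (2, 20)))
```
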
Shaona is a researcher in Machine Learning and NLP at Apple Inc. Previously, she was a postdoc at the Department of Engineering, University of Cambridge, where she worked on developing deep learning sequence-to-sequence algorithms for prediction and auto-correction in keyboard decoders. Before that she was a postdoc in Machine Learning at the NVIDIA GPU Center of Excellence, OeRC, University of Oxford. She has a PhD in Machine Learning from the University of Southampton, UK. She was an Area Chair of the Women in Machine Learning Workshop in 2017 and has been a reviewer for NIPS and the Machine Learning Journal, among others.
She has worked with and contributed to United Nations policy-making on sustainable data development approaches using machine learning and on bridging the digital gender divide in AI. As a finalist for IET Young Woman Engineer in the UK in 2017, she has been featured in Cosmopolitan UK. She has been selected as an ambassador for the Year of Engineering with the UK Government's Department for Transport and the IET. Her work in Cambridge was shortlisted for the WISE Tech Innovation Award in 2017. Her work at HP Labs led to the merger of different business units within HP and to a multi-national, multi-year commercialization project. She has been nominated for the Telegraph's top 50 Women in Engineering. She was twice awarded by Samsung Electronics for her work on innovative healthcare, using mobile phones as health sensors and predicting abnormalities using machine learning. She will be awarded the Hind Rattan (Jewel of India) award in January 2018.


Amir Banifatemi - XPRIZE
Amir Banifatemi is the Prize Lead of the IBM Watson AI XPRIZE. Prior to joining XPRIZE, Mr. Banifatemi began his career at the European Space Agency and then held executive positions at Airbus, AP-HP and the European Commission division for information society and media. He managed two venture capital funds and contributed to the formation of more than 10 startups with emphasis on Predictive Technologies, IoT, and Healthcare. Mr. Banifatemi is a guest lecturer and an adjunct MBA professor at UC Berkeley, Chapman University, Claremont McKenna College, UC Irvine, and HEC Paris.
He holds a Master's degree in Electrical Engineering from the University of Technology of Compiègne, a doctorate in System Design and Cognitive Sciences from the University Paris Descartes, and an MBA from HEC Paris.


Simon Edwardsson - Aipoly
Simon Edwardsson is a Swedish software developer and entrepreneur. He is the co-founder of Aipoly, a startup using artificial intelligence to give the sense of sight to the blind. Before Aipoly he was the R&D Lead at Retail Solutions Inc, a well-known big data company with 60% of the world retail analytics market share. Simon has experience in leading teams of senior technologists and software developers in the UK, US, and China. Having taught himself to code at the age of 6, he has worked on projects ranging from smartphone apps to worldwide supply chain management platforms and state-of-the-art deep learning.


Will Heaven - New Scientist
Will Douglas Heaven is a writer and editor. For the last 4 years he has worked at New Scientist, first as a reporter, then features editor and most recently chief technology editor. From October he will be a technology writer and editor with BBC Worldwide. Will has a PhD in computing from Imperial College London. Before moving into journalism, he was a computing researcher for several years at Imperial and UCL. He also has degrees in Philosophy, English Literature and Science Communication.




Jack Watts - Industry Business Development Manager, Deep Learning - NVIDIA
Deep Learning in Industry with NVIDIA
Artificial intelligence won’t be an industry; it will be part of every industry. Jack will talk through some of today's industry use cases for deep learning and how companies are leveraging NVIDIA's deep learning platform and the world’s first supercomputer for deep learning.
AI will be part of every industry. During my talk I will share the key industries that are leading the field in AI research and platform development, as well as the latest news from NVIDIA's deep learning platform, including our deep learning supercomputer and embedded modules for robotics/IoT. I'll also bring a special guest on stage to talk about why they chose an NVIDIA DGX-1!
Jack Watts, Industry Business Development for Deep Learning at NVIDIA, has been in the IT industry for over 8 years, with expertise in delivering x86 server solutions and in managing worldwide supplier and customer relations. Jack joined NVIDIA in 2014 to work with the increasing number of industry start-ups and commercial companies who are leveraging NVIDIA technology in their deep learning research and applications. Jack loves reading about all of the research being done and the ever-expanding use cases for NVIDIA GPU technology in teaching computers to see, hear, speak and more.


Derek Wise - Benevolent.ai
Derek is a veteran technology manager who has done everything from top-secret cryptographic systems for the US Marines to building and scaling some of the most successful online games on the planet. Derek is currently the VP of Engineering at Benevolent.ai, working on bringing deep learning to drug discovery as well as other complex applications in the future.



CONVERSATION & DRINKS

REGISTRATION

WELCOME


STARTUP SESSION


Victor Botev - CTO - Iris AI
Input-Tailored Concept Maps - A Way to Facilitate AI for Navigating Scientific Knowledge
Iris AI uses non-semantic models to apply text understanding techniques to the scientific body of knowledge. The current algorithm is a mixture of neural topic models, keyword extraction and heuristic functions that together form an input-tailored concept map filled in with scientific articles. The current approach facilitates both existing unsupervised and supervised techniques for AI training. To verify our progress we use state-of-the-art metrics, and moreover we conduct real-life experiments comparing our tool to existing tools in the field through sci-thons. The results show great potential for new techniques in text understanding, and will change the way people navigate scientific knowledge.
Victor Botev is the CTO of Iris AI. Before joining Iris he was an Artificial Intelligence Research Engineer at Chalmers University in Gothenburg, Sweden. He has conducted research on clustering and predictive neural network models, as well as the use of signal processing techniques in studying big data. As a Master's thesis student at CPAC Systems AB he worked on the development of an autonomous pavement compactor. Previously he was a senior software developer at Pinexo, a tech lead at Skrill and a web developer at Seedburger AG. He also holds a second Master's degree (Artificial Intelligence) and a BSc (Software Development) from Sofia University.


Dağhan Çam - AI Build
3D Printing with Autonomous Robotics and AI
The presentation will showcase the recent work of Ai Build, a London-based company developing Artificial Intelligence and Additive Manufacturing technologies for the built environment. Current methods of construction are wasteful, labour-intensive, unsafe and time-consuming. They are also incapable of creating complex forms. Ai Build's large-scale robotic 3D printing technology combines computer vision and machine learning techniques to automate low-volume manufacturing processes and enable mass customization of designs by architects, designers and engineers.
Daghan Cam is the co-founder and CEO of Ai Build, a London-based startup developing Artificial Intelligence and Additive Manufacturing technologies for the built environment. He is also a visiting lecturer at University College London, doing research on robotic fabrication, large-scale 3D printing and parallel algorithms with GPU computing. His work focuses on developing intelligence for automating complex tasks in design and manufacturing by using computer vision and machine learning techniques. Before starting Ai Build, he worked at Zaha Hadid Architects and ran his own architectural design practice. He holds a master's degree with distinction from the Architectural Association.



Alex Dalyac - Co-Founder & CEO - Tractable
Addressing the Labeling Bottleneck in Computer Vision for Learning Expert Tasks
Thanks to deep learning, AI algorithms can now surpass human performance in image classification. However, behind these results lie tens of thousands of man-hours spent annotating images. This significantly hinders commercial applications where cost and time to market are key. At Tractable, our solution centers on creating a feedback loop from learning algorithm to human, turning the latter into a “teacher” rather than a blind labeler. Dimensionality reduction, information retrieval and transfer learning are some of our core proprietary techniques. We will demonstrate a 15x labeling cost reduction on the expert task of estimating from images the cost to repair a damaged vehicle – an important application for the insurance industry.
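
Tractable's techniques are proprietary, so the following is only a generic stand-in for the feedback loop described: uncertainty sampling, where the model nominates the images it is least sure about and only those are sent to the human teacher for labels.

```python
# Generic active-learning sketch (our stand-in, not Tractable's method):
# rank unlabelled images by predictive entropy and label the most uncertain.
import numpy as np

def select_for_labelling(probs, budget=100):
    # probs: (n_images, n_classes) predicted class probabilities
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)  # per-image uncertainty
    return np.argsort(-entropy)[:budget]  # most uncertain go to the human teacher

probs = np.random.dirichlet(np.ones(5), size=1000)  # dummy model predictions
to_label = select_for_labelling(probs)              # indices to show the expert
```
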
Alex is Co-founder & CEO of Tractable, a young London-based startup bringing recent breakthroughs in AI to industry. Tractable's current focus is on automating visual recognition tasks. Its long term vision is to expand into natural language, robot control, and spread disruptive AI throughout industry. Tractable was founded in 2014 and is backed by $2M of venture capital from Silicon Valley investors, led by Zetta Venture Partners. Alex has a degree in econometrics & mathematical economics from the LSE, and a postgraduate degree in computer science from Imperial College London. Alex's experience within Deep Learning investing is on the receiving side, particularly on how to attract US venture capital into Europe as early as the seed stage.

Viktor Taranenko - Whisk
Layer Cake - AI in Food
Deep learning is demonstrating impressive results these days, and NLP is definitely among the areas that can benefit significantly. Whisk redesigned their core engine for recipe understanding and product matching with deep bidirectional LSTM networks on top of the TensorFlow library. This solution not only immediately outperformed previous methods based on context-free grammars, but is also far easier to maintain. The new system allows Whisk to operate on a much larger set of recipes with significantly less manual intervention, and to keep improving as Whisk expands its training set.
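
For readers who want to picture the model class, below is a minimal bidirectional LSTM tagger of the kind described, labelling each recipe token with a role. Whisk's engine is built on TensorFlow; this PyTorch stand-in, with an invented tag set and sizes, is purely illustrative:

```python
# Illustrative deep bidirectional LSTM tagger for recipe lines; the tag set
# (QUANTITY / UNIT / INGREDIENT / OTHER) and all sizes are our assumptions.
import torch
import torch.nn as nn

class IngredientTagger(nn.Module):
    def __init__(self, vocab=10000, dim=64, tags=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * dim, tags)  # forward + backward state per token

    def forward(self, tokens):
        h, _ = self.lstm(self.emb(tokens))
        return self.out(h)                   # per-token tag logits

tagger = IngredientTagger()
logits = tagger(torch.randint(0, 10000, (1, 12)))  # "2 cups chopped onion ..."
```
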
As CTO of Whisk.com, Viktor leads an agile team of 10 people following Lean practices in product development. With extensive implementation and architectural experience with distributed technologies and compute engines, Viktor has been one of the technology leaders behind Whisk for 5 years. Recently he has been building a data science team, which has been implementing modern machine learning approaches and techniques to scale parts of Whisk.




Edward Challis - Co-Founder & CEO - re:infer
Building AI to Better Understand & Respond to Your Customers
Businesses are talking to their customers more and more. Every customer conversation is an opportunity to directly engage with your customer, to recommend them products, and to better understand them and the user experience. But supporting these conversations and extracting insights from them is expensive and difficult. re:infer provides a solution, powered by the latest advances in AI, that does the heavy lifting to help businesses better understand and interact with their customers. Using machine learning to understand customer conversations is hard. More often than not, customers use informal, poorly spelt and ambiguous language. Because of this, traditional computer science techniques that use hand-coded rules fail. And since most businesses have insufficient training data, traditional machine learning methods do not work in this context either. To solve this problem we take a different approach. In this talk I'll describe the business problem we're solving, the engineering constraints it imposes, and how we've built a novel deep learning natural language processing system to solve it.
AI and machine learning have dominated my entire professional career — I’ve been working in this space for over 10 years. I gained my PhD and MSc in AI from the research groups at UCL and Edinburgh. My research has been published in the leading AI journals and conferences, including NIPS, AISTATS, Neural Computation, the Journal of Machine Learning Research and NeuroImage. Outside of academia and before re:infer, I helped to build production AI technologies for problems in search, ad-tech, consumer intent modelling and finance.



COFFEE
DEEP LEARNING APPLICATIONS & REAL-USE CASES


Eiso Kant - Co-Founder & CEO - source{d}
Source Code Abstracts Classification Using CNN
Convolutional neural networks (CNNs) are becoming the standard approach for many machine learning problems, usually involving image, audio or natural language data. At source{d} we are trying to apply common and novel deep learning patterns to problems where software developers and projects are the input, which is something very different. We are standing at the beginning of a fascinating journey, but already have something to share. In this talk I am going to present the bits of our SourceNN deep neural network that enable classification of short source code fragments (50 lines) taken randomly from several projects. The input features are extracted by a syntax highlighter and look similar to the minimaps in source code editors.
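
To make the minimap idea concrete, here is an illustrative sketch under our own assumptions (not SourceNN's actual architecture): a 50-line fragment rendered as a lines-by-columns grid with one channel per syntax-highlighter token class, classified by a small CNN.

```python
# Sketch: classify a code fragment rendered as a minimap-like tensor.
# Channel count, grid size and number of target projects are assumptions.
import torch
import torch.nn as nn

n_token_classes, n_projects = 8, 10   # highlighter token classes, projects
net = nn.Sequential(
    nn.Conv2d(n_token_classes, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, n_projects),
)
minimap = torch.rand(1, n_token_classes, 50, 80)  # 50 lines x 80 columns
project_logits = net(minimap)
```
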
Eiso Kant is the co-founder & CEO of source{d}.




Jeffrey de Fauw - Research Engineer - DeepMind
Detecting Diabetic Retinopathy with Convolutional Neural Networks
Diabetic retinopathy is retinal damage caused by diabetes, potentially leading to loss of vision and even blindness. In his talk Jeffrey will reflect on his experience of building a model, using convolutional neural networks, to grade the severity of diabetic retinopathy in high-resolution fundus images (images of the back of the eye). He did this work in the context of the Kaggle Diabetic Retinopathy Detection competition, where he finished fifth.
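
For orientation, here is a toy sketch of the task setup: a convolutional classifier mapping a fundus photograph to the five Kaggle severity grades (0 = no retinopathy to 4 = proliferative). The competition models were far deeper and trained on heavily augmented high-resolution images, so treat this purely as a schematic:

```python
# Schematic only: tiny CNN grading fundus images into five severity levels.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 5),                      # grades 0..4
)
fundus = torch.rand(1, 3, 512, 512)        # downscaled fundus photograph
severity_logits = cnn(fundus)
```
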
Jeffrey De Fauw studied pure mathematics at Ghent University before becoming more interested in machine learning problems through Kaggle competitions. Soon after he was introduced to (convolutional) neural networks and has since spent most of his time working with them. Besides always looking for challenging problems to work on, he has also become very interested in trying to find more algebraic structure in methods of representation learning.




John Overington - Director of Bioinformatics - Benevolent.ai
AI is Changing the Drug Discovery Paradigm
Drug discovery is a challenging business: despite the huge societal and commercial benefits of new drugs, it is incredibly difficult to discover and develop new therapies, with typically around 30 new drugs developed per year from the entire worldwide pharma and biotech R&D budget. The reasons for this are complex, but the bottom line is that the vast majority of started projects do not successfully finish; there is huge attrition from a scientist's initial idea through the discovery and clinical development stages. We are developing powerful, real-world-evidence-based artificial intelligence solutions to address drug discovery. Key to recent progress is the availability of large quantities of data, high-performance computing, and developments in deep learning approaches to mine for hypotheses that can be rationally scored and prioritised for success.
John studied Chemistry at Bath, graduating in 1987. He then studied for a PhD at Birkbeck College on protein modelling, followed by a postdoc at ICRF (now CRUK). John then joined Pfizer, eventually leading a multidisciplinary group combining rational drug design, informatics and structural biology. In 2000 he moved to a start-up biotech company, Inpharmatica, where he developed the drug discovery database StARLite. In 2008 John moved to the EMBL-EBI, where the successor resource is known as ChEMBL. Most recently John joined Benevolent.ai, where he continues his research as director of bioinformatics. In this role, John is involved in integrating deep learning and other AI approaches into drug target validation and drug optimisation.




Ali Parsa - CEO - Babylon Health
How Artificial Intelligence will Change the Face of Healthcare
Everyone in the world is facing differing degrees of the same issue - the accessibility and affordability of healthcare. For some, the problem is convenience, cost or speed. For others, the issue is more serious, with almost 50% of the world having little access to quality healthcare. Yet four unstoppable trends are coming together to see the creative reconstruction of medicine within the next decade. The result will be a service that is more accessible, effective and democratic, irrespective of where people live. Today, everyone has near equal access to everything that is digital. The same may soon be happening to healthcare, and these trends show why:
1 - Diagnostics is improving at double the rate of Moore’s Law
2 - Information is free and getting smarter
3 - Smartphones and “the internet of everything” will create a global channel of healthcare delivery
4 - Intervention will be unrecognisable
These four trends are melting all that is solid in medicine into air. How it will develop no one can know, but one thing is for sure: a very different model of healthcare delivery is unfolding, and it should make the future of healthcare significantly smarter and better value for everyone.
Ali is an engineer and healthcare entrepreneur, and the founder and CEO of babylon, the UK’s leading digital healthcare service. Its purpose is to democratise healthcare by putting an accessible and affordable health service into the hands of every person on earth. In order to achieve this, the company is bringing together one of the largest teams of scientists, clinicians, mathematicians and engineers to focus on combining the ever-growing computing power of machines with the best medical expertise of humans, to create a comprehensive, immediate and personalised health service and make it universally available. Launched in February 2015, the service now has over 600,000 registered users globally. babylon’s home-grown success has seen the company expand into Europe and Africa in 2016, with a second head office located in Rwanda. Around 120 businesses, including Citigroup, BNY Mellon, LinkedIn and leading employee benefits and health insurance providers, have partnered with babylon to offer its services to UK employees. Further, the company has partnered with the NHS to make its services available to the broader UK population. Prior to babylon, Ali founded Circle and built it within a few years into Europe's largest partnership of clinicians, with some £200m of revenue, nearly 3,000 employees and a successful IPO. Earlier, Ali received the Royal Award for the Young Entrepreneur of the Year for founding his first business, V&G, and the Healthcare Entrepreneurial Achievement Award for establishing Circle. Ali was named by The Times among the 100 global people to watch, and by HSJ among the 50 most influential people in healthcare. Ali is the UK Cabinet Office Ambassador for Mutuals and has a PhD in Engineering Physics.



LUNCH


Nic Lane - Associate Professor - UCL
Deep Learning for Embedded Devices: The Next Step in Privacy-Preserving High-Precision Mobile Health and Wellbeing Tools
State-of-the-art models that, for example, recognize a face, track emotions, or monitor activity are increasingly based on deep learning principles. But bleeding-edge health tools, like smartphone apps and wearables, that require such user information must rely on less reliable learning methods to process data locally, because of the excessive device resources demanded by deep models. In this talk, I will describe our research towards a complete rethinking of how existing forms of deep learning execute at inference time on embedded health platforms. Not only does this radically lower energy, computation and memory requirements; it also significantly increases the utilization of commodity processors (e.g., GPUs, CPUs) -- and even emerging purpose-built hardware, when available.
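
One concrete technique in this space, offered as our example rather than the talk's specific method, is post-training dynamic quantization: weights are stored as int8, cutting the memory and compute cost of on-device inference.

```python
# Example of shrinking a model for embedded inference via dynamic quantization.
# The tiny model and sensor-vector input are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)       # int8 weights for Linear layers
activity_logits = quantized(torch.rand(1, 256))  # e.g. a sensor feature vector
```
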
Nic Lane is a Principal Scientist at Bell Labs, where he is a member of the Internet of Things research group. Before joining Bell Labs, he spent four years as a Lead Researcher at Microsoft Research in Beijing. Nic received his Ph.D. from Dartmouth College (2011); his dissertation pioneered community-guided techniques for learning models of human behavior. These algorithms enable mobile sensing systems to better cope with the diverse user populations and conditions routinely encountered in the real world. More broadly, Nic's research interests revolve around the systems and modeling challenges that arise when computers collect and reason about people-centric sensor data. At heart, he is an experimentalist who likes to build prototype sensing systems based on well-founded computational models.




Armando Vieira - Lead Data Scientist - Bupa Global
The Challenges of Applying Deep Learning in Corporate Business
Armando Vieira is a physicist turned data scientist. He started working on machine learning after his PhD in Physics in 1997. He has been an aficionado of artificial neural networks from the beginning, and recently has focused on deep neural networks, especially for unsupervised and semi-supervised learning problems. He has more than 50 publications and is writing a book on business applications of deep learning. He works as a data science consultant for several companies and startups.


DEEP LEARNING APPLICATIONS IN ROBOTICS


Ingmar Posner - Associate Professor in Engineering Science - University of Oxford
Deep Learning from Lots of Demonstration
Ingmar is an Associate Professor in Engineering Science at the University of Oxford specialising in applied machine learning solutions for robot perception and decision making. He is a long-standing member of the Mobile Robotics Group (now the Oxford Robotics Institute) where he leads research in machine perception and planning. His research is guided by his vision to create machines which constantly improve through use in their dedicated workspace by implicitly leveraging expert demonstrations in a manner entirely transparent to the user. Highlights of his work include state-of-the-art approaches to deep and shallow object detection, semantic segmentation, tracking and inverse reinforcement learning. Ingmar has coauthored over 40 research publications and is the recipient of a number of best paper awards at international robotics conferences such as ISER and ICAPS. He serves on the board of IJRR, the premier international robotics research journal and has repeatedly served as area chair and programme committee member for reputed conferences in robotics and machine learning. Recently Ingmar led a team to develop and demonstrate the first autonomous urban concept vehicle on a purpose built slow-speed racetrack at Shell’s Make the Future London event. In 2014 Ingmar also co-founded Oxbotica, a leading provider of mobile autonomy software solutions including the Selenium autonomy stack, which underpins a variety of public and commercial autonomous vehicle programmes such as the LUTZ and GATEWay projects in Milton Keynes and Greenwich. In 2015 Oxbotica was singled out by the Wall Street Journal as one of the top ten EMEA technology startups.


FACE ANALYSIS & CREATIVE COMPUTING


Tae-Kyun (T-K) Kim - Associate Professor - Imperial College London
Conditional Convolutional Neural Network for Modality-aware Face Recognition
We propose a conditional convolutional neural network, named c-CNN, to handle multimodal face recognition. Unlike a traditional CNN, which adopts fixed convolution kernels, samples in a c-CNN are processed with dynamically activated sets of kernels. The activations of the convolution kernels in a given layer are conditioned on the present intermediate representation and the activation status of the lower layers. The activated kernels across layers define sample-specific adaptive routes that reveal the distribution of the underlying modalities. The proposed method is implemented by incorporating a binary decision tree, and is evaluated on multimodal face recognition problems.
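
A simplified sketch of the conditional-kernel idea follows, using soft routing as a stand-in for the paper's tree-structured binary decisions; all names and sizes are our illustrative assumptions:

```python
# Simplified conditional convolution: a routing signal computed from the
# current representation weights several kernel sets, so different samples
# (modalities) effectively take different routes. Soft stand-in for c-CNN.
import torch
import torch.nn as nn

class ConditionalConv(nn.Module):
    def __init__(self, c_in=16, c_out=16, n_routes=2):
        super().__init__()
        self.kernels = nn.ModuleList(
            nn.Conv2d(c_in, c_out, 3, padding=1) for _ in range(n_routes))
        self.router = nn.Linear(c_in, n_routes)

    def forward(self, x):
        gate = torch.softmax(self.router(x.mean(dim=(2, 3))), dim=-1)  # per sample
        outs = torch.stack([k(x) for k in self.kernels], dim=1)
        return (gate[:, :, None, None, None] * outs).sum(dim=1)  # weighted routes

layer = ConditionalConv()
y = layer(torch.rand(4, 16, 32, 32))   # four face crops, route chosen per sample
```
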
T-K Kim is an Associate Professor and the leader of the Computer Vision and Learning Lab at Imperial College London, UK. He obtained his PhD from the University of Cambridge in 2008 and held a Junior Research Fellowship there from 2007 to 2010. His research interests primarily lie in tree-structured classifiers for articulated hand pose estimation, face recognition from image sets and videos, and 6D object pose estimation. He has co-authored over 40 papers in top-tier conferences and journals, and his co-authored algorithm for face image retrieval is part of the MPEG-7 ISO/IEC standard. He is a co-recipient of the KUKA best paper award at ICRA14, and general co-chair of the CVPR15/16 workshops on HANDS and the ICCV15 workshop on Object Pose.



Terence Broad - Artist & Research Engineer - Goldsmiths, University of London
Autoencoding Blade Runner
Generative models for images have come a long way in recent years. This talk recalls the creative exploration of an unconventional use of a convolutional autoencoder trained with a learned similarity metric. By using all the frames from Blade Runner as the training dataset for this model, we have a generative model that has learned the distribution of scenes from Blade Runner. We use this model to reinterpret the film, and to reinterpret other films based on its understanding of Blade Runner.
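
As a sketch of the model class involved, here is a toy convolutional autoencoder; the learned similarity metric used to train the actual project is omitted, and all sizes are our assumptions:

```python
# Toy convolutional autoencoder: encode a film frame to a small code and
# decode it back; "reinterpreting" a film means reconstructing it frame by
# frame through the model. Illustrative only.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
    nn.Conv2d(64, 8, 3, padding=1),                        # compact code
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(8, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)
frame = torch.rand(1, 3, 128, 128)        # one downscaled film frame
reconstruction = decoder(encoder(frame))  # the model's "memory" of the frame
```
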
Terence is an artist and research engineer at Goldsmiths, where he recently completed his Masters in Creative Computing. His focus has been on the generative capabilities and creative applications of deep learning, developing an interactive topological visualisation of a convolutional neural network, and remaking the film Blade Runner with a convolutional autoencoder.



END OF SUMMIT

TALENT EXPO - Networking break to meet potential new employees
Coffee Break & Recruitment - 3.25-4pm