New Approaches to Unsupervised Domain Adaptation

The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. 

Despite their appeal, however, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Dilip Krishnan, Research Scientist at Google, is working on two approaches to the problem of unsupervised visual domain adaptation, both of which outperform current state-of-the-art methods. He will share these approaches, alongside other insights, during his presentation at the Deep Learning Summit in Boston.
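The interview does not spell out which two approaches these are, but to make the problem concrete, below is a minimal sketch of one widely used family of unsupervised domain adaptation methods: domain-adversarial feature alignment, where a gradient reversal layer trains the feature extractor so that a domain classifier cannot tell synthetic (source) features from real (target) features. The framework choice (PyTorch), layer sizes, and the weight `lam` are illustrative assumptions, not the methods presented at the summit.

```python
# A minimal sketch of domain-adversarial training for unsupervised domain
# adaptation. Only the synthetic (source) batch has task labels; the real
# (target) batch is unlabeled. All sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
label_clf = nn.Linear(256, 10)   # task head: trained on labeled synthetic data only
domain_clf = nn.Linear(256, 2)   # domain head: synthetic vs. real
opt = torch.optim.Adam(list(features.parameters()) +
                       list(label_clf.parameters()) +
                       list(domain_clf.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x_syn, y_syn, x_real, lam=0.1):
    """x_syn, y_syn: labeled synthetic batch; x_real: unlabeled real batch."""
    f_syn, f_real = features(x_syn), features(x_real)
    task_loss = ce(label_clf(f_syn), y_syn)
    # Domain labels: 0 = synthetic, 1 = real. The gradient reversal layer makes
    # the feature extractor work *against* the domain classifier, encouraging
    # domain-invariant features.
    f_all = GradReverse.apply(torch.cat([f_syn, f_real]), lam)
    d_labels = torch.cat([torch.zeros(len(x_syn), dtype=torch.long),
                          torch.ones(len(x_real), dtype=torch.long)])
    domain_loss = ce(domain_clf(f_all), d_labels)
    opt.zero_grad()
    (task_loss + domain_loss).backward()
    opt.step()
    return task_loss.item(), domain_loss.item()

# Example call with random stand-in tensors:
# train_step(torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,)),
#            torch.randn(64, 1, 28, 28))
```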

I spoke to him ahead of the summit on 25-26 May to learn more about his work and what we can expect from the deep learning field in the next few years.

Can you tell us more about your work, and give us a short teaser of your session?
I am a Research Scientist in Google's office in Cambridge, MA. I work on supervised and unsupervised deep learning for computer vision. In my talk, I will focus on my work in the area of Domain Adaptation, where networks trained for a task in one domain (e.g. computer graphics imagery) can generalize to other domains (e.g. real-world images). This allows us to leverage large amounts of synthetic data with ground truth labels. This work has applications in Robotics and other domains where labeled training data is expensive to collect.
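As a toy illustration of why synthetic data is attractive, the sketch below renders simple images whose ground-truth labels come for free from the rendering parameters. The shapes and dataset are entirely hypothetical stand-ins for the computer graphics imagery mentioned above, not data from the talk.

```python
# Toy illustration: because we render the images ourselves, the ground-truth
# label of every example is known by construction, so annotation costs nothing.
import numpy as np

def render_example(rng, size=32):
    """Render a filled square or disc at a random position; the label is the shape."""
    img = np.zeros((size, size), dtype=np.float32)
    label = int(rng.integers(2))                   # 0 = square, 1 = disc
    cx, cy = rng.integers(8, size - 8, size=2)
    r = int(rng.integers(3, 7))
    if label == 0:
        img[cy - r:cy + r, cx - r:cx + r] = 1.0    # square
    else:
        yy, xx = np.ogrid[:size, :size]
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r * r] = 1.0   # disc
    return img, label

rng = np.random.default_rng(0)
images, labels = zip(*(render_example(rng) for _ in range(10_000)))
images, labels = np.stack(images), np.array(labels)
# The labeled synthetic set is essentially free; the hard part is making a model
# trained on it transfer to real photographs, which is where domain adaptation
# methods come in.
```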

What started your work in deep learning?
I studied for my PhD at New York University, under the supervision of Rob Fergus, and in the same lab as Yann LeCun, one of the pioneers of deep learning. I was a co-author on the first paper on deconvolutional networks, which are useful as visualization and synthesis tools for deep convolutional networks.

What are the key factors that have enabled recent advancements in deep learning?
Clearly, large amounts of data and compute power are the biggest factors. These allow us to build larger models that can ingest larger amounts of training data. Better optimization methods (Adam, AdaDelta) and better tools (e.g. TensorFlow, distributed/asynchronous model training) have also played a role in enabling more efficient engineering.
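For concreteness, here is roughly what "better optimization methods" means in practice: switching between SGD, Adam, and AdaDelta is a one-line change. PyTorch is used purely for illustration, and the learning rates are library defaults rather than tuned values.

```python
# Minimal sketch: the same training step with three different optimizers.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                                  # stand-in for a deep network
sgd      = torch.optim.SGD(model.parameters(), lr=0.01)   # fixed global step size
adam     = torch.optim.Adam(model.parameters(), lr=1e-3)  # adaptive per-parameter step sizes
adadelta = torch.optim.Adadelta(model.parameters())       # no hand-tuned learning rate needed

x, y = torch.randn(32, 10), torch.randn(32, 1)
for opt in (sgd, adam, adadelta):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```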

Which industries do you think deep learning will benefit the most and why?
Initially it will be industries/applications with large amounts of fairly clean labeled data. Examples are internet industries (Google, Facebook). Medical imaging applications can also benefit. We are seeing huge traction with intelligent voice-based assistants such as Amazon's Alexa and Google Home. In the medium term, self-driving cars powered by deep learning systems will arrive. Longer term, better generative models could impact fields such as art and music.

What advancements in deep learning would you hope to see in the next 3 years?
Better models for unsupervised learning, and generative models. Also, more robust models for supervised learning, which are less susceptible to adversarial examples. Finally, better theory to explain models.

Dilip Krishnan will be speaking at the Deep Learning Summit in Boston on 25-26 May, taking place alongside the Deep Learning in Healthcare Summit. Confirmed speakers include Carl Vondrick, PhD Student, MIT; Sanja Fidler, Assistant Professor, University of Toronto; Charlie Tang, Research Scientist, Apple; Andrew Tulloch, Research Engineer, Facebook; and Jie Feng, Founder, EyeStyle. View more speakers here.

Early Bird tickets are available until Friday 31 March for the summits in Boston. Register your place here.

[Image: Visual groupings applied to image patches, frames of a video, and a large scene dataset. Work by Dilip Krishnan, Daniel Zoran, Phillip Isola & Edward Adelson, more here.]


