By Sophie Curtis on December 17, 2015
Named one of MIT Technology Review's '35 Innovators Under 35' for 2015, Ilya Sutskever has become a well-known name in the deep learning field. Just this week he hit headlines when he was announced as Research Director of OpenAI, a non-profit company formed with Elon Musk, Sam Altman and Greg Brockman, with the goal of advancing artificial intelligence that benefits humanity.
After completing his PhD with the Machine Learning Group at the University of Toronto under Geoff Hinton, Ilya went on to co-found DNNresearch (later acquired by Google) with Hinton and fellow graduate Alex Krizhevsky, and completed postdoctoral work at Stanford University with Andrew Ng's group. Until his new appointment at OpenAI this week, he was a research scientist on the Google Brain team.
In his talk at the Deep Learning Summit in San Francisco next month, Ilya will discuss what deep networks are and why they work so well, as well as surveying some of the exciting recent applications and research frontiers. I caught up with him ahead of the summit to hear more.
What are the key factors that have enabled recent advancements in deep learning?
- Sufficiently fast computers
- The availability of sufficiently large, high-quality labelled datasets
- Algorithms, techniques, and skills for training large deep nets

What are the main types of problems now being addressed in the deep learning space?
At present, large and deep neural networks are applied to a very large variety of problems. For example, there have been nearly 50 product launches within Google, all addressing different problems.
What are the practical applications of your work and what sectors are most likely to be affected?
The practical applications are vast, mainly because deep learning algorithms are largely domain-agnostic. Perception has already been affected. In the near future, I think that robotics, finance, medicine, and human-computer interaction are very likely to be affected. I don't think that this list is exhaustive, however.
What developments can we expect to see in deep learning in the next 5 years?
We should expect to see much deeper models, models that can learn from many fewer training cases compared to today's models, and substantial advances in unsupervised learning. We should expect to see even more accurate and useful speech and visual recognition systems.
What advancements excite you most in the field?
I am very excited by the recently introduced attention models, due to their simplicity and due to the fact that they work so well. Although these models are new, I have no doubt that they are here to stay, and that they will play a very important role in the future of deep learning.

Ilya Sutskever will be speaking at the RE•WORK Deep Learning Summit in San Francisco, on 28-29 January 2016. Other speakers include Andrew Ng, Baidu; Clement Farabet, Twitter; Naveen Rao, Nervana Systems; Pieter Abbeel, UC Berkeley; and Andrej Karpathy, Stanford University.

The Deep Learning Summit is taking place alongside the Virtual Assistant Summit. For more information and to register, please visit the event website here.
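To give a sense of the simplicity Sutskever praises, here is a minimal sketch of soft attention as a weighted sum: scores between a query and a set of keys are passed through a softmax, and the resulting weights blend the corresponding values. The function names and vector shapes are illustrative assumptions, not code from any specific paper.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of floats."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def attend(query, keys, values):
    """Soft attention sketch: weight each value by how well its key
    matches the query (dot-product similarity, then softmax)."""
    weights = softmax([dot(query, k) for k in keys])
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Illustrative usage: the query aligns with the first key, so the
# output is pulled toward the first value vector.
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attend([2.0, 0.0], keys, values)
```

Because attention is differentiable end to end, the same few lines slot into a larger network and are trained by ordinary backpropagation, which is a large part of why the mechanism caught on so quickly.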