2016 saw impressive advancements in AI technology, such as AlphaGo beating Go grandmaster Lee Sedol. We have also seen great developments in image recognition, where we can expect that computers will one day read X-ray, MRI and CT scans more efficiently than radiologists, enabling quicker diagnosis of cancer. This is just one example of how rapidly deep learning is advancing and impacting the world we live in, from the way we shop to how we forecast energy to how we shape modes of transport.
We asked some of our influential speakers, who will be presenting at our deep learning summits this year, for their predictions for deep learning in 2017. Here are their forecasts:
In 2017, we will probably see further rapid exploration of applications of current deep learning techniques, as well as further theoretical advances improving robustness and sample efficiency. We will also see various fun new applications of deep learning to image and voice resynthesis. Within three years, developments in special-purpose AI hardware may give us orders-of-magnitude faster compute. This would enable the application of unsupervised learning to video data, which, fused with reinforcement and supervised learning, would bring us closer to general AI. It will still be based mainly on deep neural networks, backpropagation and SGD.
Durk will be sharing his latest work on Improving Variational Autoencoders with Inverse Autoregressive Flow at the Deep Learning Summit, 26-27 January in San Francisco. Tickets are now limited; confirm your place here.
Deep Learning has made incredible progress since 2012, most notably in image and speech recognition. I expect 2017 to be the year where our industry starts to fully embrace video and, consequently, replace image-based visual representations with a deeper, more fine-grained understanding of the world. Unlike images, videos can teach neural networks that the world is three-dimensional; that it contains more or less independent objects; that there are physical concepts such as gravity, material types or object permanence.
Over the next three years, a better understanding of how the world works will start to spread to other domains, such as language processing, where it will lead to better natural language and dialog systems through proper grounding of linguistic concepts. This may start a feedback loop, in which better language capabilities make it easier to provide the supervision signals needed to train better systems. But all of this can and will start with videos, which, starting in 2017, will allow networks to learn much more about the world than they currently know.
At the Deep Learning Summit in San Francisco, Roland will show how neural networks can learn from data to make fine-grained predictions about actions and situations.
In the next few years, we should expect to see deep learning systems that are capable of learning from fewer examples and less experience, and systems that learn online as more data and experience become available. I also expect to see much better generative models of the future. Prediction is central to human cognition and planning, and is immensely useful for artificial agents. Video prediction is an active area of research now, but significant improvement is still needed, both in video length and frame quality, before these methods can be generally useful.
Chelsea will be presenting at the Deep Learning Summit, 26-27 January in San Francisco. Chelsea will share her work on how robots can learn mental models of the visual world and imagine the outcomes of their actions, as well as her vision for the future of deep robotic learning.
I think that the growing community of engaged researchers and developers will bring a lot of interesting architecture and training solutions. But two trends I am seeing are the most exciting and promising: generative models and transfer learning. While the basic concept of both is not new, only now are we seeing such methods applied to real-world problems, such as the creation of new molecules with desired properties or 3-D photo reconstruction. I think we could accelerate the drug development process significantly just by changing the lead generation process. Generative models with deep architectures have the potential to generate new targeted molecules and to replace the blind screening of lead compounds. Transfer learning could increase the rate of translation from model organisms into the clinic.
At the Deep Learning in Healthcare Summit, 28 February - 01 March in London, Polina will be discussing the Application of Deep Neural Networks to Biomarker Development. Early Bird passes end this Friday - 6 January.
2017 will be another banner year for deep learning and AI. We will see speech become a popular way to interact with machines, made possible by deep learning technologies that keep boosting accuracy with more data and computing power. We'll also see big changes in hardware designed specifically to support AI and deep learning. Chipmakers are already designing and integrating AI-specific features into their products, and this will enable us to train and ship bigger neural networks than ever before. That will help make speech, vision, language and other AI technologies even better this year and make it possible to wire them into homes, cars, and mobile applications.
In the next three years we'll see AI's impact ripple through many industries. There will be advances everywhere: logistics, medicine, finance and more. Enterprises will have access to cutting-edge AI technology built by tech giants like Baidu through cloud platforms and APIs. Hiring of AI and machine learning talent will keep growing as the impact of AI expands. Machine learning skills are among the most valuable in Silicon Valley today, and that will remain true for years to come. Demand for AI education and the number of engineers with AI skills will grow dramatically, fueling the next wave of AI innovations in products and businesses.
To hear more from Durk, Roland, Chelsea and Adam, as well as Shivon Zilis from Bloomberg and Andrew Tulloch from Facebook, register now for the Deep Learning Summit, San Francisco. Apply the discount code NEWYEAR by 6 January to get 20% off all summit tickets. View the full agenda here. Places are now very limited!
Other deep learning predictions:
The Deep Learning Summit will also be running alongside the Virtual Assistant Summit, 26-27 January in San Francisco.
Upcoming Deep Learning Events Include:
26 January 2017, San Francisco
The Deep Learning Summit is the next revolution in artificial intelligence. The increasingly popular branch of machine learning explores advances in methods such as image analysis, speech and pattern recognition, natural language processing, and neural network research. This summit will explore how deep learning algorithms and methods are being applied to solve challenges in industries including healthcare, manufacturing, transport, security and communications.
26 January 2017, San Francisco
The next generation in predictive intelligence: anticipating user & business needs to alert users & advise on logical next steps that increase efficiency. The summit will showcase the opportunities of advancing trends in VAs & their impact on business & society. What impact will predictive intelligence have on business efficiency & personal organization?
21 February 2017, London
Leading minds in machine intelligence will come together for an evening of networking and keynote presentations. Join us for a three course meal to support women in AI and Machine Intelligence.