Visual Localization & Mapping for Autonomous Driving


Image source: V-Charge Project by ETH Zurich Autonomous Systems Lab

The battle against climate change needs new mobility concepts, and advancing autonomous electric vehicles could be key in the global mission to reduce CO2 emissions. Congested city traffic and drivers searching for suitable parking spots are massive problems that must be addressed if we are to successfully combat climate change and meet sustainable development goals.

The V-Charge project, an EU consortium including Volkswagen, ETH Zurich, Bosch and the University of Oxford, was established to tackle these issues with traffic and parking by researching and building fully autonomous electric vehicles. The V-Charge concept requires state-of-the-art progress in multiple fields.

Mathias Bürki is a PhD Candidate in the Autonomous Systems Lab at ETH Zurich, working on the V-Charge project with a focus on visual localization and mapping systems for autonomous cars. At the Machine Intelligence in Autonomous Vehicles Summit in Amsterdam, Mathias will share expertise on bringing fully autonomous driving into urban environments, and visual localization and mapping systems in the context of autonomous driving. 

I spoke to him ahead of the summit to learn more about his work, and what we can expect to see in autonomous vehicles over the coming years.

How did you begin your work in autonomous vehicles?

An advertisement for an internship position at the autonomous driving department of Volkswagen Corporate Research in Wolfsburg immediately caught my attention. A few years back (2012), there was already a lot of momentum in autonomous driving research, and hence the prospect of contributing to this rising field was very attractive. The internship further grew my interest in the technologies involved. I therefore decided to stay in the field, finish my studies with a Master's Thesis on motion estimation for autonomous vehicles, and afterwards join the EU research project V-Charge, which aimed at developing an automated valet parking service.

What are the key factors that have enabled recent advancements in visual localization for autonomous vehicles?

As is the case for many fields in robotics, advancements in computer processing power have been a main driver for visual localization systems as well. With SURF, a first real-time-capable local feature descriptor was developed in 2006, enabling robust on-board processing of images for metric localization. Subsequent research on binary descriptors (BRIEF (2010), BRISK (2011), FREAK (2012)) further reduced the computational demands of real-time image processing on robotic platforms by allowing fast descriptor computation directly on regular CPUs. In addition, improvements in sensor systems, most notably high-resolution cameras, have improved image quality and allowed for increased localization precision.
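Part of what makes binary descriptors like BRIEF, BRISK and FREAK so cheap on regular CPUs is that they are compared with a Hamming distance, computed via XOR and a bit count, rather than a floating-point norm. The following sketch (illustrative only; real systems use optimized libraries such as OpenCV, and the 256-bit size and `max_dist` threshold here are assumptions for the toy example) shows brute-force nearest-neighbour matching of binary descriptors:

```python
import random

def hamming_distance(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors: XOR, then popcount."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(query, database, max_dist=64):
    """Brute-force nearest-neighbour matching of binary descriptors.

    Returns (query_index, database_index, distance) for every query
    descriptor whose best match is within max_dist bits.
    """
    matches = []
    for qi, q in enumerate(query):
        best_i, best_d = min(
            ((di, hamming_distance(q, d)) for di, d in enumerate(database)),
            key=lambda t: t[1],
        )
        if best_d <= max_dist:
            matches.append((qi, best_i, best_d))
    return matches

# Toy example: 256-bit descriptors, the size produced by e.g. BRIEF-256.
random.seed(0)
db = [random.getrandbits(256) for _ in range(100)]
noisy = db[42] ^ (1 << 5) ^ (1 << 100)  # same descriptor with two bits flipped
print(match_descriptors([noisy], db))   # -> [(0, 42, 2)]
```

Because two random 256-bit descriptors differ in roughly 128 bits on average, a threshold well below that (64 here) separates genuine correspondences from chance matches.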

What are the key challenges to progressing autonomous vehicles in an urban environment?

In my opinion, the biggest challenge for autonomous urban driving lies in true semantic understanding of traffic situations. Scenarios during urban driving can be extremely complex, involving a large number of different traffic participants (cars, cyclists, pedestrians, etc.). Additionally, certain procedures rely heavily on specific human skills, such as understanding human gestures, both when a person is directing traffic at an intersection and during negotiations between two traffic participants. Such complex situations are also often rare, and thus hard to train for properly with machine learning techniques. Another significant challenge, especially pronounced in urban driving, is the bending of rules by human traffic participants, a form of behavior which is difficult to predict and act upon correctly.

What developments can we expect to see in autonomous vehicles in the next 5 years?

I believe there will be advancements on two somewhat disconnected levels: On the one side, further development of ADAS technologies will allow for partially automated driving, mainly on highways and in traffic-jam situations. In many cases, such systems constitute little more than nice-to-have "gadgets", without actually offering many of the praised advantages of autonomous driving, since a human still carries the driving responsibility and therefore must monitor the system and be ready to intervene at any time.

On the other side, more and more level 5 autonomous driving pilot projects will emerge in somewhat small and (partially) restricted environments with potentially reduced complexity. This development is already clearly visible today, although in all of the currently ongoing pilot projects there is still a human safety driver on-board. Nevertheless, these projects offer the opportunity to soon deliver a clear user benefit, although only in special locations. Furthermore, they allow for testing and evaluating both user behavior and acceptance, as well as traffic law reforms necessary for fully autonomous driving in the future.

Outside of your field, what area of machine learning advancements excites you most?

Clearly, deep learning constitutes the field in machine learning with the most exciting recent developments and prospects for the future. The community has become very active in the recent past, and although the topic is clearly hyped to some degree (the same is true for autonomous driving), its potential and disruptive power seem enormous. Undoubtedly, deep learning will play a key role in solving many of the open problems in autonomous driving in the near future.

Mathias Bürki will be speaking at the Machine Intelligence in Autonomous Vehicles Summit in Amsterdam on 28-29 June, taking place alongside the Machine Intelligence Summit. View more information here.

Other confirmed speakers include Pablo Puente Guillen, Researcher, Toyota Motors; Jan Erik Solem, Co-founder & CEO, Mapillary; Sven Behnke, Head of Autonomous Intelligent Systems Group, University of Bonn; Damian Borth, Director of the Deep Learning Competence Center, DFKI; Julian Togelius, Associate Professor, NYU Tandon School of Engineering; and more.

Early Bird passes expire on Friday 12 May. Book your place now.

Opinions expressed in this interview may not represent the views of RE•WORK. As a result some opinions may even go against the views of RE•WORK but are posted in order to encourage debate and well-rounded knowledge sharing, and to allow alternate views to be presented to our community.


