Back at the Machine Intelligence in Autonomous Vehicles Summit in Amsterdam, today’s compere Julie Choo kicked off the session by asking the room a question:
‘Who caught an Uber to the event today? Who caught an Uber in the last month? That’s practically the whole room. Uber is taking over - is that an assumption we can make? It is if you’re an Uber customer, but not if you’re a displaced driver working for other taxi companies - and one day all Uber drivers will be displaced too. Lots of jobs will be gone as AI and machine intelligence come in and disrupt the industry.’
Yesterday we heard from the likes of TomTom, Mapillary, EasyMile, and NEVS about their journey towards getting fully autonomous vehicles on the road and the obstacles that they’re currently facing in this industry. Safety, environment perception, sustainability, and general public apprehension are only a few of the challenges these companies are facing, and today we delved further into the progression of self-drive cars and heard about how machine intelligence is pushing this further.
‘We hear about self-drive cars all the time, but you know what’s really cool, autonomy for aircraft!’
Jasmine Kent, CEO of Daedalean AG
Jasmine Kent began this morning’s discussions by talking to us about autonomy in the air, exploring the obstacles and regulations that currently prevent human-carrying autonomous aircraft from being deployed. Electric aircraft currently in circulation have ‘a fantastic property in that they’re quieter than helicopters, but can still land in the same space’. These electric machines, however, still require a pilot, ‘so what do we do? We make them autonomous.’ Human error is the biggest cause of aviation accidents, so Airbus concluded that ‘electrically operated aerial vehicles combined with more autonomous features are far safer’ than human-operated planes.
‘Whilst drones are reaching higher levels of autonomy than crewed aircraft, they aren’t built to the same standard as personal aircraft. Occasionally drones fall out of the sky - reliability and safety levels are far below what would be considered acceptable for human transport.’
The solution Daedalean are proposing?
To build a system that can pass the commercial pilot exam. This would allow the system to satisfy SAE’s ranking system, which quantifies aircraft safety requirements. To do this, Jasmine explained that they are currently defining the requirements, collecting data with their high-quality simulation to analyse the metrics, and building a system with deep-learned components.
Huaji Wang, Researcher, Cranfield University
With our feet back on the ground, we next heard from Huaji Wang from Cranfield University, who is currently working on automated driving. Although fully autonomous driving (and flight!) has been the core focus of the Summit’s discussions so far, he explained that ‘although we’re pushing towards fully autonomous vehicles, we need to stay in the loop with automated driving because it’s going to take a long time to reach the end goal.’ Recently there have been accidents and near misses where test cars have crashed because they failed to identify certain objects, and until this failure rate becomes minute, driverless vehicles cannot become a mainstream reality. Additionally, as humans we sometimes want to drive recreationally, so the driver needs to be given the choice to operate the vehicle independently - which is also why the SAE defines six levels of autonomy. Huaji said that ‘we need to design a system that combines human intelligence with machine intelligence to make autonomous driving much safer.’ To achieve this, Huaji has been working with two kinds of systems, one steering-based and one braking-based. The steering-based system focuses on swerving away from objects, while the braking-based system employs an emergency stop feature to avoid collision.
‘Emergency steering assistant is most beneficial for highway driving, braking is more effective in city driving because vehicle speeds are lower’ - Continental AG.
Huaji’s research applies game-theoretic modelling methods to analyse driver behaviour within these collision avoidance systems, using semi-autonomous and automated driving as the stepping stone to fully autonomous vehicles.
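The trade-off in the Continental quote above can be sketched in a few lines of code. This is purely illustrative - not Continental’s or Cranfield’s actual logic, and the deceleration value and speeds are invented - but it shows why braking suits lower city speeds while steering becomes the only option at highway speeds:

```python
# Illustrative sketch: choose between an emergency braking and an emergency
# steering intervention based on whether the car can stop within the gap.
# The 8 m/s^2 deceleration is an assumed figure for hard braking on dry tarmac.

def choose_intervention(speed_mps: float, gap_m: float,
                        decel_mps2: float = 8.0) -> str:
    """Return 'brake' if the car can stop within the gap, else 'steer'."""
    stopping_distance = speed_mps ** 2 / (2 * decel_mps2)  # v^2 / (2a)
    if stopping_distance <= gap_m:
        return "brake"   # lower speeds: the car stops short of the object
    return "steer"       # higher speeds: swerving is the remaining option

# City speed (~30 km/h = 8.3 m/s) with a 10 m gap needs only ~4.3 m to stop,
# so braking wins; highway speed (~120 km/h = 33.3 m/s) needs ~69 m, so with
# a 30 m gap the system would have to steer around the obstacle instead.
```

Stopping distance grows with the square of speed, which is exactly why the effective intervention flips between city and highway driving.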
Emanuel Ott, Customer Success Technologist, iMerit
Continuing the discussion, and also drawing on safety and collisions, the issue of semantic segmentation was raised by Emanuel Ott from iMerit. Currently, the way we make algorithms ‘see’ is through annotation for machine learning and computer vision. It’s ‘no longer a secret that lots of companies are working on self driving cars’, and many of them want to understand what a road scene contains. To label this, semantic segmentation is employed to partition an image into semantically meaningful parts and classify each part into a predetermined class. At the end of the day, it’s important ‘for clients to receive high levels of accuracy and precision to use as their supervised training data.’ There are of course challenges in the semantic labelling of datasets: images are manually labelled, which is time consuming - an hour per image is incredibly laborious, especially given the size of current datasets and the need for fast processing. The four main categories that slow down image analysis are ‘tool functionality, scene complexity, quality expectation, and definitions and guidelines’. To overcome these issues, Emanuel proposes the implementation of a standardised set of rules and taxonomies for labelling these images, similar to the rules that human drivers follow when operating vehicles. As he quite rightly states, ‘self driving cars are safer when they talk to each other’ and ultimately even safer if the rules they operate by are universal.
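To make the idea concrete, here is a toy illustration of what a semantic segmentation label looks like. The class list and the tiny 3×4 ‘image’ are invented for the example (not iMerit’s actual taxonomy): every pixel gets exactly one class index, turning the image into a label map of the same shape:

```python
# Toy semantic segmentation mask: one class index per pixel. The class names
# below are hypothetical examples, not any company's real labelling taxonomy.

CLASSES = ["road", "car", "pedestrian", "sky"]

mask = [
    [3, 3, 3, 3],   # top row: sky
    [1, 1, 0, 0],   # a car sitting on the road
    [0, 0, 0, 2],   # road with a pedestrian at the edge
]

def class_coverage(mask, classes):
    """Fraction of pixels assigned to each class - a simple annotation QA check."""
    total = sum(len(row) for row in mask)
    counts = {name: 0 for name in classes}
    for row in mask:
        for idx in row:
            counts[classes[idx]] += 1
    return {name: counts[name] / total for name in classes}

coverage = class_coverage(mask, CLASSES)
# e.g. coverage["road"] == 5/12 and coverage["sky"] == 4/12
```

Doing this by hand for a real camera frame - millions of pixels, dozens of classes, ambiguous boundaries - is what makes an hour per image plausible, and why consistent guidelines matter so much.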
Eugene Tsyrklevich, CEO, Parkopedia
We have heard a lot about the actual driving of cars on the road, but what about manoeuvring vehicles in tighter scenarios? Eugene Tsyrklevich from Parkopedia spoke to us about parking cars autonomously and explained how machine and deep learning can help with this. ‘Where can I park? This question is asked time and time again by too many drivers.’ Especially in busy cities, tracking down a space and parking quickly and safely isn’t an easy task. Eugene explained how Parkopedia are using huge datasets (over 1 billion data points) every day to analyse and deliver predictions on the available parking spaces in your area. This not only alleviates stress, but is much more efficient, safer, and more environmentally friendly than having drivers circle neighbourhoods over and over again. Not only do they accurately find a space; they can predict the availability of spaces at any given time, and they are working towards fully autonomous parking using machine learning methods.
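As a rough sketch of what availability prediction means - emphatically not Parkopedia’s actual model, just the simplest baseline such a service might start from - one could estimate, from historical observations, the probability that a space is free at a given hour:

```python
# Minimal availability baseline (illustrative only): per-hour frequency of a
# parking space being observed free. The observation data below is invented.

from collections import defaultdict

def train(observations):
    """observations: list of (hour, was_free) pairs from past data."""
    free = defaultdict(int)
    total = defaultdict(int)
    for hour, was_free in observations:
        total[hour] += 1
        free[hour] += int(was_free)
    # Probability the space is free, per hour of day
    return {h: free[h] / total[h] for h in total}

history = [(9, False), (9, False), (9, True), (14, True), (14, True)]
model = train(history)
# model[9] is 1/3 (busy morning), model[14] is 1.0 (quiet afternoon)
```

A production system would of course fold in location, day of week, events, and live sensor feeds, but the core idea - turning a billion observed data points into a probability of finding a space - is the same.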
Couldn’t make it to Amsterdam? Our next Machine Intelligence Summit will take place in Hong Kong, 12 & 13 April 2017. Find out more here.