Next-Level Social Robotics: The Personalised Family Bot

To create a truly socially interactive robot, researchers and scientists use extensive knowledge of human social behaviour, psychology, and a whole host of different methodologies and computational models. As technological advances in AI and robotics grow exponentially, can we expect to see robot butlers and companions becoming a normal part of family life?

Tin Lun Lam is Founder & CEO of NXROBO, a company that specializes in social robotics. Tin's research focuses on human-computer interaction, intelligent control, and novel mechanism design, and he has extensive experience in the development of automation systems such as telepresence robots and rescue robots.

NXROBO has created BIG-i, a family robot with natural language interaction, a movable body, and active perception, designed to manage smart appliances and act as a bridge between family members.

At the Virtual Assistant Summit in San Francisco, Tin will share his expertise on family robots, voice recognition, and programming for applications in daily life. I asked him a few questions ahead of the summit to learn more.

What is your main goal at NXROBO?
I believe that robots are made for humans and that everyone should have the right to enjoy the benefits of robotic technology. That is why I founded NXROBO and created BIG-i: to act as a bridge between family members. Family members can communicate with BIG-i using natural language, and can use voice programming to teach BIG-i how to respond when something happens. Even when you are away, BIG-i will still care for your loved ones. BIG-i takes charge of the daily grind so you are free to enjoy every precious moment. It helps everyone feel loved and forms a strong bond with the family.

Which industries do you feel will be most disrupted by virtual assistants and AI in the future?
In the near future, many customer service jobs handled by phone or email will be replaced by virtual assistants. Because the content falls within a well-defined scope, the answers to common questions can be specified in advance, and handling this kind of repetitive task is a strength of computers. In addition, physical agents powered by natural language processing will appear everywhere, such as homes, shopping malls and hotels, acting as intelligent interfaces and providing a different kind of service for humans. A natural language interface is far more natural for humans than manipulating a touchscreen keyboard. I think reception jobs will soon be replaced by robots with AI.

What do you feel is essential to future progress in this field?
Currently, almost all virtual assistants rely on text-based context extraction, which misses a great deal of the information carried by body movement, facial expressions, and tone of voice. To extract the context as accurately as possible, these other signals must be captured and analysed together with the text. Likewise, how the virtual assistant generates an appropriate emotional response is essential to future development, in order to provide a better user experience.

In your opinion, are we ready for emotional AI?
From a customer's point of view, we are always ready for emotional AI, as emotion is one of the essential elements of natural human communication. From a technical point of view, although many people are working to create AI that can mimic emotional understanding and response, there is still a long way to go before the experience is seamless. However, I am sure that it is the future.

What developments can we expect to see in virtual assistants in the next 5 years?
As the technology of emotional understanding and response is gradually improved, you may find it more and more difficult to distinguish real humans from virtual assistants in the coming years!

Tin Lun Lam will be speaking at the Virtual Assistant Summit in San Francisco on 26-27 January! Other speakers include Roberto Pieraccini, Director of Advanced Conversational Technologies at Jibo; Anjuli Kannan, Software Engineer at Google; Alonso Martinez, Technical Director at Pixar; Milie Taing, Founder of Lili.ai; Jordi Torras, CEO & Founder of Inbenta; and Lionel Cordesses, Innovation Project Manager at Renault.

Book your pass and view more speakers on the event website here.


