Creating a More Humane Artificial Intelligence

By Sophie Curtis on November 26, 2015

Virtual assistants have received a lot of attention as helpful tech that will make our lives easier, whether it's finding the quickest route to our destination, controlling smart home products, or suggesting restaurants and other daily-life enhancements. But how can we make intelligent agents more humane, so they integrate seamlessly into our lives?

At Botanic, the team is building conversational characters that act as an interface. Driven by machine intelligence, these virtual assistants understand context and state for task completion, using speech and animation to provide contextually and emotionally appropriate responses to human input. One of the many benefits is the rich data this collects: people's actions are better predicted in the context of their feelings and attitudes than through text alone.

At the RE•WORK Virtual Assistant Summit in San Francisco on 28-29 January, Mark Stephen Meadows, Founder & President of Botanic, will discuss how human communication is largely based on body language, and how intelligent agents likewise need to both express themselves and accurately read the conversant's body language. He will present specific solutions for driving animation from natural language output, and for gathering the conversant's affective body language as input. This is a circuit of natural language input and output, and understanding this circuit helps improve the design of communication and helps build personalities – the very UX of NLP.
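One way to picture that circuit: score the sentiment of the assistant's natural-language output and use it to select an accompanying animation cue. The sketch below is purely illustrative - the lexicon, cue names, and thresholds are invented for this example and are not Botanic's actual system:

```python
import re

def score_sentiment(text):
    """Toy lexicon-based sentiment score, clamped to [-1, 1]."""
    positive = {"great", "glad", "happy", "sure", "thanks"}
    negative = {"sorry", "unfortunately", "error", "cannot", "no"}
    words = re.findall(r"[a-z']+", text.lower())
    raw = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, 5.0 * raw / max(len(words), 1)))

def animation_cue(sentiment):
    """Pick a body-language cue to accompany the spoken response."""
    if sentiment > 0.3:
        return "smile_nod"
    if sentiment < -0.3:
        return "apologetic_tilt"
    return "neutral_idle"

# The assistant's outgoing utterance drives its own body language.
response = "Sorry, I cannot find that restaurant."
cue = animation_cue(score_sentiment(response))
```

In a production system the sentiment score would come from a trained model rather than a word list, but the shape of the loop - language out, expression out alongside it - is the same.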

We caught up with Mark ahead of the summit in January to hear more about Botanic and what we can expect to see in the future of virtual assistants.

Can you tell us why you founded Botanic and what problem you are trying to solve?
We're trying to fix the future. There's something kind of like global warming going on, and you've heard it in your phone: robots that pester us, frustrate us, follow us, and waste our time. The problem is that we're gradually being surrounded by an army of uncanny monstrosities. We could simply call this a design problem, but it goes deeper than that: the emerging field of virtual assistants - and software robotics in general - has to address questions of privacy and surveillance, marketing and advertising, and identity itself. So Botanic is building humane machines.

What are the key factors that have enabled recent advancements in virtual assistants?
The key factor is a circuit of tools that solve a need.

We've got more tools, for one. We now have access to expensive and complex systems like NLP toolkits, cognitive computing ecosystems, mature online services, and libraries for stuff like voice recognition, connected devices, etc. Oh, and APIs for everything from sentiment analysis to sensor integration. So we've got more stuff to work with.

Second, there's this sort of understanding in the industry that our computer interfaces can be simplified, and that's great. We all seem to be grokking that our interface to the Internet (and even the world around us) can be simplified, and not only can we get more done faster, but virtual assistants can both make and save us money. So there's a need.

I don't know if the tools are producing a need, or if the need is producing the tools, but this tools / need circuit is spinning faster and faster, so we're at the launch of a long, upward flight.

What are the main applications for virtual assistants at the moment?
'Assistance' is the simplest answer. But this applies to a ton of use cases like customer relations, healthcare, education, entertainment, advertising and, mostly, operating system interfaces. Siri, Cortana, and several others sort of dress up like assistants, but they're really a layer, kind of like the GUI, that allows us to operate our phones better. Other systems, like Amazon's Echo, are there to collect our information while helping us play music, learn about stuff like the weather, and make shopping lists. So we're seeing a pretty fertile bloom of assistance in a range of industries.

Which new verticals and industries will this expand to in the future?
Where won't they expand!? If we consider virtual assistants as a branch of software robotics then we're on the doorstep of some crazy expansion.

Anywhere we currently have one person giving information to another, via some media, seems like a potential use case. But there are a lot more use cases than just knowledge-workers, because virtual assistants will allow us to provide information where people aren't, as well. So I'm seeing virtual assistants squatting not just on our phones, but on websites, televisions, in our homes, in our cars, and, as we saw in August of 2015 in a London bus station, on city streets. We'll talk with them on our wearables; we'll have them built into tractors, combines, airplanes, boats, elevators, and robots that are in both factories and homes. They'll be used to interface with knowledge-bases like Wikipedia, and they'll appear as the front-end of civic infrastructure management, in schools, hospitals, police stations, front-desk offices, and next to our beds when we go to sleep at night. What's weird for me is that if natural language interfaces are a trend that's emerging to overlay the GUI (just as the GUI overlaid command-lines), then anywhere computers are appearing, virtual assistants will be appearing as well. We're moving into a world that will be filled with virtual assistants because this is an interface more than it is an entity.

What developments can we expect to see in virtual assistants in the next 5 years?
Affective, graphical, and contextual are the top three I see emerging now.

First, the general school of affective computing is important because it associates our decisions with how we feel - and, let's face it, most of our decisions are emotional. If the system is pegging our mood as we make a decision, that not only helps developers and providers improve products and services, it means the experience is smoother and more consistent for the end user. Today there are many systems that allow virtual assistants to sense our emotions, and they will only get better and more accurate. Meanwhile, we'll find that the more affective a virtual assistant is, the easier it is for us to understand that system. So I'm pointing to emotions as both a form of input, from the user, and a form of output, from the virtual assistant.
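As a toy illustration of emotion-as-input shaping output, the fragment below maps a hypothetical valence/arousal reading of the user's mood to a response register. The class and style names are invented for this sketch; real affective systems infer these signals from voice, face, or text:

```python
from dataclasses import dataclass

@dataclass
class AffectiveReading:
    """A minimal two-axis mood estimate (an assumed, illustrative model)."""
    valence: float  # -1 (negative) .. 1 (positive)
    arousal: float  #  0 (calm)     .. 1 (agitated)

def choose_response_style(reading: AffectiveReading) -> str:
    """Map a detected user mood to an output register for the assistant."""
    if reading.valence < -0.3 and reading.arousal > 0.6:
        return "calm_and_apologetic"  # frustrated user: de-escalate
    if reading.valence > 0.3:
        return "upbeat"               # happy user: mirror the energy
    return "neutral"

# A frustrated user (very negative, highly agitated) gets de-escalation.
style = choose_response_style(AffectiveReading(valence=-0.8, arousal=0.9))
```

The point of the sketch is the symmetry: the same affective channel is read on the way in and performed on the way out.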

Second, a picture is worth a thousand words. So we'll see more pictures and graphical interfaces integrated with virtual assistant interaction. Also, communication's this really visual thing as stuff like body language and gestures are a part of how we humans communicate. Natural language processing needs to be considered something that includes body language and tone of voice. Language is about more than words so we'll see more body language and visual expression of language. It looks like virtual assistants will adopt graphical conventions to better communicate and I suspect they'll start to be able to read our body language, too. Some of them - like USC's Ellie, or the stuff we develop at Botanic - can already. Like affective elements, graphical elements become both a form of input, from the user, and a form of output, from the virtual assistant. Virtual assistants are the indigenous citizens of virtual reality; they'll live in systems like Oculus, Magic Leap, and HoloLens, and develop into visual personalities.

Third, context is key to a good conversation. Virtual assistants are getting better at building context from user state: past interactions, the current interaction, and projected probabilities. So context is built out of all of these things - where we are, when it is, what we've done, and what we're doing. Lining up these platters of information and finding where they overlap - associating user data - helps build that context, and that's key to a good interaction. So contextualization is improving, and I can only think it will get stronger in the next 5 years.
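The idea of overlapping "platters" of information can be sketched as a simple context-assembly step. Everything here - the field names and the topic-frequency heuristic - is an invented illustration, not a description of any shipping assistant:

```python
from datetime import datetime

def build_context(location, history, current_utterance):
    """Overlay location, time, past turns, and the current turn."""
    context = {
        "where": location,
        "when": datetime.now().strftime("%A %H:%M"),
        "recent_topics": [turn["topic"] for turn in history[-3:]],
        "utterance": current_utterance,
    }
    # Simple projection: if the user keeps returning to a topic,
    # raise its prior for interpreting the next request.
    topic_counts = {}
    for turn in history:
        topic_counts[turn["topic"]] = topic_counts.get(turn["topic"], 0) + 1
    context["likely_topic"] = (
        max(topic_counts, key=topic_counts.get) if topic_counts else None
    )
    return context

history = [{"topic": "restaurants"}, {"topic": "weather"}, {"topic": "restaurants"}]
ctx = build_context("San Francisco", history, "book a table for two")
```

Where the platters overlap - a restaurant-heavy history plus a booking request - the assistant has grounds to interpret "a table for two" as a restaurant reservation.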

What advancements excite you most in the field?
The aesthetics of personality. OK, recently we got our own system to transfer bitcoins via voice (which means virtual assistants can now sell things like movie tickets, plane tickets, goods, and services), but what really excites me are the aesthetics of personality.

I love figuring out how to embody AI; how to give it a tasteful, engaging, and humane personality. After all, personality is the UX of these systems, and that's directly linked to aesthetic integrity. Now, just to be clear, aesthetic integrity doesn't measure the beauty of an agent's art or style; it measures how well a virtual assistant's appearance and behavior integrate with its function to communicate coherently. That's about coordinating a bunch of features - visual, auditory, behavioral, cognitive, affective, graphical, and so on. Coordinating this stuff means we have to think about when a virtual assistant should be funny or serious, severe or soft, and how it presents those personality traits to the end user. After all, virtual assistants aren't just about task completion: they're also about providing companionship. And that's pretty exciting, pretty weird, at times sad, and an important design task to work on.

As a side note (and I don't know if "excitement" is the feeling or if it's more of a concern), there's this dialogue in the media about the robot apocalypse and The Singularity and when the Terminator is going to come down the chimney and bust into the bedroom. Well, that's fun to chew on, but we really need to ask ourselves about the source of these concerns. People design these systems. *WE* design these systems. So it's up to *US* to make them things we want to live with. We determine the relationship we'll have with them. They are us. We are the robots. And that's fun and not terribly scary. I'm looking forward to the conference. We're at the forefront of an amazing new medium, and it's great to get a chance to meet the folks at the fore and discuss how best to make these things.

Mark Stephen Meadows will be speaking at the RE•WORK Virtual Assistant Summit in San Francisco on 28-29 January. Early Bird tickets are available until 4 December; for more information, visit the event page.

For further discussions on Virtual Assistants, Deep Learning, AI & more, join our group here!
