CES 2018: Artificial intelligence needs to get to know you better

In order to better predict users' needs, AI is hungry for more, and more personal, data, say experts from Amazon, Microsoft and IBM.

Plug in an Amazon Echo and it’s obvious that artificial intelligence is using algorithms to produce responses. But drivers in Singapore probably don’t realize that traffic signal patterns are being determined by AI that examines data from street cameras. In the next few years, AI will become increasingly integrated into our modern lives—in ways that will be at times conspicuous and at other times unobtrusive. AI will get to know you.

So said a trio of AI experts who work on some of the best-recognized digital AI assistants—Amazon’s Alexa, Microsoft’s Cortana and IBM’s Watson—at the "Future of AI" panel Thursday morning at CES 2018 in Las Vegas, Nevada.

"It’s about friction removal. Whatever evolution of computing and processing, every step is less friction than the step before," said Cameron Clayton, general manager of IBM Watson Content & IoT Platform. Right now, interaction with AI is typically human-initiated. Users ask Alexa a question and she answers. Researchers query Watson. "A lot of the things we’re working on is actually a Watson-initiated experience," Clayton added. "So Watson knows and predicts when you need something, and it speaks, or it gives you data or presents a recommendation for an answer. We’re right at the infancy of this."

"The idea that software is now coming to learn about you, rather than you having to learn how the software works, is really, really powerful," said Andrew Shuman, corporate VP for Cortana engineering at Microsoft. That means AI will need to be able to read context cues. Who is in the room? Where are they standing? What gestures do they use? What are they looking at when they’re speaking? Some AI devices are already equipped with cameras that can capture more data to help understand user queries, and that’s likely to increase, the panel said, to a bit of consternation from the moderator, Avram Piltch, editorial director of Tom’s Guide and LaptopMag.

But that data is needed to take AI to the next level. "A lot of software today is fairly stupid about things you’re doing," Shuman said. "If I send an email to Harry, which Harry do I mean? You have to resolve that again and again with the same name." With the right contextual data, the interaction will both be and feel more useful, he added. "Kids expect every screen to be a touchscreen. What will it mean to expect every room to have an ambient device that you can speak to? We’ll have the context of both the physical world and, more important probably, the person themselves, really understanding the people they interact with most frequently, the projects they’re working on, the things that matter the most to them, and being able to leverage that knowledge graph of a person to do great things."

So far, most consumer interactions with AI have been through these voice-activated assistants. "If you can talk to something like it’s a human and for the most part get the response that you would expect from another human, then it’s a natural interface, there’s less friction, it takes less effort," said Al Lindsay, VP, Alexa Engine Software at Amazon. "It’s a delightful, magical experience. You just ask for something and it happens."

But AI will evolve beyond voice interfaces. Dictation can be distracting for people nearby, or embarrassing for the user. Lengthy results may be easier to read than to hear. And "voice is not the primary communication method for teenagers," Clayton noted, a major early-adopter demographic. Form factors are already beginning to change to accommodate displays, but some other visual technology that doesn’t require touch, either, may yet emerge, said Lindsay.

For anyone worried that smarter AI could lead to killer robots or dystopian VRscapes—including one questioner at the end of the panel—the experts scoffed. "The assistants of today are like 2-year-olds, really. That’s taken about 70 years," Lindsay said. "There’s no attention span. You have short interactions. You can’t actually carry on a conversation about social topics or what’s trending or what’s going on in the news with your assistant. Five years from now, will the 2-year-old be a 7-year-old? Maybe, but probably not."