If you’re looking for a pithy answer to the question "Why is artificial intelligence still so bad at replicating human speech?", then Harvard psychology professor Steven Pinker has a good one.
Describing the complexity of language at a recent talk, Pinker laid bare the scale of the task that machines face: "If a person knows about 10,000 nouns, there are 10,000 ways to begin a sentence… then 4,000 verbs. So, you have 40 million ways to begin a sentence just two words in."
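Pinker's back-of-the-envelope arithmetic is easy to check. A minimal sketch, using his round illustrative figures rather than any real lexicon count:

```python
# Pinker's round illustrative figures, not a real lexicon count.
nouns = 10_000  # roughly how many nouns a typical speaker knows
verbs = 4_000   # roughly how many verbs

# If any noun can open a sentence and any verb can follow it,
# the choices multiply rather than add:
two_word_openings = nouns * verbs
print(f"{two_word_openings:,}")  # 40,000,000 noun-verb openings
```

Each further word multiplies the total again, which is why the space of possible sentences explodes far beyond anything a machine could simply enumerate.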
During this year’s Cannes Lions International Festival of Creativity, Pinker appeared on stage alongside Adam Singolda, chief executive and founder of content discovery platform Taboola. On the face of it, it seems quite an odd pairing – what brings these two together?
"I’ve been interested in artificial intelligence since I was a grad student many decades ago," Pinker tells Campaign alongside Singolda on the eve of their Cannes talk. He is widely regarded as one of the world’s most influential scientific thinkers, having written bestselling books about cognitive psychology and human nature. He is the author of The Blank Slate, a provocative book that dismantled the belief that human minds are essentially blank canvases. This was controversial among left-leaning egalitarians, but Pinker forcefully demonstrated that we are not, in fact, born identical: key traits of human nature – including biases – are hardwired into our brains from birth.
This is deeply problematic for creating bias-free AI, Pinker explains, because the algorithms they are built on rediscover human biases through the way data is collected and analysed. But that hasn’t stopped interesting efforts to try; just this year, we have seen "genderless voice tech" and the growing popularity of using chatbot recruiters in adland.
"It really is true that a man is more likely to swat a fly with his bare hands than a woman. That’s a stereotype. It’s also true. On average, it’s true that you have a lot of Jews who grow up to become doctors. It’s a stereotype, but it’s a true stereotype on average – not of every last one," he continues.
"And for moral and political reasons, there are a lot of biases that we wish we didn’t have. Algorithms rediscover them, because algorithms pick up the statistical structure of the world. There may be cases where we decide to sacrifice statistical accuracy for fairness; that is, you don’t factor in someone’s race or religion or sex in justice or university admissions, where we decide for moral reasons to make the system essentially stupid – and thereby more moral – by stripping it of what we call bias."
Choosing bias over accuracy
Rather than trying to eliminate bias altogether, Pinker argues, the creators of AI systems will have to decide "what kind of biases we want to live with" – even if that makes the model less accurate.
"It’s a case where there is a trade-off between our moral values and intelligence in the sense of prediction accuracy; we may decide to sacrifice some accuracy to buy some fairness," he adds.
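The trade-off Pinker describes can be sketched with invented numbers: a toy "aware" model that conditions on a protected attribute, against a "blind" one that has been stripped of it. Every name and figure here is hypothetical, purely for illustration:

```python
# Invented toy data: (group, other_signal, actual_outcome).
# "group" stands in for a protected attribute such as sex or religion.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 1, 0),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 1, 0),
]

def aware(group, signal):
    # May condition on the protected attribute.
    return 1 if group == "A" else signal

def blind(group, signal):
    # Protected attribute stripped out - the "essentially stupid" model.
    return signal

def accuracy(model):
    return sum(model(g, s) == y for g, s, y in records) / len(records)

def positive_rate(model, group):
    rows = [(g, s) for g, s, _ in records if g == group]
    return sum(model(g, s) for g, s in rows) / len(rows)

for model in (aware, blind):
    gap = abs(positive_rate(model, "A") - positive_rate(model, "B"))
    print(model.__name__, accuracy(model), gap)
# On this data the aware model is more accurate (0.75 vs 0.625) but has a
# larger gap between the groups' positive rates (0.5 vs 0.25) - the
# accuracy-for-fairness trade in miniature.
```

The blind model really does get more predictions wrong; the point is that its errors are the price of treating the two groups more alike.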
Amazon learned this lesson last year when it scrapped an AI recruiting trial that showed bias against women. The system discriminated against female candidates because it had been trained on patterns in CVs submitted to the company over a 10-year period – most of which came from men.
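Amazon's system was never published, but the mechanism – a model rediscovering bias through proxy words, with gender never an explicit input – can be illustrated with a deliberately tiny, invented dataset:

```python
# Toy illustration (NOT Amazon's actual system) of bias via proxy features.
from collections import Counter

# Hypothetical training data: (CV keywords, was the candidate hired?).
# Past hiring skewed male, so a word correlated with female candidates
# co-occurs with rejections.
history = [
    ({"java", "chess_club"}, True),
    ({"python", "rugby"}, True),
    ({"java", "rugby"}, True),
    ({"python", "womens_chess_club"}, False),
    ({"java", "womens_chess_club"}, False),
]

# Score each word by how often it appears in hired vs rejected CVs.
hired_counts, rejected_counts = Counter(), Counter()
for words, hired in history:
    (hired_counts if hired else rejected_counts).update(words)

def word_score(word):
    return hired_counts[word] - rejected_counts[word]

def cv_score(words):
    return sum(word_score(w) for w in words)

# "womens_chess_club" is penalised purely because the historical data
# skewed male - gender itself was never a feature.
print(word_score("womens_chess_club"))  # -2
print(word_score("chess_club"))         # 1
```

The model has no concept of gender; it simply picks up the statistical structure of its skewed training data, which is exactly Pinker's point about algorithms rediscovering bias.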
It’s a much more personal issue for Singolda, who maintains that AI will have the biggest impact on his business by making it harder for spammers to create fake news. Taboola helps publishers monetise content by creating personalised content recommendations (presented as a box of articles titled "around the web" at the bottom of news articles).
"It’s worth it because my partners are NBC News. People trust NBC News as a reputable brand. I can’t afford NBC News recommending at the bottom of the article that you may like 'If you take this thing, then this will happen to you'. It’s a mistake I can’t afford," Singolda warns.
On the topic of bias in AI, Singolda makes the self-evident point that "garbage in means garbage out", but then provocatively suggests: "Maybe we should avoid questions altogether?"
He explains: "Maybe Alexa will suggest questions to us that we should have asked her… it’s all those moments that I can suggest things you might like to do but you never knew existed. An AI has an unfair advantage by comparing you to millions of people, or maybe hundreds of millions of people."
Human creativity versus artificial inspiration
While the idea of a know-it-all Alexa is mildly horrifying, it raises the question of how far consumers will be willing to trust machines to make decisions on their behalf. Instead of merely trusting machines not to harm us when performing limited tasks, we will be asked to take the precarious step of accepting that machines may know better than we do what is good for us.
Pinker is sceptical, making clear that AI’s limits are bound by our innate insistence on caring about where an idea comes from, no matter how good that idea may be.
"It’s not enough just to accomplish something. Take the difference between a work of art and a very convincing copy: even though they may be identical, we care about whether it was really painted by da Vinci or by a student. Or an example from [Yale University psychologist] Paul Bloom: John F Kennedy’s golf clubs were sold at auction for millions of dollars. They’re just golf clubs – and if it turned out that they weren’t actually used by John F Kennedy, their value would become virtually zero."
This psychological need to know where something comes from has profound (and perhaps reassuring) implications for the creative industries. In recent years, there have been myriad stories about AIs creating symphonies, poetry and even rap music. Some agencies, such as McCann Japan, have tried to lean in to the "threat" by developing an AI creative director, while others, such as David Miami, have chosen to pour cold water on the whole idea through their Burger King campaigns.
AI can deliver the long tail
Pinker insists that humans need to have a psychological connection to where ideas and things come from – and this even applies to something as seemingly minor as why people generally prefer wearing leather over faux-leather.
He explains: "We’re all wearing cotton, linen, wool, leather – the same as people did 600 years ago. Yeah, we do have synthetic substitutes – some polyester, artificial leather; they’re pretty similar and physicists would be hard-pressed to tell you what the difference is between polyester products in terms of strength, flexibility. But we can sense that little difference. And it makes a big difference to us as users.
"And it could be that algorithms get 99% of human creativity, [but] we’re going to really care about that remaining 1%. [To us] They’re just going to feel fake; it’s going to be cheesy, it’s going to feel artificial."
Taking this notion back to advertising: rather than media owners or creatives, it may be media buyers who end up getting the most utility from AI, through its ability to analyse and independently iterate its models.
Singolda explains: "For decades, advertisers bought campaigns – at an annual spend of about half a trillion dollars – by having an idea and an opinion about who their target audience is, and advertising against that target audience. With AI now, advertisers are surprised and delighted to discover that there are many pockets of potential clients."
He makes the point that 20th-century ways of doing business made it impossible for media owners to capture the long tail of advertisers.
He continues: "There was usually a 1,000-person sample or something like that. The entire TV industry’s ratings are based on about 3,000 people having a box at home. So, by definition, the sample size is not good enough to find a long tail of potential clients. So we do see, when you use AI, an opportunity for an advertiser to say ‘Here’s my story, find people that like it’ versus ‘Tell us ahead of time who to go after’."