Learning without prejudice: the new job spec for AI
A view from Liz Wilson

The growing use of AI risks perpetuating long-held social biases.

Artificial intelligence has well and truly established itself in the world of brand communications. It is now more accessible to marketers than ever before and plays an important role in many aspects of customer service and experience, more so than consumers even realise.

The technology is now a brand’s hot ticket to understanding consumers, personalising ads and marketing content, and getting in front of the right customer. According to recent Accenture research, 80% of managers believe that AI will take on a bigger role in the next two years, working alongside humans to determine how we deliver our brands to consumers.

Since the Advertising Standards Authority cracked down on ads that portray harmful gender stereotypes and banned those likely to cause serious offence, ad breaks have been devoid of men with their feet up while women clean, along with other long-standing gender tropes. It’s a giant breath of fresh air for the ad industry and our viewers, but as the use of AI grows there’s a greater risk that we will continue to perpetuate long-held social biases and assumptions, quietly backtracking on the progress we’ve been making.

AI offers huge opportunities for the advertising game, yet it’s only as smart as the information we feed it. Biased AI – fuelled by a lack of diversity in data sets and among the people behind the algorithms – could, if we’re not careful, set us all up to fail. An AI that pushes out misguided, tone-deaf insights, strategies and stereotypes will produce campaigns destined for the ASA’s dustbin.

Time and again, we see how easy it is for our thinking to reveal hidden assumptions. A recent study into UK knife crime found that 25% of victims were women and children. While shocking, this was the only statistic to feature in most of the news coverage, without mention of the 75% of victims who were men. There is an underlying premise here that it’s "more normal" for men to be stabbed. 

It’s easy to see how unconsciously held assumptions like this could catch AI out, reinforcing the very stereotypes advertisers are trying to avoid, damaging a brand’s reputation and ultimately discouraging people from reaching their full potential.

While it’s great that new tech can power fresh kinds of creativity across the industry, it’s vital to understand that machines aren’t inherently impartial. If we don’t check in on our biases and scrutinise our processes, we could unwittingly inject our own cultural assumptions into tech. This could result in products that give some groups an unfair advantage and make issues such as gender inequality worse rather than better. It’s time for us to move on from the fearful "the robots are coming" narrative and instead take charge of how AI is used, moulding it to ensure equality and fairness for all.

And we humans are the key to putting this right. 

It’s time we became the gatekeepers against prejudice and used AI as an opportunity to eradicate discrimination before it starts. There are three key areas for us to focus on.

Make adland a microcosm of real society

Our first task is to ensure that the people developing this technology for future use are a diverse mix. Organisations are already seeking to shake up their talent pools and encourage under-represented groups – especially women from a variety of social backgrounds – to take up tech and creative jobs, developing skills in data analysis, AI and automation. A huge variety of roles need this hybrid creative/technical skillset, and we must keep looking at what we can do to nurture existing workers and attract new employees. We need to show them just how rewarding a career in the industry can be.

Unilever was ahead of the curve when, in 2016, it issued a rallying cry to the marketing industry by launching its "Unstereotype" campaign to eradicate stereotypes in advertising. Then, this year, it put its own staff under the microscope – offering them DNA profiling to get a better understanding of their own diversity in an attempt to end unconscious bias. Not only did the experiment reduce stereotypical thinking and content, it also improved original thinking (isn’t this our raison d'être, after all?).

Check and cross-check our inputs

Secondly, we need to continually review how representative the content and data inputs are. We must ask ourselves: are we collecting the right data from consumers and in relevant and responsible ways? If drawing on pre-existing data sets, are these fit for purpose and do they properly represent our customers? What about our content? If stock image libraries are full of photos of only women cooking or just men at war, then the creative assets the AI reaches for will perpetuate outdated stereotypes.
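
By way of illustration, a representativeness check can be as simple as counting who appears in which contexts. The sketch below is a minimal, hypothetical Python example: the asset records, their "gender" and "tags" fields, and the 75% flagging threshold are all assumptions standing in for whatever metadata a real image library would expose.

    from collections import Counter

    # A sketch of a representation audit. The metadata fields ("gender", "tags")
    # and the 75% threshold are illustrative assumptions, not a real library's API.
    def audit_representation(assets, attribute, context):
        """Share of each attribute value among assets carrying a context tag."""
        counts = Counter(
            asset[attribute] for asset in assets if context in asset.get("tags", [])
        )
        total = sum(counts.values())
        return {value: count / total for value, count in counts.items()}

    assets = [
        {"gender": "female", "tags": ["cooking", "kitchen"]},
        {"gender": "female", "tags": ["cooking"]},
        {"gender": "female", "tags": ["cooking"]},
        {"gender": "female", "tags": ["cooking"]},
        {"gender": "male", "tags": ["cooking"]},
        {"gender": "male", "tags": ["military"]},
    ]

    for group, share in audit_representation(assets, "gender", "cooking").items():
        if share > 0.75:  # flag contexts dominated by a single group
            print(f"Warning: '{group}' appears in {share:.0%} of 'cooking' assets")

Run regularly against a live asset library, a check like this surfaces lopsided imagery before the AI starts reaching for it.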

Manage AI as an employee

Thirdly, on a very practical level, we need to manage AI just as we would any colleague. The tech must be educated, coached and monitored to ensure it performs as intended, with adjustments made where necessary.
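
What might that monitoring look like in practice? A common starting point is to compare outcomes across audience segments, as in the minimal Python sketch below. The log format and the 20% review threshold are illustrative assumptions, and demographic parity is only one of several fairness measures a team might choose.

    # A sketch of routine fairness monitoring. The log records and the 20%
    # review threshold are illustrative assumptions, not a standard or an API.
    def demographic_parity_gap(logs):
        """Gap between the highest and lowest rate at which audience
        segments receive the positive outcome (here, being served an ad)."""
        totals = {}
        for record in logs:
            served, seen = totals.get(record["segment"], (0, 0))
            totals[record["segment"]] = (served + record["served"], seen + 1)
        shares = {seg: served / seen for seg, (served, seen) in totals.items()}
        return max(shares.values()) - min(shares.values()), shares

    logs = [
        {"segment": "women", "served": 1}, {"segment": "women", "served": 0},
        {"segment": "men", "served": 1}, {"segment": "men", "served": 1},
    ]

    gap, shares = demographic_parity_gap(logs)
    print(shares)  # {'women': 0.5, 'men': 1.0}
    if gap > 0.2:  # review threshold is an assumption
        print(f"Parity gap of {gap:.0%} - time for coaching and retraining")

Treated as a recurring performance review rather than a one-off test, a check like this tells us when our new "employee" needs further coaching.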

While we have the ASA, there’s no regulatory framework for AI in our industry. It’s our responsibility to use AI responsibly and to set it up to deliver a positive and equal future for all. To evolve beyond classic stereotypes and tropes, any AI underpinning the insights that guide creative thinking must consider the world through a variety of lenses and, from the outset, include diverse mindsets across the breadth of our industry.

Liz Wilson is chief operating officer at Karmarama

Picture: A 1950s washing-up liquid ad (Getty Images)