To treat consumers fairly, brands need to use responsible AI
A view from Athina Kanioura

With MWC 2019 taking place, it's important to highlight that responsible AI is no longer a 'nice to have'.

Artificial intelligence and intelligent technologies are becoming ingrained in people’s lives and jobs. With this comes growing interest in how people experience these technologies and how they affect us more broadly – from our health to the way society works around us. 

In 2018, organisations worldwide spent almost $24bn (£18.2bn) on AI systems, according to IDC. Clearly, AI is no longer an amorphous, catch-all concept. Large businesses, start-ups and academic researchers have all been busy experimenting with the technology, starting with figuring out how machine learning, computer vision and natural language processing fit together with advanced data analytics and automation. From there, they can start to think about application.

The reality is that there’s a plethora of intelligent applications and devices out there, many of which companies and their leaders will showcase and debate at MWC 2019. This is a great opportunity to see what’s right for a business and take some inspiration from these applications. 

Looking after big issues

The more sophisticated AI becomes, the more it can be applied to serve the common good. According to Accenture Strategy, customers are no longer making decisions solely based on product quality or price; they’re assessing what a brand says, what it does and what it stands for. AI used in the right way means meeting customer expectations as well as helping your bottom line.

Take the Madrid metro, which moves two million commuters along 300km of track and through 300 stations every day. It’s using AI to reduce energy consumption and environmental impact. The self-learning ventilation system has helped reduce its energy costs for ventilation by 25% and cut CO2 emissions by 1,800 tons annually. 

Another example shines a light on the dark web. The trafficking of illegal drugs is increasingly moving online. However, image recognition, text extraction and deep embedded clustering give law enforcement agencies the opportunity to discover where specific narcotics are being sold on the dark web and in what quantities. They can also spot "narcotics marketing" trends, such as the purity of a drug, and compare global and local drug popularity – all contributing to a more complete picture of what they’re up against. By the same token, AI can be used to combat fraudulent advertising.

New ways to serve

A lot of AI applications are shaping the future of customer service.

Companies across industries are implementing AI to monitor what users do, allowing the machine to jump right in with help if there is a problem. Far from being creepy, this can save a lot of frustration and instead win points for excellent customer service. For example, a large telecommunications provider uses AI to spot when a customer’s internet drops or their streaming slows down and automatically connects them to a technician to help them resolve the problem.

Brands and retailers have an opportunity to gather lots of the right kinds of consumer data using different types of technology. For example, Kellogg embedded eye-tracking technology into a virtual-reality headset. Shoppers are immersed in a full-scale, simulated store, while the technology analyses how they move through the space, pick up products and place them in their trolleys. After analysis, Kellogg was able to see the most effective location for new products and increased sales by 18% during testing.

An ethical guardian

Trust, transparency and responsibility are big issues for every brand around the world; they are what consumers expect. Brands carry a greater obligation than ever to use technology responsibly – and AI, used well, can itself help to achieve fairness.

Responsible use of AI means pulling the plug immediately when we see human prejudice creeping into algorithms – as Amazon did last year with a recruiting tool that was biased against women. But how can we prevent discriminatory AI outputs in the first place?

First, by not fooling ourselves into believing that it is AI’s fault. Algorithms and models are developed by people; they learn and act on data generated by how we live, work and do business. Second, by educating those who build and configure AI systems on its responsible use. Third, by equipping them with tools and methods that discover blind spots and unfair results before they do harm. This applies not just to data scientists and developers, but also to business users and leaders, all the way up to the board.
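To make the third point concrete, a minimal sketch of one such method is a "demographic parity" check, which compares favourable-outcome rates across groups. The function and data below are purely illustrative, not a reference to any specific tool mentioned in this article:

```python
def demographic_parity_gap(outcomes, groups):
    """Gap between the highest and lowest favourable-outcome rates across groups.

    outcomes: 0/1 model decisions (1 = favourable, e.g. "invite to interview")
    groups:   group label for each decision, same length as outcomes
    """
    tallies = {}  # group -> (count, favourable count)
    for outcome, group in zip(outcomes, groups):
        n, pos = tallies.get(group, (0, 0))
        tallies[group] = (n + 1, pos + outcome)
    rates = {g: pos / n for g, (n, pos) in tallies.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical screening model that favours group "A": 4/5 vs 1/5 favourable
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)  # 0.8 - 0.2 = 0.6
```

A large gap does not prove bias on its own, but it flags the model for human review before its decisions do harm – the kind of early-warning check the paragraph above describes.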

An AI misstep can break bonds and damage trust between companies and people. This means responsible AI is no longer a "nice to have". It is imperative for building and reinforcing the trust that organisations need to drive success and scale AI with confidence.

Dr Athina Kanioura is chief analytics officer and global lead at Accenture Applied Intelligence