Am I being played by my own game?
A view from Nik Roope

Brands need to consider the ethics of AI, even if the rest of the world has yet to catch up.

I admit it. I’m one of those Two Dots junkies. Sitting on the morning Tube, podcast in the ears, Two Dots on the screen. One of the tens of millions seduced by its myriad levels of lush, varied ludic conundrums.

The game design is so good that, even when I’m winning, I feel like I’m being "played" by the game, not the other way around. When I’m getting beaten and attempts start to trail off as determination wanes, I suddenly seem to get served an easier round, just to get me over the line. That tiny rush of success buoys my confidence for the next level.

It starts to feel like I’m being handled like my five-year-old daughter at sports day, where everyone gets a medal, even those who came last, so nobody goes home deflated. In a game, if I win, I want to really win. Not just get a little help over the line, the way a mum or dad might fluff a ping-pong point on purpose just to hand their child the sweet taste of victory.
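To be clear, I have no idea what’s under Two Dots’ bonnet; nobody outside the studio does. But the quiet difficulty-juggling I’m describing, what game designers call "rubber-banding", is trivially easy to build. Here’s a purely hypothetical sketch in Python; every name and threshold is invented for illustration, not taken from any real game:

```python
# Purely hypothetical sketch of dynamic difficulty adjustment.
# Names and thresholds are invented for illustration only;
# this is not Two Dots' actual code.

def next_level_difficulty(recent_attempts, base_difficulty):
    """Pick a difficulty for the next round.

    recent_attempts: list of booleans, newest last, True = player won.
    base_difficulty: the designer's intended difficulty (1 = easiest).
    """
    losing_streak = 0
    for won in reversed(recent_attempts):
        if won:
            break
        losing_streak += 1
    # After three straight losses, quietly serve an easier round to
    # hand the player a win and keep them coming back.
    if losing_streak >= 3:
        return max(1, base_difficulty - 1)
    return base_difficulty

# Example: four losses in a row triggers the helping hand.
print(next_level_difficulty([True, False, False, False, False], 5))  # -> 4
```

A dozen lines is all it takes for a game to blink first without the player ever knowing.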

After a bit of digging, I could only find references to Two Dots’ "great design" by Dots, the New York studio behind it. No mention of a friendly-but-fierce algorithm that could read its players like a book and know intimately every switch to flick to provide the most compelling experience possible; that could sustain attention and habitual use; that could drive the most purchases of revenue-generating power-ups (last month, Two Dots generated $400,000 in revenue).

But you know what, it actually doesn’t matter whether Two Dots does or doesn’t employ some serious machine learning in its reading of, and response to, every user nuance. The fact is that we live in an age when we must ask: am I winning, or am I being allowed to win just to string me along?

Are the emotions generated by my interactions really tracking the usual cause-and-effect relationships I know from reality, or are those relationships being warped to create a "personalised" journey, optimised not for my health and happiness but in the service of someone else’s KPI?

We’re at a point when algorithms can predict a break-up before the couple even know it themselves, based on observable patterns in their interactions. We may feel like individuals with our own free will, but with a big enough sample and a super-brain chomping through the data, the signs and patterns in these signals can predict those seemingly free choices with uncanny accuracy.

And now, with more data points than ever, suddenly we can reliably spot the nuances and market against them in real time. But should we just assume that’s OK?

Crossing the line?

We are already using these complex correlations, mixed with other contextual triggers and cues, to target people. But is there a line we shouldn’t cross? Can we market brollies more aggressively when we know it’s about to rain where you are, but not start peddling homeopathic antidepressants when we can tell you’re about to go through a psychological rough patch?

The advertising business is here because of our manipulation chops. Why would we get hired otherwise? Our ideas, creativity and skill have long been deployed into persuasions and seductions that use psychological insights along with the pen to claw attention, stir emotion and win preference. In the age of artificial intelligence, is this just more of the same or has something more profound occurred to make us stop and take stock?

UK AI companies have raised £1.87bn since 2011, giving you a sense of how much gold is buried down there in AI’s future. There’s clearly huge value attributed to the potential efficiencies and power it can confer on businesses that successfully harness it. So now is probably the time to assess whether some of this potential power could be as abusive as it is enhancing.

The Facebook/Cambridge Analytica case this year showed in the stark light of day how lethal the mix of power, scale and naivety can be. I don’t think we can afford to be several years behind this dawning reality as it scales and takes hold.

The Advertising Standards Authority has done a great job over the years of maintaining clear lines between what is and isn’t an ad and what is and isn’t a claimable truth, predicated on respect for the public’s right not to be fibbed to or befuddled in the process of being sold to.

But what of these new, more pervasive manipulations? If I know you better than you know yourself and my steadfast behavioural predictions guarantee that I can get you to buy my chocolate bar, should I be allowed to conduct that orchestra of interactions? Should brands be allowed to be the public’s puppeteers?

So brands shouldn’t just be looking for ways AI can enhance their businesses. They need to be thinking about an ethical framework that limits their forays into problematic uses of AI, while the rest of the world catches up.

Nik Roope is co-founder and executive creative director at Poke