Opinion

Big data is about to go mega and the results could be evil

Over the weekend, The Economist declared that Moore's Law is finished, writes Andrew Pemberton, director of Furthr.

AlphaGo: Google's Go-playing supercomputer trounced a human competitor this week

As you know, this is the big idea that has driven tech development for the last 44 years or so.

(Recap: Moore's Law dictates that the number of transistors in a dense integrated circuit doubles approximately every two years. It's what gave us the smartphone: the supercomputer in our pocket, the most important single device of the 21st century.)
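To give that recap a sense of scale, here is a back-of-the-envelope sketch in Python. The 44-year span and the two-year doubling period come from the paragraphs above; everything else is simple arithmetic, not a figure from the article.

```python
# Rough arithmetic only: assume a clean two-year doubling over 44 years.
years = 44
doubling_period_years = 2

doublings = years / doubling_period_years  # 22 doublings
growth_factor = 2 ** doublings             # 2^22, roughly 4.2 million

print(f"{doublings:.0f} doublings -> about {growth_factor:,.0f}x more transistors")
```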

Today, tech money needs to find new places to play. So we can all expect to hear loads and loads about things like virtual reality goggles.

But one of the places that is already attracting a lot of investment and generating activity is big data.

Or, more accurately, the algorithms that read that data and make predictions about future behaviour based on it.

They have names like naïve Bayes for sentiment analysis, ALS (alternating least squares) for recommender systems, logistic regression for classification, and ARIMA for time series forecasting.
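To make that less abstract, here is a minimal sketch of the classification case, using logistic regression via scikit-learn. The toy data and the "will they buy?" framing are invented for illustration; nothing here comes from the article.

```python
# A minimal logistic regression classifier: fit on labelled past behaviour,
# then predict the label (and probability) for behaviour it hasn't seen.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented toy data: two features per user (say, visits per week and
# average session minutes) and a binary label (bought: 1, didn't: 0).
X = np.array([[1, 2.0], [8, 10.5], [2, 1.5], [9, 12.0], [3, 2.5], [7, 9.0]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression()
model.fit(X, y)

# Score a new user the model has never seen.
new_user = np.array([[6, 8.0]])
print(model.predict(new_user))        # predicted class, e.g. [1]
print(model.predict_proba(new_user))  # probability of each class
```

The pattern is always the same: fit on past behaviour, predict future behaviour. The systems discussed below differ mainly in scale, not in kind.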

Google's Go-playing supercomputer - which trashed a human adversary last week - is one such algorithm. Google search is another (the price of keywords is, in fact, a prediction of the future value of search terms to your business).

What does this mean for marketing? Well, programmatic isn't going anywhere. In fact, it might take your job. 

What does this mean for the rest of us? Ah, now it gets truly interesting. A lot of people think we are opening the door to all kinds of problems. Very, very serious problems. 

Who profits from predicting the future?

If you can predict behaviour, you can shape it. And if you sell that information, you are a member of a way-too-privileged elite that knows much more than anyone else and can get rich - or much, much worse - off the back of what it knows about our behaviour. Harvard academic Shoshana Zuboff calls this whole process "surveillance capitalism." Read her paper. It'll scare the pants off you.

Is she being over the top? Well, as she points out, society is constructed on the expectation of uncertainty. Trust, society, family, legitimate authority, contracts, even "free will" - all are predicated on the idea that we have no idea what is around the corner.

But what if we do? Google is the biggest aggregation of human intention the world has ever seen. When you stare into the homepage of Google Trends or Google AdWords, you are looking into the near future: it's better at predicting what we are going to do than anyone else right now. Searches for candidates in the New Hampshire primary mapped onto the final result.

Ultimately, Google wants to remove uncertainty. And that's a problem. 

"When we eliminate uncertainty, we swap the benefits of an always-unknown future in favour of perpetual compliance with someone else’s plan," writes Shoshana Zuboff.

Whose plan? Google's plan. And what is that plan? Get more data, turn it into surveillance assets, master it as prediction, sell it into exclusive markets for future behaviour, and ultimately transform it into profits.

Google's prediction engine has the power to change our behaviour to serve its data-gathering needs - and that erodes our free will. It could also be a threat to capitalism itself and, given the massive scale that Google works at, a challenge to our sovereignty as peoples.

That's why people are already talking about "ethical algorithms".

Sure, Google once declared it would not "be evil", but last year it famously abandoned that position.

As tech and the money that powers it moves further and further into the world of prediction, the rest of us need to catch up and regulate their innovations before we all live to regret it.

We have to make sure they won't be evil. And make no mistake, they won't do it on their own.