How the 'Trump track' at SXSW tackled tech totalitarianism
A view from Dan Machen

A global debate is raging around the combined use of behavioral modeling, social media insights, big data analytics and programmatic placement, writes the innovation director at HeyHuman.

Given Donald Trump's tumultuous first months in office, SXSW dedicated a large number of talks to topics surrounding his presidency. Not since Ed Snowden's revelations about the NSA's bulk data collection has there been such a significant addition to SXSW content as the "Trump track." And that's not where the Snowden and Trump comparison ends. One week after Snowden's disclosures, sales of George Orwell's "1984" increased 7,000 percent. Similarly, the classic saw a "Trump bump" during inauguration week this year and became the No. 1 bestseller on Amazon.

So what do surges in dystopian novel sales reflect? In many ways, it's people arming themselves for what they perceive as an adversarial future. So apart from Donald Trump's election, what else is in play to raise such concerns? Considering what I saw at SXSW, it's a perfect storm which, left unchecked, could enable a form of tech totalitarianism. The threads involved encompass behavioral modeling, micro-targeting, big data and programmatic placement, empowered by messaging iteration that machine learning accelerates and amplifies. And as if this weren't complex enough, there is also fake news to consider.

The forces at play here are formidable and remarkable in their alleged role in recent global events. The firm Cambridge Analytica has been implicated in affecting the results of both the UK's Brexit referendum and the US presidential election. According to Forbes, the firm helped Jared Kushner swing the election for Donald Trump:

"Kushner's crew was able to tap into the Republican National Committee's data machine, and it hired targeting partners like Cambridge Analytica to map voter universes and identify which parts of the Trump platform mattered most: trade, immigration or change ... For fundraising they turned to machine learning, installing digital marketing companies on a trading floor to make them compete for business. Ineffective ads were killed in minutes, while successful ones scaled. The campaign was sending more than 100,000 uniquely tweaked ads to targeted voters each day."
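The "kill ineffective ads in minutes, scale successful ones" loop described above is, in essence, a multi-armed bandit problem. As an illustration only (not the campaign's actual system), a minimal epsilon-greedy sketch in Python shows the mechanic: mostly serve the variant with the best observed click-through rate, occasionally explore the others. The variant names and click-through rates here are hypothetical.

```python
import random

def epsilon_greedy_ad_test(ad_ctrs, rounds=10000, epsilon=0.1, seed=42):
    """Serve ad variants; exploit the best observed CTR, explore at rate epsilon.
    ad_ctrs holds each variant's hidden true click-through rate (a simulation
    stand-in for real audience response)."""
    rng = random.Random(seed)
    shown = [0] * len(ad_ctrs)   # impressions per variant
    clicks = [0] * len(ad_ctrs)  # clicks per variant
    for _ in range(rounds):
        if rng.random() < epsilon:
            # Explore: try a random variant so nothing is written off too early.
            i = rng.randrange(len(ad_ctrs))
        else:
            # Exploit: pick the variant with the best observed CTR
            # (unseen variants get an optimistic estimate so each is tried once).
            i = max(range(len(ad_ctrs)),
                    key=lambda j: clicks[j] / shown[j] if shown[j] else 1.0)
        shown[i] += 1
        if rng.random() < ad_ctrs[i]:  # simulate whether this impression clicks
            clicks[i] += 1
    return shown, clicks

# Three hypothetical ad variants with 1%, 2% and 4% true CTRs;
# spend quickly concentrates on the strongest one.
shown, clicks = epsilon_greedy_ad_test([0.01, 0.02, 0.04])
print(shown, clicks)
```

At campaign scale this same loop runs across thousands of creative variants simultaneously, which is what makes the "100,000 uniquely tweaked ads per day" figure operationally plausible.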

So, what's the big deal here? Shouldn't we just applaud the Brexit and Trump campaigns for a job well done? Well, yes and no. A global debate is raging around the combined use of such technologies: behavioral modeling based on social media insights and big data analytics, paired with programmatic placement. The debate is threefold: it concerns privacy; the lack, especially where machine learning is involved, of an ethical hand on the controls; and the way both concerns are heightened when the content vehicle for the advertising is fake news.

We saw amplified commentary on the problem in Austin this year, with CNN's Jake Tapper reiterating that, when merited, he was prepared to "express incredulity at the lies and falsehoods coming out of the White House." Equally, in a keynote hosted by Yasmin Green, head of research and development at tech incubator Jigsaw, we learned of fake news aficionado Jestin Coler, who made more than $100,000 just three days before the election with a fictitious story about the murder of an FBI agent investigating the Clintons. It was shared over half a million times. The backlash to fake news has also grown in UK marketing and beyond, with the Guardian's Hamish Nicklin highlighting how the pursuit of dirt-cheap ad inventory fuels it: "Fake news is being used as a key weapon to fight truth. And the digital advertising paradigm is helping to fund it, in fact, I'd go as far as to say that it rewards it."

He spoke against totally fictitious or extremist, baseless content that people could simply plug into the programmatic exchange and make serious money. "That's digital alchemy right there. What you have done is take something totally worthless and create gold out of it ... We are all complicit in this in some way," he added.

The solution here was partly alluded to at the Guardian's "Changing Media Summit" in London, where advertising group founder Johnny Hornby remarked: "Clients need to be willing to get themselves off the drug of cheap digital media and invest in proper brand protection. Pre-bid verification technology costs all of 3 pence per 1,000 impressions, accounting for about 2 percent of a brand's overall media spend."
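Hornby's two figures can be sanity-checked against each other. If verification at 3 pence per 1,000 impressions really amounts to about 2 percent of media spend, the implied blended media CPM follows by simple division (assuming, as a simplification, that both numbers describe the same inventory):

```python
# Back-of-envelope check of the quoted figures.
verification_cost_per_1000 = 0.03  # GBP: "3 pence per 1,000 impressions"
share_of_media_spend = 0.02        # "about 2 percent of overall media spend"

# If £0.03 per 1,000 impressions is 2% of spend, the media CPM implied is:
implied_media_cpm = verification_cost_per_1000 / share_of_media_spend
print(f"Implied media CPM: £{implied_media_cpm:.2f}")  # Implied media CPM: £1.50
```

An implied CPM of around £1.50 is consistent with the "dirt-cheap digital inventory" Hornby describes; on pricier, premium inventory, verification would be an even smaller fraction of spend.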

In conclusion, and reflecting on a wider perspective across SXSW 2017, we must refocus on the human use of technology. The powerful engines of behavioral modeling, micro-targeting, big data and programmatic placement require a strong ethical compass in their application; we need a "dead man's handle" on the train. Just this month, Sir Tim Berners-Lee, the father of the web no less, added his persuasive voice against fake news designed to be sensationalist or shocking, and therefore favored by algorithms looking for better ad-inventory bang per buck.

Dr. Simon Moores, a cybersecurity expert, captures the argument brilliantly for me when he says: "Behavioural modeling involving big-data analytics has arguably passed an inflection point ... Thanks to the growth of predictive analytics, algorithms and big data-mining businesses you can now look forward to a future that's made up of equal parts Orwell, Kafka and Huxley."

This needs serious industry debate now, on both sides of the Atlantic. As machine learning accelerates, the dynamics in play will never again be as slow as they are today. Communications sectors need to focus on the humane and ethical application of the monster we've created, to avoid a form of tech totalitarianism and a future that is closer to "1984" than 2017.

—Dan Machen is innovation director at UK integrated agency HeyHuman.