In the years leading up to the first Obama campaign, a new logic of accumulation emerged in the boardrooms of Silicon Valley: Tech companies began making money from their ability to map out and organise information. At the core of this model was an essential asymmetry in knowledge – the machines knew a lot about our behaviour, but we knew very little about theirs. In a trade-off for convenience, these companies offered people information services in exchange for more information – data. That data has become more and more valuable, with Facebook making on average $30 from each of its 170 million American users. At the same time, we have fallen for the idea that these services are ‘free’. In reality, we pay with our data, feeding a business model built on extracting human attention.
More data led to more profits, and so design patterns were implemented to encourage users to share more and more about themselves. Platforms started to mimic casinos, with innovations like the infinite scroll and addictive features aimed at the brain’s reward systems. Services such as Gmail began trawling through our correspondence in a way that would land a traditional postal worker in prison. Live geotracking, once reserved for convicts’ ankle bracelets, was added to our phones, and what would have been called wiretapping in years past became a standard feature of countless applications.
Soon we were sharing personal information without the slightest hesitation. This was encouraged, in part, by a new vocabulary. What were in effect privately owned surveillance networks became ‘communities’, the people these networks used for profit were ‘users’, and addictive design was promoted as ‘user experience’ or ‘engagement’. People’s identities began to be profiled from their ‘data exhaust’ or ‘digital breadcrumbs’. For thousands of years, dominant economic models had focused on the extraction of natural resources and the conversion of these raw materials into commodities. Cotton was spun into fabric. Iron ore was smelted into steel. Forests were cut into timber. But with the advent of the internet, it became possible to create commodities out of our lives – our behaviour, our attention, our identity. People were processed into data. We would serve as the raw material of this new data-industrial complex.
One of the first people to spot the political potential of this new reality was Steve Bannon, the relatively unknown editor of the right-wing website Breitbart News, which was founded to reframe American culture according to the nationalist vision of Andrew Breitbart. Bannon saw his mission as nothing short of cultural warfare, but when I first encountered him, Bannon knew that something was missing, that he didn’t have the right weapons. Whereas field generals focused on artillery power and air dominance, Bannon needed to gain cultural power and informational dominance – a data-powered arsenal suited to conquer hearts and minds in this new battlespace. The newly formed Cambridge Analytica became that arsenal. Refining techniques from military psychological operations (PSYOPS), Cambridge Analytica propelled Steve Bannon’s alt-right insurgency into its ascendancy. In this new war, the American voter became a target of confusion, manipulation and deception. Truth was replaced by alternative narratives and virtual realities.
Cambridge Analytica (CA) first piloted this new warfare in Africa and tropical islands around the world. The firm experimented with scaled online disinformation, fake news and mass profiling. It worked with Russian agents and employed hackers to break into opposition candidates’ email accounts. Soon enough, having perfected its methods far from the attention of western media, CA shifted from instigating tribal conflict in Africa to instigating tribal conflict in America. Seemingly out of nowhere, an uprising erupted in America with manic cries of MAGA! and Build the wall! Presidential debates suddenly shifted from policy positions to bizarre arguments about what was real news and what was fake news. America is now living in the aftermath of the first scaled deployment of a psychological weapon of mass destruction.
As one of the creators of Cambridge Analytica, I share responsibility for what happened, and I know that I have a profound obligation to right the wrongs of my past. Like so many people in technology, I stupidly fell for the hubristic allure of Facebook’s call to ‘move fast and break things’. I’ve never regretted something so much. I moved fast, I built things of immense power, and I never fully appreciated what I was breaking until it was too late.
As I made my way to the secure facility deep under the Capitol that day in the early summer of 2018, I felt numbed to what was happening around me. Republicans were already conducting opposition research on me. Facebook was using PR firms to smear its critics, and its lawyers had threatened to report me to the FBI for an unspecified cybercrime. The DOJ was now under the control of a Trump administration that was publicly ignoring long-held legal conventions. I had enraged so many interests that my lawyers were genuinely concerned the FBI might arrest me after I was finished. One of my lawyers told me the safest thing to do was stay in Europe.
I cannot, for security and legal reasons, quote directly from my testimony in Washington. But I can tell you that I walked into that room with two large binders, each containing several hundred pages of documents. The first binder contained emails, memos and documents showing the extent of Cambridge Analytica’s data-harvesting operation. This material demonstrated that the company had recruited hackers, hired personnel with known links to Russian intelligence, and engaged in bribery, extortion and disinformation campaigns in elections around the world. There were confidential legal memos from lawyers warning Steve Bannon about Cambridge Analytica’s violations of the Foreign Agents Registration Act, as well as a cache of documents describing how the firm exploited Facebook to access more than eighty-seven million private accounts and used that data in efforts to suppress the votes of African Americans.
The second binder was more sensitive. It contained hundreds of pages of emails, financial documents and transcripts of audio recordings and text messages that I had covertly procured in London earlier that year. These files had been sought by US intelligence and detailed the close relationships between the Russian embassy in London and both Trump associates and leading Brexit campaigners. These files showed that leading British alt-right figures met with the Russian embassy before and after they flew to meet the Trump campaign, and that at least three of them were receiving offers of preferential investment opportunities in Russian mining companies potentially worth millions. What became clear in these communications was how early the Russian government had identified the Anglo-American alt-right network, and that it may have groomed figures within it to become access agents to Donald Trump. It showed the connections among the major events of 2016: the rise of the alt-right, the surprise passage of Brexit, and the election of Trump.
Four hours went by. Five. I was deep into describing Facebook’s role in – and culpability for – what had happened.
Did the data used by Cambridge Analytica ever get into the hands of potential Russian agents? Yes.
Do you believe there was a nexus of Russian state-sponsored activity in London during the 2016 presidential election and Brexit campaigns? Yes.
Was there communication between Cambridge Analytica and WikiLeaks? Yes.
I finally saw glimmers of understanding coming into the committee members’ eyes. Facebook is no longer just a company, I told them. It’s a doorway into the minds of the American people, and Mark Zuckerberg left that door wide open for Cambridge Analytica, the Russians, and who knows how many others. Facebook is a monopoly, but its behaviour is more than a regulatory issue – it’s a threat to national security. The concentration of power that Facebook enjoys is a danger to American democracy.
Dancing a delicate ballet among multiple jurisdictions, intelligence agencies, legislative hearings and police authorities, I have given more than two hundred hours of sworn testimony and handed over at least ten thousand pages of documents. I found myself travelling around the world, from Washington to Brussels, to help leaders unpack not only Cambridge Analytica but also the threats social media poses to the integrity of our elections.
Yet, in my many hours of giving testimony and evidence, I came to realise that the police, the legislators, the regulators and the media were all having a difficult time figuring out what to do with this information. Because the crimes happened online, rather than in any physical location, the police could not agree on who had jurisdiction. Because the story involved software and algorithms, many people threw up their hands in confusion. Once, when one of the law enforcement agencies I was dealing with called me in for questioning, I had to explain a fundamental computer science concept to agents who were supposedly specialists in technology crime. I scribbled a diagram on a piece of paper, and they confiscated it. Technically, it was evidence. But they joked that they needed it as a crib sheet to understand what they were investigating. LOL, so funny, guys.
We are socialised to place trust in our institutions – our government, our police, our schools, our regulators. It’s as if we assume there’s some guy with a secret team of experts sitting in an office with a plan, and if that plan doesn’t work, don’t worry, he’s got a plan B and a plan C – someone in charge will take care of it. But in truth, that guy doesn’t exist. If we choose to wait, nobody will come.
This excerpt from Mindf*ck: Inside Cambridge Analytica’s Plot to Break the World by Christopher Wylie has been published with permission from Profile Books.