Cambridge Analytica, Dirty Tricks, and a Glimmer of Hope
An unexpected glimmer of hope emerged from Channel 4 News’s Cambridge Analytica exposés and the revelation that the company might have been involved in some old-fashioned political dirty tricks.
If the confession of their now-deposed CEO, Alexander Nix, is to be taken as anything other than the baseless boasting of a businessman trying to boost the notoriety of his company, then what we have is evidence that the magic of Cambridge Analytica was never quite so magical as we first supposed. After all: if their mining of data was still producing such spectacular results, why would the company need to move into what was known in the Nixon era as "ratfucking"? Why get involved in a line of work where exposure of wrongdoing would be so damaging and yet so easily proven?
The answer, one suspects, is found in the description of the company's business practices exposed by whistleblower Christopher Wylie to The Guardian last week. The success of any data-mining operation depends on having data to mine, and what made Cambridge Analytica's work so potent was access to an enormous data set compiled by Aleksandr Kogan, a Cambridge University psychologist who had gained permission from Facebook to extract data legitimately for academic use. It was in July 2014 that Kogan launched a personality-test app for which 270,000 Facebook users were (mostly) paid to answer questions. Unknown to those users, the app also exploited a permissive feature of Facebook's platform which allowed it to access the data of all their friends. In total, it let Kogan gather information on around 50 million users before Facebook withdrew the capability.
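To make the mechanics concrete, here is a minimal sketch, in Python, of how that kind of harvest worked under the since-retired v1.0 of Facebook's Graph API. The endpoints reflect the historical API, but the token is a placeholder and the code is an illustration of the general technique, not a reconstruction of Kogan's actual app:

```python
import requests

GRAPH = "https://graph.facebook.com/v1.0"
ACCESS_TOKEN = "USER_ACCESS_TOKEN"  # placeholder: granted when a user authorized the quiz app

def get_json(path, **params):
    """Fetch one page of Graph API results as parsed JSON."""
    params["access_token"] = ACCESS_TOKEN
    resp = requests.get(f"{GRAPH}/{path}", params=params)
    resp.raise_for_status()
    return resp.json()

# One authorization yielded the respondent's own profile...
me = get_json("me", fields="id,name,likes")

# ...and, under v1.0's friends_* permissions, data about every friend,
# none of whom had taken the quiz or consented themselves (first page only).
for friend in get_json("me/friends").get("data", []):
    likes = get_json(f"{friend['id']}/likes")
    print(friend["name"], len(likes.get("data", [])))
```

The crucial part is the loop: one respondent's consent fanned out to every one of their friends, which is how 270,000 quiz-takers could become roughly 50 million profiles.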
That timing is beginning to look key. It suggests that Cambridge Analytica's initial success was built upon a unique set of circumstances that it would ultimately fail to turn into a wider business model. The emerging facts of the story make one wonder how, without more recent data to work from (certainly, none that has been made public), Cambridge Analytica could produce targeted campaigns in other elections and in other parts of the globe. It is only speculation, but one might begin to wonder whether they ever did, and whether the real revelation of the Channel 4 News story was that the company was exploring other opportunities because of a crisis in its core business.
There is, in this, an important technical point to make. Most users access Facebook through a web browser, but that route is impractical for data-mining operations, where software engineers need access to vast amounts of well-formatted data. This is not as simple as downloading millions of HTML web pages and extracting the useful bits. Instead, data is usually made available through a website's API, or Application Programming Interface. All big websites operate this way, with custom software running the business behind the graphically impressive frontend with which you'll be more familiar. Amazon, eBay, Twitter, and, of course, Facebook each have their own API that allows third-party software to query their databases. The API usually provides a robust and secure point of access, and in some instances parts of it are not even publicly documented. The reputation of any web company rests partly on the security of its API, and, in this light, it's important to repeat that Facebook did not suffer a "breach" in the way we commonly understand that term. This was not a "hack". There was no unauthorized access that bypassed security.
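The difference between the two routes is easy to see in code. Here is a short Python sketch using GitHub's public profile pages and documented REST API as a convenient stand-in (not one of the sites named above):

```python
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# The scraping route: fetch the rendered profile page and dig the name out
# of markup that can change without notice whenever the site is redesigned.
html = requests.get("https://github.com/octocat").text
tag = BeautifulSoup(html, "html.parser").select_one("span.p-name")
print(tag.get_text(strip=True) if tag else "selector broke")

# The API route: the same information from the documented REST endpoint,
# arriving as well-formatted JSON that software can consume in bulk.
user = requests.get("https://api.github.com/users/octocat").json()
print(user["name"], user["created_at"], user["followers"])
```

The scraper breaks the moment the page layout changes; the API call returns structured data by design, which is precisely why data miners work through APIs.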
That should give us a little solace: we are not entirely at the mercy of the psychological profilers. Tech firms can and should protect our data. Sandy Parakilas, the former platform operations manager at Facebook and the latest insider to reveal the firm's secrets, describes his concern (again to The Guardian, which has led the way with this story) that "all of the data that left Facebook servers to developers could not be monitored by Facebook, so we had no idea what developers were doing with the data". In other words: it was a failure of company policy, a willful failure of imagination, rather than a technical flaw. This most egregious case of data mining did not happen because Facebook's data was unsecured against unauthorized access; it happened because the company chose not to imagine what its authorized users might do with that data.
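Nor is the monitoring Parakilas wanted technically exotic. The sketch below is hypothetical, not Facebook's architecture, but it shows the kind of egress audit log he is describing: every handover of user data to a third-party app is recorded at the moment it leaves the platform.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("egress")

def log_egress(app_id, user_id, fields):
    """Record every field of user data handed to a third-party app.

    A ledger like this cannot stop misuse, but it makes questions such as
    "which apps ever received this user's data?" answerable after the fact.
    """
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "app": app_id,
        "user": user_id,
        "fields": sorted(fields),
    }))

def serve_profile(app_id, user_id, requested_fields, database):
    """Hypothetical API handler: log the grant before releasing the data."""
    log_egress(app_id, user_id, requested_fields)
    record = database[user_id]
    return {f: record[f] for f in requested_fields if f in record}

# Example: a quiz app pulling two fields for one user.
db = {"u42": {"name": "Alice", "likes": ["hiking"], "email": "a@example.com"}}
print(serve_profile("quiz-app-1", "u42", ["name", "likes"], db))
```

A platform that keeps such a ledger can at least answer, years later, which developers took what; a platform that doesn't is left, as Parakilas says, with "no idea".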
This might not mitigate the dangers of Big Data, but it certainly makes the point that we need laws that ensure companies protect our personal details from misuse by third parties. What's more, there is even a case that our data should also be protected from first-party misuse, and that personal data should not be used to profile users at all. Google settled a class action suit back in 2016 that obliged it to stop pre-scanning users' emails for keywords with which it would then microtarget advertising, and it abandoned the practice entirely in 2017.
Facebook might now be facing a crisis of its own, but that crisis does not represent a fundamental problem with the platform so much as an evolution in the ethics of bulk data collection. We are confronting these questions for the first time in our history simply because computers have only now reached the point where they can perform this kind of analysis on huge data sets.
As for Cambridge Analytica: they are under the closest scrutiny and, as a company that enjoyed living in the shadows, it's hard to imagine them continuing in their current form. From what we now hear, however, perhaps they really stopped existing the moment they lost access to Facebook's data. They would hardly be missed, but they might yet prove something of a blessing if their mistakes and misdeeds awaken us to a very real and growing danger.