online desk | 27 March 2018 | 11:28 pm
A British political consultancy firm, Cambridge Analytica, which harvested data through an online personality test, has allegedly helped the Trump presidential campaign in the US and the UK's campaign to leave the EU. Amid strong criticism of social media giant Facebook, CEO Mark Zuckerberg finally apologised for the company's role in the problem.
The truth is that in 2014, 270,000 users downloaded an app called This Is Your Digital Life and took an online survey. This meant their Facebook information could be accessed, but it also gave the app access to their friends' data. In total, more than 50 million profiles were accessed. The information was later sold on to Cambridge Analytica for political purposes.
Against this backdrop, from micro and macro perspectives, one can raise two fundamental questions. First, has this method actually managed to sway elections? And second, is privacy possible in this age of social media?
The mining of data is certainly a powerful contributing factor in elections, and in retrospect it seems obvious. The data was used to deliver targeted adverts based on a psychometric profile, built to determine how to influence a person to buy something or to make a particular decision. It is not much of a step from there to influencing someone to vote in a certain way, or to abstain from voting altogether. This second aspect is important, and there is evidence to suggest that abstention was the focus. After all, some people think, 'I'm a liberal, there's no way you can get me to vote for Trump'.
Undeniably, one could be influenced to abstain from voting in one's specific state, and that could be a very powerful tool for manipulating the outcome of an election. Moreover, people have lost their ability to fact-check and to separate real news from fake. Facebook has said it would take measures to deal with this problem on its platform, but has actually done very little. It has done very little to curb the immense problem of fake profiles and troll bots, to give just one example. It is difficult to tell whether one is really arguing with a real human being online. It is difficult to tell whether a human is posting links to news on a Facebook page, or whether it is a bot. And we must also ask: is the news itself fake? For an average Facebook user, distinguishing between fact and fiction becomes a nightmare.
The question of user data protection is a question of the design of the service. Facebook can design technologies that default to respecting users' rights and keeping user data safe, with general technical safeguards built in. If a platform like Facebook exposes an API that shares not only the profile a user grants access to but also those of their friends, that is the result of a deliberate design choice to allow such sharing. Facebook needs both a different technical design for its services and, simultaneously, a new kind of accountability process as a company. From the user's point of view, we also need a law that grants them access to a system where they can get help if they suffer any kind of wrongdoing.
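The privacy-by-default design argued for here can be sketched in a few lines of code. This is a hypothetical illustration, not Facebook's actual API: the `User` class and `accessible_profiles` function are invented names, and the point is simply that each person's consent is checked individually, so one user's opt-in can never expose their friends' data.

```python
# Hypothetical sketch of a privacy-by-default permission check.
# Names (User, accessible_profiles) are illustrative assumptions,
# not part of any real Facebook API.

from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    # Privacy by default: nothing is shared unless the user opts in.
    shares_own_data: bool = False
    friends: list = field(default_factory=list)

def accessible_profiles(app_user: User) -> list:
    """Return only the profiles whose owners explicitly consented.

    Each friend's consent is checked individually; one user taking a
    quiz never exposes friends who did not opt in themselves.
    """
    profiles = []
    if app_user.shares_own_data:
        profiles.append(app_user.name)
    for friend in app_user.friends:
        if friend.shares_own_data:  # the friend must consent, not the quiz-taker
            profiles.append(friend.name)
    return profiles
```

Under this design, a user who takes a quiz shares at most their own profile; a friend's data only ever flows if that friend has opted in separately.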
One last lesson to be learned is to avoid the IQ and personality tests one sees constantly on the service. It should undeniably be illegal for anyone to share their friends' information with a third party. A line should be drawn, and if this situation highlights anything, it highlights the glaring lack of legislation on this matter. It is not right to put all the blame on the user, given that one cannot read all the reams and reams of small print. It is also difficult to understand the ramifications of the sharing process. Even if one person understands it and does not mind sharing their own data, that does not give them the right to share the data of their friends. So this must be made illegal. One should still be allowed to share one's own data, but one should read the fine print carefully.