
Snowden to Cambridge Analytica – making the case for the social value of privacy

Constitutionally inculcated rights and morality are slowly being undone “by the use of automated processes to assess risk and allocate opportunity”.

Anita Gurumurthy Amrita Vasudevan
4 April 2018


Image: Mark Zuckerberg in the Shadow Ink app, 2016. Wikicommons/Annika Laas. Some rights reserved.

After days of prodding by the media, Mark Zuckerberg has offered a mea culpa, apologizing for the “breach of trust between Facebook and the people who share their data with us and expect us to protect it”.

The news that Facebook shared user data with a number of organizations including Cambridge Analytica seems to reflect the paradox of surveillance society – that our personal data is never safe, but share it we must.

So once again, in a Snowdenish moment, we are hit by the revelation that Cambridge Analytica conducted behavioural modelling and psychographic profiling (creating personality profiles by gauging motives, interests, attitudes, beliefs, values etc.) based on data it collected, to target – allegedly with success – American voters prior to the 2016 presidential election.

Meanwhile, in countries like India, political parties have accused each other of hiring Cambridge Analytica for their own election campaigns.

The soothsayer

Facebook collects all kinds of social data about its users – their relationship status, place of work, colleagues, the last time they visited their parents, the songs they like listening to – as well as other kinds of information such as device data, websites visited from the platform, etc.

This may be information that is shared by the user or what their friends may share about them on the platform. That aside, let us not forget that Facebook has bought WhatsApp and Instagram and can tap into data from those platforms, apart from the data it buys from data brokers.

Data that is collected is used to draw up the profile of users – a detailed picture of the persona that emerges by piecing together known activity and aptitude and generating predictions about possible proclivities and predispositions. That big data can be put to use in ways that reinforce ‘social and cultural segregation and exclusion’ is fairly well accepted now.

The mechanics of big data thus recreate the sum total of user traits and attributes – without necessarily verifying them per user. What follows then is the clustering of users into hyper segments with similar attributes for micro-targeting ads.

You may merely be ‘liking’ an article on the last male white rhino, but Facebook will predict with a fair amount of accuracy your political affiliation and sexual orientation, using algorithmic modelling to nudge you to buy the thing you are most likely to buy. Hyper-segmentation based on social media profiling can also be used to create a consumer base for political messaging, as has been suggested in the case of Cambridge Analytica.

The target

Many digital corporations, including Uber, Twitter and Microsoft, sell their data to third parties who build apps and provide services on top of it. With machine learning, the targeting of individuals assumes new dimensions; it becomes possible to do nano-targeting, that is, to zoom in precisely on one individual.

A fintech startup in India rejected an applicant because it uncovered that she had actually filed for the loan on behalf of her live-in partner, who was unemployed. The boyfriend’s loan request had been rejected earlier. The start-up’s machine learning algorithm had used GPS and social media data – both of which the duo had granted permissions for while downloading the app – to make the connection that they were in a relationship.

This slippery slope from micro-targeting to the algorithmic allocation of opportunities and privileges poses serious concerns. A ProPublica investigation from 2016 collected more than 52,000 unique attributes that Facebook used to classify users into micro-targetable groups. ProPublica then went on to buy Facebook advertisements for housing, demonstrating how it was possible to exclude African-Americans, Hispanics and Asian-Americans. As Frank Pasquale notes, constitutionally inculcated rights and morality are slowly being undone “by the use of automated processes to assess risk and allocate opportunity”.

The public sphere

This brings us to the question of what the social role of digital intelligence means for the future of democracy. Elections play a vital role in a robust democracy such as India. We seek to safeguard their free and fair nature through regulations that impose restrictions on exit polling or call out parties for unduly influencing voters through the distribution of freebies.

Wouldn’t, then, a nudging of voters through intimate knowledge of their behaviour be a threat to this socio-political hygiene we seek to maintain? Can we allow the replacement of the will of the people by a market democracy in which the masses can be gamed? In the wake of the scandal, the Election Commission of India has said that it will come up with recommendations on how to prevent such unlawful activities.

However, beyond electoral fairness, the rapidly unfolding role of algorithms has severe repercussions for the sanctity of the public sphere. When people know that their online behaviour is monitored, they carefully moderate how they interact online – a phenomenon referred to as social cooling.

The collusion

The Cambridge Analytica episode bears a close resemblance to the Snowden disclosure of the unholy nexus of the state, private corporations, and uninhibited surveillance. India has already succeeded in building a ‘cradle to grave’ panspectron by seeding citizens’ unique biometric identifiers across databases. The Aadhaar unique ID project has allowed for an informatization of life whereby “...the human body is reduced to a set of numbers that can be stored, retrieved and reconstituted across terminals, screens and interfaces”.

With this biometric, the body can never dissociate from its data, and may be recalled, whenever convenient, sans ‘the individual’. To add psychographic data akin to a ‘behavioural’ biometric to this mix is to endow the state with a ‘God’s eye view’ of society, one that it is bound to abuse to determine the human condition.

For instance, China’s profoundly disturbing Sesame Credit uses citizen data, including data from everyday transactions, biometric data, etc., to dole out instant karma.

From the other end, corporations that already collect behavioural data are keen on accessing Aadhaar data, for this will allow them to trade data around a unique data point and attain, much like the state, a 360-degree view of their customers.

Facebook has faced criticism in the past for experimenting with users’ emotions, unethically manipulating information to influence their moods. The plausibility of nano-surveillance raises fundamental philosophical questions about society and human agency, calling attention to the urgent task of reining in the data capitalists.

The norm

While digital corporations claim to audit how third parties may use the data they have shared, monitoring is lax. India, for instance, does have rules on data sharing; however, these pertain to a predefined list of data types. Traditionally, data protection legislation has focused on ‘personally identifiable information’ (PII), but with technological advances – à la big data analysis and Artificial Intelligence – what is or is not PII is contested. The law therefore needs to be re-imagined to suit contemporary techniques of data analysis.

Beyond the fact of unencumbered data sharing, that Facebook was able to collect such vast amounts and varied kinds of data is itself unsettling. To prevent the frightening prospect of a future society reduced to an aggregate of manipulated data points, it may well be necessary to determine that certain kinds of data will not be collected and certain types of data processing will not be done.

Restrictions on collection and use can be sector-specific, based on well-debated social norms, and constitutionally driven. For example, the Delhi High Court recently held, on the grounds of the right to health, that excluding genetic disorders from insurance policies is illegal.

It is time we moved away from individual-centred notions of privacy in which the ‘user’ is constantly asked to barter the right for entitlements, credits, and convenience. Theories about how individuals must be empowered to monetise the plentiful data they generate and get easier credit, better healthcare, better skills and welfare benefits are a recipe for a disempowered society left to the whims of a neo-liberal market democracy. The social value of privacy needs to be spotlighted, for it urges us to look not only at the individual’s rights over data, but also at the social benefits we derive from their recognition.

In so far as societies are the products of behavioural modelling, Zuckerberg’s apology does not really count.

An earlier version of this article first appeared in Firstpost on March 23, 2018.
