Mental health and artificial intelligence: losing your voice https://www.opendemocracy.net/digitaliberties/dan-mcquillan/mental-health-and-artificial-intelligence-losing-your-voice-poem <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>While we still can, let us ask, "Will AI exacerbate discrimination?" as the productive forces of mental health are restructured within a techno-psychiatric complex. Poem.</p> </div> </div> </div> <p><a href="https://opendemocracy.net/hri"><img width="460px" src="//cdn.opendemocracy.net/files/finalbannerhri2_0.jpg" alt="HRI" /></a></p> <p style="text-align: center;" class="FirstParagraph"><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/40847777491_ec4d80afdb_z.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/40847777491_ec4d80afdb_z.jpg" alt="lead " title="" width="460" height="345" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style=""/></a> <span class='image_meta'><span class='image_title'>Sketch, 2018. Flickr/Whinger. Some rights reserved.</span></span></span></p><p class="FirstParagraph">'You sound a bit depressed', we might say to a friend,<br /> Not only because of what they say but how they say it.<br /> Perhaps their speech is duller than usual, tailing off between words,<br /> Lacking their usual lively intonation.</p> <p>There are many ways to boil a voice down into data points;<br /> Low-level spectral features, computed from <a href="https://www.sri.com/work/publications/speech-based-assessment-ptsd-military-population-using-diverse-feature-classes">snippets as short as twenty milliseconds</a><br /> That quantify the dynamism of amplitude, frequency and energy,<br /> And those longer range syllabic aspects that human ears are tuned to,<br /> Such as pitch and intensity.</p> <p>A voice distilled into data<br /> Becomes the training material for machine learning algorithms,<br /> And there are many efforts being made to teach machines<br /> To deduce our mental states from voice analysis.</p> <p>The bet is that the voice is a source of biomarkers,<br /> Distinctive data features that <a href="https://www.scientificamerican.com/article/the-sound-of-your-voice-may-diagnose-disease/">correlate to health conditions</a>,<br /> Especially the emergence of mental health problems<br /> Such as depression, PTSD, Alzheimer's and others.</p> <p>And of course there are the words themselves;<br /> We've already trained machines to recognise them.<br /> Thanks to the deep neural network method called <a href="https://medium.com/mlreview/understanding-lstm-and-its-diagrams-37e2f46f1714">Long Short-Term Memory (LSTM)</a><br /> We can command our digital assistants to buy something on Amazon.</p> <p>Rules-based modelling never captured the complexity of speech,<br /> But give neural networks enough examples,<br /> They will learn to parrot and predict any complex pattern,<br /> And voice data is plentiful.</p> <p>So perhaps machines can be trained to detect symptoms<br /> Of <a href="https://www.technologyreview.com/s/603200/voice-analysis-tech-could-diagnose-disease/">different kinds of stress or distress</a>,<br /> And this can be looped into an appropriate intervention<br /> To prevent things from getting worse.</p> <p>As data, the features of speech become tables of numbers;<br /> Each chunk of sound becomes a row of digits,<br /> Perhaps sixteen numbers from a <a href="https://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/">Fourier Transform</a><br /> And others for types of intensity and rhythmicity.</p> <p>For machine learning to be able to learn<br /> Each row must end in a classification; a number that tags a known diagnosis.<br /> Presented with enough labelled examples it will produce a <a href="https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471">probabilistic model</a><br /> That predicts the likelihood of a future speaker developing the same condition.</p>
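<p>To make that pipeline concrete, here is a minimal sketch in Python, assuming numpy and scikit-learn; the snippets, labels and sixteen-band split are invented placeholders for illustration, not a clinical method.</p> <pre><code>
import numpy as np
from sklearn.linear_model import LogisticRegression

def spectral_row(snippet, n_bands=16):
    # One chunk of sound becomes a row of digits:
    # Fourier magnitudes grouped into n_bands frequency bands.
    spectrum = np.abs(np.fft.rfft(snippet))
    bands = np.array_split(spectrum, n_bands)
    return np.array([band.mean() for band in bands])

# Hypothetical corpus: 200 snippets of 320 samples each
# (20 ms at a 16 kHz sampling rate); each row ends in a 0/1
# diagnostic label. Random numbers stand in for real recordings.
rng = np.random.default_rng(0)
snippets = rng.standard_normal((200, 320))
labels = rng.integers(0, 2, size=200)

X = np.vstack([spectral_row(s) for s in snippets])
model = LogisticRegression(max_iter=1000).fit(X, labels)

# For a new speaker the model emits a probability, not a certainty.
new_row = spectral_row(rng.standard_normal(320)).reshape(1, -1)
print(model.predict_proba(new_row))
</code></pre>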
<p>It's very clever to model the hair cells in the human ear <a href="https://www.sri.com/work/publications/speech-based-assessment-ptsd-military-population-using-diverse-feature-classes">as forced damped oscillators</a><br /> And to apply AI algorithms that learn models through <a href="https://www.youtube.com/watch?v=Ilg3gGewQ5U">backpropagation</a>,<br /> But we should ask why we want machines to listen out for signs of distress;<br /> Why go to all this trouble when we could do the listening ourselves?</p> <p>One reason is the rise in mental health problems<br /> At the same time as <a href="https://www.eurekalert.org/pub_releases/2017-04/nlmc-sp041117.php">available services are contracting</a>.<br /> Bringing professional and patient together costs time and money,<br /> But we can acquire and analyse samples of speech via our network infrastructures.</p> <p>Machine listening offers the prospect of early intervention<br /> Through a pervasive presence beyond anything that psychiatry could have previously imagined.<br /> Machine learning's skill at pattern finding means it can be used for prediction;<br /> As Thomas Insel says, <a href="https://www.nytimes.com/2018/02/25/technology/smartphones-mental-health.html">"We are building digital smoke alarms for people with mental illness"</a>.</p> <h2>Disruption</h2> <p class="FirstParagraph">Insel is a psychiatrist, neuroscientist and former Director of the US National Institute of Mental Health,<br /> Where he prioritised the search for a preemptive approach to psychosis<br /> By <a href="https://www.technologyreview.com/s/541446/why-americas-top-mental-health-researcher-joined-alphabet/">"developing algorithms to analyze speech as an early window into the disorganization of thought"</a>.<br /> He jumped ship to Google to pursue a big data approach to mental health, then founded a startup called Mindstrong<br /> Which uses smartphone data to 'transform brain health' and <a href="https://mindstronghealth.com/about/">'detect deterioration early'</a>.</p> <p>The number of startups looking for traction on mental states,<br /> Through the machine analysis of voice,<br /> Suggests a restructuring of the productive forces of mental health,<br /> Such that illness will be constructed by a techno-psychiatric complex.</p> <p>HealthRhythms, for example, was <a href="https://www.smh.com.au/technology/tech-support-how-our-phones-could-save-our-lives-by-detecting-mood-shifts-20171106-gzfrg5.html">founded by psychiatrist David Kupfer</a>,<br /> Who chaired the task force that produced DSM-5, the so-called 'bible of psychiatry',<br /> Which defines mental disorders and the diagnostic symptoms for them.<br /> The HealthRhythms app uses voice data to calculate a <a 
href="https://www.healthrhythms.com/">"log of sociability"</a> to spot depression and anxiety.</p> <p>Sonde Health screens acoustic changes in the voice for mental health conditions<br /> With a focus on post-natal depression and dementia;<br /> <a href="https://www.technologyreview.com/s/603200/voice-analysis-tech-could-diagnose-disease/">"We're trying to make this ubiquitous and universal"</a> says the CEO.<br /> Their app is not just for smartphones but for any voice-based technology.</p> <p>Meanwhile Sharecare scans your calls and <a href="https://www.sharecare.com/health/stress-reduction/how-sharecare-app-voice-work">reports if you seemed anxious</a>;<br /> Founder Jeff Arnold describes it as <a href="https://www.nytimes.com/2018/02/25/technology/smartphones-mental-health.html">'an emotional selfie'</a>.<br /> Like Sonde Health, the company works with health insurers<br /> While HealthRhythms' clients include pharmaceutical companies.</p> <p>It's hardly a surprise that Silicon Valley sees mental health as a market ripe for Uber-like disruption;<br /> Demand is rising, orthodox services are being cut, but data is more plentiful than it has ever been.<br /> There's a mental health crisis that costs economies millions<br /> So it must be time to <a href="https://spectrum.ieee.org/at-work/innovation/facebook-philosophy-move-fast-and-break-things">'move fast and break things'</a>.</p> <p>But as Simondon and others have <a href="https://www.upress.umn.edu/book-division/books/on-the-mode-of-existence-of-technical-objects">tried to point out</a>,<br /> The individuation of subjects, including ourselves, involves a certain technicity,<br /> Stabilising a new ensemble of AI and mental health<br /> Will change what it is to be considered well or unwell.</p> <h2>Samaritans Radar</h2> <p class="FirstParagraph">There's little apparent concern among the startup-funder axis<br /> That all this listening might silence voices.<br /> Their enthusiasm is not haunted by the story of the Samaritans Radar<br /> When an organisation which should have known better got carried away by enthusiasm for tech surveillance.</p> <p>This was a Twitter app developed in 2014 <a href="https://www.theguardian.com/technology/2014/oct/29/samaritans-radar-twitter-suicide-risk">by the Samaritans</a>,<br /> The UK organisation which runs a 24 hour helpline for anyone feeling suicidal.<br /> You signed up for the app and it promised to send you email alerts<br /> Whenever someone you follow on Twitter appeared to be distressed.<br /> If any of their tweets matched a list of key phrases<br /> It invited you to get in touch with them.</p> <p>In engineering terms, this is light years behind the sophistication of Deep Learning,<br /> But it's a salutory tale about unintended impacts.<br /> Thanks to the <a href="https://psychcentral.com/blog/the-uproar-over-the-new-samaritans-radar-twitter-app/">inadequate involvement of service users</a> in its production,<br /> It ignored the fact that the wrong sort of well-meaning intervention at the wrong time might actually make things worse,<br /> Or that malicious users could use the app to target and <a href="https://www.adrianshort.org/posts/2014/samaritans-radar/">troll vulnerable people</a>.</p> <p>Never mind the consequences of false positives<br /> When the app misidentified someone as distressed,<br /> Or the <a href="https://jonmendel.wordpress.com/2014/11/01/samaritans-radar-questions-from-a-research-ethics-point-of-view-post-1-of-2/">concept of consent</a>,<br /> Given that the 
<p>In engineering terms, this is light years behind the sophistication of Deep Learning,<br /> But it's a salutary tale about unintended impacts.<br /> Thanks to the <a href="https://psychcentral.com/blog/the-uproar-over-the-new-samaritans-radar-twitter-app/">inadequate involvement of service users</a> in its production,<br /> It ignored the fact that the wrong sort of well-meaning intervention at the wrong time might actually make things worse,<br /> Or that malicious users could use the app to target and <a href="https://www.adrianshort.org/posts/2014/samaritans-radar/">troll vulnerable people</a>.</p> <p>Never mind the consequences of false positives<br /> When the app misidentified someone as distressed,<br /> Or the <a href="https://jonmendel.wordpress.com/2014/11/01/samaritans-radar-questions-from-a-research-ethics-point-of-view-post-1-of-2/">concept of consent</a>,<br /> Given that the people being assessed were not even made aware that this was happening;<br /> All riding roughshod over the basic ethical principle of 'do no harm'.</p> <p>Although Twitter is a nominally public space,<br /> People with mental health issues had been able to hold supportive mutual conversations<br /> With a reasonable expectation that this wouldn't be put in a spotlight,<br /> Allowing them to <a href="https://paulbernal.wordpress.com/2014/11/01/samaritans-radar-misunderstanding-privacy-and-publicness/">reach out to others</a> who might be experiencing similar things.</p> <p>One consequence of the Samaritans Radar was that many people with mental health issues,<br /> Who had previously found Twitter a source of mutual support,<br /> Declared their intention to withdraw<br /> Or simply went silent.</p> <p>As with the sorry tale of the Samaritans Radar,<br /> Without the voices of mental health users and survivors<br /> The hubris that goes with AI has the potential to override the Hippocratic oath.</p> <h3>Fairness and Harm</h3> <p class="FirstParagraph">The ubiquitous application of machine learning's predictive power<br /> In areas with real world consequences, such as <a href="http://journals.sagepub.com/doi/abs/10.1177/0003122417725865">policing and the judicial system</a>,<br /> Is stirring an awareness that its oracular insights<br /> Are actually constrained by complexities that are hard to escape.</p> <p>The simplest of which is data discrimination;<br /> A programme that only knows the data it is fed,<br /> And which is only fed data containing a racist bias,<br /> Will make <a href="http://science.sciencemag.org/content/356/6334/183.full">racist predictions</a>.</p> <p>This should already be a red flag for our mental health listening machines.<br /> Diagnoses of mental health are already <a href="https://www.theguardian.com/society/2012/apr/17/bme-mental-health-patients-marginalised">skewed with respect to race</a>;<br /> A high proportion of people from black and ethnic minority backgrounds get diagnosed,<br /> And the questions about why are still very much open and contested.</p> <p>But surely, proponents will say, one advantage of automation in general<br /> Is to encode fairness and bypass the fickleness of human bias;<br /> To apply empirical and statistical knowledge directly<br /> And cut through the subjective distortions of face-to-face prejudice.</p> <p>Certainly, as the general dangers of reproducing racism and sexism have become clear,<br /> There have been conscientious efforts from engineers in one corner of machine learning<br /> To automate ways to <a href="http://arxiv.org/abs/1412.3756">de-bias datasets</a>.</p> <p>But here's the rub;<br /> Even when you know there's the potential for discrimination<br /> It's mathematically impossible to produce <a href="http://arxiv.org/abs/1703.00056">all-round fairness</a>.</p> <p>If you're designing a <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">parole algorithm</a> to predict whether someone will reoffend,<br /> You can design it so that the accuracy for high risk offenders is the same for white and black.<br /> But if the overall base rates are different<br /> There will be more false positives of black people, which can be considered a harm,<br /> Because more black people who would not go on to reoffend will <a href="http://arxiv.org/abs/1609.05807">be refused bail</a> than white people.</p> <p>Machine learning's probabilistic predictions are the result of a mathematical fit,<br /> The parameters of which are selected to optimise on specific metrics;<br /> There are many mathematical ways to define fairness (<a href="https://www.youtube.com/watch?v=jIXIuYdnyyk">perhaps twenty-one of them</a>)<br /> And you can't satisfy them all at the same time.</p>
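<p>The arithmetic behind that trade-off fits in a few lines of Python; the base rates, precision and recall below are invented for illustration, but the pattern they show is the general one.</p> <pre><code>
# Hold the tool's accuracy fixed across two groups (same precision
# and recall) and let only the base rates differ; the false positive
# rate, the share of harmless people wrongly flagged, cannot then be equal.
def false_positive_rate(n, base_rate, precision=0.75, recall=0.6):
    actual_pos = n * base_rate            # people who would reoffend
    true_pos = recall * actual_pos        # correctly flagged
    flagged = true_pos / precision        # everyone flagged as high risk
    false_pos = flagged - true_pos        # flagged, but would not reoffend
    return false_pos / (n - actual_pos)

print(false_positive_rate(1000, base_rate=0.5))  # 0.2  -> 20% wrongly flagged
print(false_positive_rate(1000, base_rate=0.2))  # 0.05 ->  5% wrongly flagged
</code></pre>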
<p>Proponents might argue that with machinic reasoning,<br /> We should be able to disentangle the reasons for various predictions,<br /> So we can make policy choices<br /> About the various trade-offs.</p> <p>But there's a problem with artificial neural networks,<br /> Which is that their <a href="https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/">reasoning is opaque</a>,<br /> Obscured by the multiplicity of connections across their layers,<br /> Where the weightings are derived from massively parallel calculations.</p> <p>If we apply this deep learning to reveal what lies behind voice samples,<br /> Taking different tremors as proxies for the contents of consciousness,<br /> The algorithm will be tongue-tied<br /> If asked to explain its diagnosis.</p> <p>And we should ask who these methods will be most applied to,<br /> Since to apply machinic methods we need data.<br /> Data visibility is not evenly distributed across society;<br /> Institutions will have much more data about you if you are part of the welfare system<br /> Than if you come from a comfortable middle-class family.</p> <p>What's apparent from the field of child protection,<br /> Where algorithms are also seen as promising objectivity and pervasive preemption,<br /> Is that the weight of harms from <a href="http://www.reimaginingsocialwork.nz/2015/06/predictive-risk-modelling-on-rights-data-and-politics/">unsubstantiated interventions</a><br /> Will fall disproportionately on the already disadvantaged,<br /> With the net effect of <a href="https://www.wired.com/story/excerpt-from-automating-inequality/">'automating inequality'</a>.</p> <p>If only we could rely on institutions to take a restrained and person-centred approach.<br /> But certainly, where the potential for financial economies is involved,<br /> The history of voice analysis is not promising.<br /> Local authorities in the UK were still applying Voice Stress Analysis to detect housing benefit cheats<br /> Years after solid scientific evidence showed that its risk predictions were <a href="https://www.theguardian.com/society/2014/mar/10/councils-use-lie-detector-tests-benefits-fraudsters">'no better than horoscopes'</a>.</p> <p>Machine learning is a leap in sophistication from such crude measures,<br /> But as we've seen it also brings new complexities,<br /> As well as an insatiable dependency on more and more data.</p> <h3>Listening Machines</h3> <p class="FirstParagraph">Getting mental health voice analysis off the ground faces the basic challenge of data;<br /> Most algorithms only perform well when there's a lot of it to train on.<br /> They need voice data labelled as being from people who are unwell and those who are not,<br /> So that the algorithm can learn the patterns that distinguish them.</p> <p>The uncanny success of Facebook's facial recognition algorithms<br /> Came from having huge numbers of labelled faces at hand,<br /> Faces that we, the users, had kindly labelled for them<br /> As belonging to us, or by tagging our friends,<br /> Without realising we were also training a machine;<br /> "if the product is free, you are the training data".</p> <p>One approach to voice analysis is the kind of clever surveillance trick<br /> Used by a paper 
investigating 'The Language of Social Support in Social Media <a href="https://www.microsoft.com/en-us/research/publication/language-social-support-social-media-effect-suicidal-ideation-risk/">And its Effect on Suicidal Ideation Risk'</a>,<br /> Where they collected comments from Reddit users in mental health subreddits like<br /> r/depression, r/mentalhealth, r/bipolarreddit, r/ptsd, r/psychoticreddit,<br /> And tracked how many could be identified as subsequently posting in<br /> A prominent <a href="https://www.bbc.co.uk/programmes/b06mvx4j">suicide support community</a> on Reddit called r/SuicideWatch.</p> <p>Whether or not the training demands of voice algorithms<br /> Are solved by the large scale collection of passive data,<br /> The strategies of the Silicon Valley startups make it clear<br /> That the application of these apps will have to be pervasive,<br /> To fulfill the hopes for scaling and early identification.</p> <p>Society is already struggling to grapple<br /> With the social effects of pervasive facial recognition,<br /> Whether mainstreamed in China's system of social credit,<br /> Or <a href="https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazon-teams-government-deploy-dangerous-new">marketed to US police forces by Amazon</a>,<br /> Where it has at least led to some <a href="https://gizmodo.com/amazon-workers-demand-jeff-bezos-cancel-face-recognitio-1827037509">resistance from employees themselves</a>.</p> <p>The democratic discourse around voice analysis seems relatively hushed,<br /> And yet we are increasingly embedded in a listening environment,<br /> With Siri and Alexa and Google Assistant and Microsoft's Cortana<br /> And Hello Barbie and <a href="https://www.bbc.com/news/world-europe-39002142">My Friend Cayla</a> and our smart car,<br /> And apps and games on our smartphones that request microphone access.</p> <p>Where might our voices be analysed for signs of stress or depression<br /> In a way that can be glossed as legitimate under the <a href="https://theconversation.com/gdpr-isnt-enough-to-protect-us-in-an-age-of-smart-algorithms-97389">General Data Protection Regulation</a>;<br /> On our work phone? our home assistant? while driving? 
when calling a helpline?</p> <p>When will using an app like HealthRhythms, which 'wakes up when an audio stream is detected',<br /> Become compulsory for people receiving any form of psychological care?<br /> Let's not forget that in the UK we already have <a href="http://www.mentalhealthlaw.co.uk/Community_Treatment_Order">Community Treatment Orders for mental health</a>.</p> <p>Surveillance is the inexorable logic of the data-diagnostic axis,<br /> Merging with the beneficent idea of Public Health Surveillance,<br /> With its agenda of epidemiology and health &amp; safety,<br /> But never quite escaping the long history of sponsorship of speech recognition research<br /> By the Defense Advanced Research Projects Agency (<a href="https://www.darpa.mil/">DARPA</a>).</p> <p>A history that Apple doesn't often speak of,<br /> That it <a href="https://www.sri.com/work/timeline-innovation/timeline.php?timeline=computing-digital&amp;_escaped_fragment_=%26innovation=siri">acquired Siri from SRI International</a>,<br /> Who'd developed it through a massive AI contract with DARPA.</p> <p>As the Samaritans example made clear,<br /> We should pause before embedding ideas like 'targeting' in social care;<br /> Targeting people for preemptive intervention is fraught with challenges,<br /> And foregrounds the core questions of consent and 'do no harm'.</p> <p>Before we imagine that "instead of waiting for traditional signs of dementia and getting tested by the doctor<br /> The smart speakers in our homes could be monitoring changes in our speech as we ask for the news, weather and sports scores<br /> And <a href="https://www.futurehealthindex.com/2018/04/03/five-ways-ai-will-make-health-transformation-truly-global/">detecting the disease far earlier than is possible today"</a>,<br /> We need to know how to defend against the creation of a therapeutic Stasi.</p> <h3>Epistemic Injustice</h3> <p class="FirstParagraph">It might seem far-fetched to say that snatches of chat with <a href="https://developer.amazon.com/alexa">Alexa</a><br /> Might be considered as significant as a screening interview with a psychiatrist or psychologist,<br /> But this is to underestimate the <a href="https://link.springer.com/article/10.1007%2Fs13347-017-0273-3">aura of scientific authority</a><br /> That comes with contemporary machine learning.</p> <p>What algorithms offer is not just an outreach into daily life,<br /> But the allure of neutrality and objectivity,<br /> That by abstracting phenomena into numbers that can be statistically correlated<br /> In ways that enable machines to imitate humans,<br /> Quantitative methods can be applied to areas that were previously the purview of human judgement.</p> <p>Big data seems to offer sensitive probes of signals beyond human perception,<br /> Of vocal traits that are <a href="https://www.technologyreview.com/s/603200/voice-analysis-tech-could-diagnose-disease/">"not discernable to the human ear"</a>,<br /> Nor <a href="https://www.sri.com/work/publications/speech-based-assessment-ptsd-military-population-using-diverse-feature-classes">'degraded' by relying on self-reporting</a>;<br /> It doesn't seem to matter much that this use of voice<br /> Pushes the possibility of mutual dialogue further away,<br /> Turning the patient's opinions into noise rather than signal.</p> <p>Machinic voice analysis of our mental states<br /> Risks becoming an example of <a href="https://www.youtube.com/watch?v=duNAXfOAvK0">epistemic injustice</a>,<br /> Where an authoritative voice comes to count 
more than our own;<br /> The algorithmic analysis of how someone speaks causing others to "give a deflated level of credibility to a speaker's word".</p> <p>Of course we could appeal to the sensitivity and restraint of those applying the algorithms;<br /> Context is everything when looking at the actual impact of AI,<br /> Knowing whether it is being adopted in situations where existing relations of power<br /> Might indicate the possibility of overreach or arbitrary application.</p> <p>The context of mental health certainly suggests caution,<br /> Given that the very definition of mental health is historically varying;<br /> The asymmetries of power are stark, because treatment can be compulsory and detention is not uncommon,<br /> And the life consequences of being in treatment or missing out on treatment can be severe.</p> <p>Mental health problems can be hugely challenging for everyone involved,<br /> And in the darkest moments of psychosis or mania<br /> People are probably not going to have much to say about how their care should be organised,<br /> But, in between episodes, who is better placed to help shape specific ideas for their care<br /> Than the person who experiences the distress;<br /> They have the <a href="https://www.jstor.org/stable/3178066?seq=1#page_scan_tab_contents">situated knowledge</a>.</p> <p>The danger with all machine learning<br /> Is the introduction of a <a href="http://journals.sagepub.com/doi/full/10.1177/2056305118768303">drone-like distancing from messy subjectivities</a>,<br /> With the risk that this will increase thoughtlessness<br /> Through the outsourcing of elements of judgement to automated and automatising systems.</p> <p>The voice as analysed by machine learning<br /> Will become a <a href="https://foucault.info/documents/foucault.technologiesOfSelf.en/">technology of the self in Foucault's terms</a>,<br /> Producing new subjects of mental health diagnosis and intervention,<br /> Whose voice spectrum is definitive but whose words count for little.</p> <h3>User Voice</h3> <p class="FirstParagraph">The <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-9566.1991.tb00093.x">lack of user voice in mental health services</a> has been a bone of contention since the civil rights struggles of the 1960s,<br /> With the emergence of user networks that put forward alternative views,<br /> Seeking to be heard over the stentorian tones of the psychiatric establishment,<br /> Forming alliances with mental health workers and other advocates;<br /> Groups like <a href="https://en.wikipedia.org/w/index.php?title=Psychiatric_survivors_movement&amp;oldid=844920335">Survivors Speak Out, The Hearing Voices Network, the National Self-Harm Network and Mad Pride</a>.</p> <p>Putting forward demands for new programmes and services,<br /> Proposing strategies such as 'harm minimization' and 'coping with voices',<br /> Making the case for consensual, non-medicalised ways to cope with their experiences,<br /> And forming collective structures such as Patients' Councils.</p> <p>While these developments have been supported by some professionals,<br /> And some user participation has been assimilated as the co-production of services,<br /> The validity of user voice, especially the collective voice, is still precarious within the mental health system<br /> And is undermined by coercive legislation and reductionist biomedical models.</p> <p>The introduction of machinic listening,<br /> That dissects voices into quantifiable snippets,<br /> Will tip the balance of 
this wider apparatus,<br /> Towards further objectification and automaticity,<br /> Especially in this <a href="https://www.rs21.org.uk/2017/07/29/the-politics-of-mental-health/">era of neoliberal austerity</a>.</p> <p>And yet, ironically, it's only the individual and collective voices of users<br /> That can rescue machine learning from talking itself into harmful contradictions;<br /> That can limit its hunger for ever more data in pursuit of its targets,<br /> And save classifications from overshadowing uniquely significant life experiences.</p> <p>Designing for justice and fairness, not just for optimised classifications,<br /> Means discourse and debate have to invade the spaces of data science;<br /> Each layer of the neural networks must be balanced by a layer of deliberation,<br /> Each datafication by caring human attentiveness.</p> <p>If we want the voices of the users to be heard over the hum of the data centres,<br /> They have to be there from the start;<br /> Putting the incommensurability of their experiences<br /> Alongside the generalising abstractions of the algorithms.</p> <p>And asking how, if at all,<br /> The narrow intelligence of large-scale statistical data-processing machines<br /> Could support more <a href="https://www.nelft.nhs.uk/news-events/open-dialogue-national-conference-an-interview-with-dr-russell-razzaque-2818/">Open Dialogue</a>, where speaking and listening aim for shared understanding,<br /> More <a href="https://en.wikipedia.org/w/index.php?title=Soteria_(psychiatric_treatment)&amp;oldid=846452259">Soteria-type houses</a> based on a social model of care,<br /> The development of progressive user-led community mental health services,<br /> And an <a href="https://discoversociety.org/2017/02/01/suicides-linked-to-austerity-from-a-psychocentric-to-a-psychopolitical-autopsy/">end to the cuts</a>.</p> <h3>Computation and Care</h3> <p class="FirstParagraph">As machine learning expands into real world situations,<br /> It turns out that interpretability is one of its biggest challenges;<br /> Even DARPA, the military funder of so much research in speech recognition and AI,<br /> Is panicking that targeting judgements will come without any way to <a href="http://www.federalgrants.com/Explainable-Artificial-Intelligence-XAI-61138.html">interrogate the reasoning behind them</a>.</p> <p>Experiments to figure out how AI image recognition actually works<br /> Probed the contents of intermediary layers in the neural networks<br /> By recursively applying the convolutional filters to their own outputs,<br /> Producing the <a href="http://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html">hallucinatory images of 'Inceptionism'</a>.</p> <p>We are developing AI listening machines that can't explain themselves,<br /> That hear things of significance in their own layers,<br /> Which they can't articulate to the world but that they project outwards as truths;<br /> How would these AI systems fare if diagnosed against DSM-5 criteria?</p> <p>And if objectivity, as some post-Relativity philosophers of science have proposed,<br /> Consists of invariance under transformation,<br /> What happens if we transform the perspective of our voice analysis,<br /> Looking outwards at the system rather than inwards at the person in distress?</p> <p>To ask what our machines might hear in the voices of the psychiatrists who are busy founding startups,<br /> Or in the voices of politicians justifying cuts in services because they paid off the banks,<br /> Or in the voice 
of the nurse who tells someone forcibly detained under the Mental Health Act,<br /> <a href="https://www.shortlist.com/news/what-is-it-like-to-be-sectioned/367733">"This ain't a hotel, love"</a>.</p> <p>It's possible that prediction is not a magic bullet for mental health,<br /> And can't replace places of care staffed by people with time to listen,<br /> In a society where precarity, insecurity and austerity don't fuel generalised distress,<br /> Where everyone's voice is not analysed but heard,<br /> In a context which is collective and democratic.</p> <p>The dramas of the human mind have not been scientifically explained,<br /> And the nature of consciousness still slips the net of neuroscience,<br /> Still less should we restructure the production of truths about the most vulnerable<br /> On computational correlations.</p> <p>The real confusion behind the Confusion Matrix,<br /> That table of machine learning accuracy that includes percentages of <a href="https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/">false positives and negatives</a>,<br /> Is that the impact of AI in society doesn't pivot on the risk of false positives<br /> But on the redrawing of boundaries that we experience as universal fact.</p> <p>The rush towards listening machines tells us a lot about AI,<br /> And the risk of believing it can transform intractable problems<br /> By optimising dissonance out of the system.</p> <p>If human subjectivities are intractably <a href="https://www.routledge.com/ModestWitnessSecondMillennium-FemaleManMeetsOncoMouse-Feminism-and/Haraway-Goodeve/p/book/9781138303416">co-constructed with the tools of their time</a>,<br /> We should ask instead how our new forms of calculative cleverness<br /> Can be stitched into an empathic technics,<br /> That breaks with machine learning as a mode of targeting,<br /> And wreathes computation with ways of caring.</p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-read-on"> <div class="field-label"> 'Read On' Sidebox:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <p><a href="https://opendemocracy.net/hri"><img src="//cdn.opendemocracy.net/files/smallhribanner.jpg" alt="" /></a></p> <p>More from the <a href="https://opendemocracy.net/hri">Human Rights and the Internet</a> partnership.</p> </div> </div> </div> <div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/digitaliberties/dan-mcquillan/rethinking-ai-through-politics-of-1968">Rethinking AI through the politics of 1968</a> </div> <div class="field-item even"> <a href="/dan-mcquillan/manifesto-on-algorithmic-humanitarianism">Manifesto on algorithmic humanitarianism</a> </div> </div> </div> </fieldset> <div class="field field-country"> <div class="field-label"> Country or region:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> UK </div> <div class="field-item even"> United States </div> <div class="field-item odd"> EU </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by 4.0 </div> </div> </div> digitaLiberties digitaLiberties hri EU United States UK Dan McQuillan Mon, 12 Nov 2018 10:44:06 +0000 Dan McQuillan 120537 at https://www.opendemocracy.net “Same story, different soil”: the Deathscapes Project gets under way 
https://www.opendemocracy.net/dean-chan-suvendrini-perera-joseph-pugliese/same-story-different-soil-deathscapes-project-gets-under <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>Hawk Newsome: “It’s the same story, different soil... from Long Bay to the USA. In Sydney, his name is David Dungay. In New York City, his name is Eric Garner.” </p> </div> </div> </div> <p><a href="https://opendemocracy.net/hri"><img alt="HRI" src="//cdn.opendemocracy.net/files/finalbannerhri2_0.jpg" width="460px" /></a></p> <p class="Standard"><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/HarkNewsome.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/HarkNewsome.jpg" alt="lead " title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460"/></a> <span class='image_meta'><span class='image_title'>Hawk Newsome, Black Lives Matter. All rights reserved.</span></span></span></p><p>From Australia’s offshore prison on Manus Island in Papua New Guinea, the author and filmmaker <a href="https://www.theguardian.com/australia-news/2018/aug/02/behrouz-boochani-manus-island-and-the-book-written-one-text-at-a-time">Behrouz Boochani</a>, who has been incarcerated there for five years, sent an <a href="https://www.deathscapes.org/engagements/academics-for-re%C9al-day-of-action/">impassioned appeal</a> to Australian academics to mark a National Day of Action on October 17: </p> <blockquote><p>Definitely Manus and Nauru prison camps are philosophical and political phenomena and we should not view them superficially. The best way to examine them is through deep research into how a human, in this case a refugee, is forced to live between the law and a situation without laws. There are laws that can exile them to an existence where they have recourse to no law. </p><p>&nbsp;</p><p>In this situation, the human is living as something in between a human and another kind of animal. How is the Australian government able to keep two thousand innocent people, especially children, under these conditions in remote prisons for years in an age of revolutions in information technology? How can the government convince Australian society to maintain this policy, when so much damning evidence is available? … </p><p>&nbsp;</p><p>It’s not the first time in modern Australian history that the government is perpetuating this kind of fascist policy. Just remember the Stolen Generations, and what governments have done and still do to First Nations people. The government has now reinvented those barbaric policies at the beginning of the 21st Century, but this time to also inflict them on refugees. </p><p>&nbsp;</p><p>We should ask questions about this again and again, and it’s the duty of academics to do research that unpacks where these policies stem from, why they are maintained and how they can be undone. It’s the duty of academics to understand and challenge this dark historical period, and teach the new generations to prevent this kind of policy in future.</p></blockquote> <p>This message could be a manifesto for the <a href="https://www.deathscapes.org/">Deathscapes</a> project. 
Crossing visual culture, aesthetic politics, critical theory and social justice activism, Deathscapes is a transnational research project that documents racialized deaths in custody across Australia, Canada, the United States and the UK/EU, situating them within the shared contexts and interrelated practices of the settler state as they are embedded within contemporary global structures. </p><p>Through the inclusion of the UK and EU, as historical points of origin for settler colonialism, the project traces the continuing processes of racialization in these places, and what Nicholas De Genova describes as the often disavowed “brute <i>racial</i> fact” of the current European border regime.</p><p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/MrWardScreenshot_0.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/MrWardScreenshot_0.jpg" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460"/></a> <span class='image_meta'></span></span><a href="https://www.deathscapes.org/">Deathscapes</a> connects past and present practices through which states have sought to render selected subjects “something in between a human and another kind of animal,” enabling in turn the deaths of these subjects in the custody of state agencies: deaths at the hands of police, deaths in the presence of coast guards and border patrols, deaths in prisons, holding cells and immigration detention centres.&nbsp; </p><h2><b>Sovereignty, the border and the settler state</b></h2> <p>The delineation of the border is central to the project of the settler colonial state, as it overrides the borders of pre-existing nations and assumes the right to determine who may or may not enter the new territorial entity of the settler nation. As <a href="https://www.tandfonline.com/doi/full/10.1080/14623520601056240">Patrick Wolfe</a> argues, the eliminatory logic of settler colonialism, with its aspiration to expunge the presence of the Indigene from the land and to assert its own usurping sovereignty in its place, is predicated on overrunning existing borders and establishing new ones in their place. </p> <p>Redrawing the borders secures settler colonial ownership of the new national entity in space as well as time, and assumes the power to confer or withhold citizen status within it. </p> <p>Indigenous peoples, displaced, dispossessed and stripped of national status, themselves become refugees on their own land, losing, among other crucial rights of sovereignty, the right to offer hospitality within their borders. </p> <p>In connecting Indigenous deaths and&nbsp;other&nbsp;racialized deaths, such as those of refugees and migrants within settler states, the Deathscapes project does not collapse the differences between these groups, but rather aims to make visible the shared strategies, policies, practices and rationales of state violence deployed in the management of these separate racialized categories. 
</p> <p>One of these shared practices, as already indicated, is the assertion of sovereignty as the sole prerogative of the state, along with the accompanying prerogative to control the movement of bodies within those borders.</p> <h2><b>Agents of trafficking</b></h2> <p>Following this logic, <a href="http://open.mitchellhamline.edu/facsch/157">Muscogee scholar Sarah Deer</a> has argued that colonialism in the Americas has long relied on the trafficking of Indigenous people across borders to establish and secure settler dominance over the land. Considered as a practice internal to settler states such as Canada and the US, rather than one through which the global south attempts to infiltrate the global north, trafficking becomes visible as an act of territorial violence against Indigenous women, in particular. </p> <p>Understood within the framework of settler colonialism, the states of Australia, Canada and the US are revealed as themselves agents of trafficking rather than helpless bystanders or enlightened enforcers of international law. <a href="https://www.gaatw.org/publications/Alliance%20News/Alliance_News_July_2010.pdf">Kwakwaka’wakw scholar Sarah Hunt</a> invokes this logic when she asks, “If human trafficking is about forced movement, exploitation, and the misuse of power in controlling the bodies of marginalised people, who has control over the movement, labour and bodies of Indigenous girls and women in Canada?” The settler state’s “misuse of power in controlling the bodies of marginalised people” at the same time offers a framework for understanding the Australian government’s contemporary practices of forcibly moving refugee and asylum seeker children and families to offshore island prisons. </p><p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/CS_DontDeportToDanger2018.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/CS_DontDeportToDanger2018.jpg" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460"/></a> <span class='image_meta'></span></span>Attending to strategies of sovereign territoriality thus makes it possible to connect forms of violence directed against refugees and Indigenous groups in new ways, for example by tracking interrelations of historical and current practices of displacement and enforced mass movement, of transportation and deportation. Even as the project marks the historical differences that distinguish practices of displacement and enforced mass movement, it also traces the lines of connection that interlink these same practices. </p><p>So, for example, the iconographies of the Middle Passage find their returns in today’s desperate and brutalizing voyages from African coasts. 
The separation of children from their parents in residential schools in Canada and missions in Australia finds its echoes in the current US policy of separating migrant children from their parents at the border; in the context of the latter, <a href="https://www.newyorker.com/news/daily-comment/juneteenth-and-the-detention-of-children-in-texas">African American scholar Jelani Cobb</a> reminded us in 2018 that “the separation of families has deep roots in the American past” and that the separation and sale of children “was such a common feature of slavery”.</p> <h2><b>A central resource</b></h2> <p>The relationality and connectivity between and across these histories and practices are key to the design of the <a href="https://www.deathscapes.org/">Deathscapes</a> website. The site documents selected case studies of where deaths and violations happen, as well as providing the social and critical tools to examine how these deaths are understood and responded to. </p> <p>As a resource that is both archival and analytical, and for use by multiple publics, the website is aimed at addressing a challenge often faced by researchers studying shared circuits of knowledge and modes of governance: the difficulty of accessing information about state violence through a central resource. The transnational focus and methodology adopted by the project invites website visitors to identify key continuities that inscribe racialized deathscapes across diverse locations, and to connect the differential dimensions that might appear to be confined to an individual nation-state, beyond and across the individual stories.</p> <p>The Deathscapes site visually links deaths in custody across the different states, layering images and text and developing key categories that operate across the case studies. The cases currently on the site (still in progress) include the death of Indigenous elder <a href="https://www.deathscapes.org/case-studies/the-road-passage-through-the-deathscape/">Mr Ward</a> inside a prison van operated by G4S as he was transported across the Western Australian desert; the death of <a href="https://www.deathscapes.org/case-studies/trauma-on-the-body/">Anastasio Hernández Rojas</a> who was beaten and tasered to death by US border agents as he was being deported to Mexico; and the death of <a href="https://www.deathscapes.org/case-studies/jimmy-mubenga-case-study/">Jimmy Mubenga</a> in the custody of G4S guards as he was being deported from the UK. </p> <p>The case studies are visually as well as analytically linked. The layered use of images and text presents a multi-dimensioned analysis that aims to connect the deaths transnationally to one another, while also linking back in time to the colonial genealogies of current practices. In the case of Mr Ward, for example, his removal from his land can be situated against a long history of Indigenous people being transported for long distances in manacles and chains for incarceration in offshore prisons such as Rottnest Island, off the Western Australian coastline. 
</p> <p>Two separate sections, <a href="https://www.deathscapes.org/inspiration/">Inspirations</a> and <a href="https://www.deathscapes.org/galleries/">Galleries</a>, present a snapshot of the transnational and multi-dimensioned underpinnings of the project, drawing on performance, poetry and visual art as well as critical theory and activist manifestos, while <a href="https://www.deathscapes.org/engagements/">Engagements</a> encompasses a range of activities, including publications, talks and <a href="https://www.deathscapes.org/engagements/dispatches-fazel-chegeni-nejad/">dispatches</a>, where team members present immediate reports from unfolding inquests. These case studies, dispatches from inquests in progress, and activist art projects all work together to make connections across technologies of punishment and criminalization, and sites of incarceration and punishment. </p><p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/CS_Projections_March4_0.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/CS_Projections_March4_0.jpg" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460"/></a> <span class='image_meta'></span></span>The latter are often operated by the same private contractors, as in the case of domestic and ICE prisons in the US; or in Australia, in “on-shore” and “offshore” prisons. In Australia the former are populated disproportionately by Aboriginal prisoners, whereas the latter are designed to hold asylum seekers who arrived by boat. &nbsp; </p><h2><b>A site of resistance</b></h2> <p>In seeking to connect practices and places of state violence, the Deathscapes website is also concerned to foreground transnational practices of resistance to the systematic targeting and punishment of racialized peoples: Indigenous, Black, refugee and migrant groups. Deathscapes situates itself within these transnational practices of resistance and works to underscore the critical significance of establishing solidarity movements whose aim is to expose and, ultimately, <i>to end</i> regimes of racialized punishment. </p> <p>On a recent visit to Australia, Hawk Newsome, a leader of the Black Lives Matter movement, <a href="https://www.smh.com.au/national/nsw/david-dungay-screamed-for-help-said-he-couldn-t-breathe-minutes-before-death-inquest-told-20180716-p4zrs3.html">spoke</a> at the inquest for the Indigenous Dunghutti man David Dungay, whose death in Long Bay prison was accompanied by cries of “I can’t breathe.” Standing with Mr Dungay’s family and supporters in a powerful show of solidarity, Hawk Newsome underscored the transnational racial violence that continues to kill people in both Australia and the US: “It’s the same story, different soil. It’s the same thing from Long Bay to the USA. In Sydney, his name is David Dungay. In New York City, his name is Eric Garner. Eric Garner called for his life 11 times. David Dungay called for his life 12 times. These eerie similarities cannot go ignored.” </p> <p>The refusal to ignore the chilling similarities in racialized custodial deaths across states animates the Deathscapes project: same story, different soil. 
</p> <p><i>The London Launch of the Deathscapes project takes place on Thursday, 8<sup>th</sup> November 2018, 6-9 pm, at Goldsmiths, University of London. More information and where to register (free of charge) for this event is <a href="https://www.gold.ac.uk/calendar/?id=11779">available here</a>. The event is being live-streamed and recorded. </i></p> <p>&nbsp;</p> <p>WORKS CITED</p> <p>Cobb, Jelani.&nbsp; “<a href="https://www.newyorker.com/news/daily-comment/juneteenth-and-the-detention-of-children-in-texas">Juneteenth and the Detention of Children in Texas</a>,” <i>The New Yorker</i> June 19, 2018. </p> <p>De Genova, Nicholas. “The ‘Migrant Crisis’ as Racial Crisis: Do Black Lives Matter in Europe?,” <i>Ethnic and Racial Studies</i> 41, 10 (2018): 1765-1782. </p> <p>Deer, Sarah. “<a href="http://open.mitchellhamline.edu/facsch/157">Relocation Revisited: Sex Trafficking of Native Women in the United States</a>,” <i>William Mitchell Law Review</i> 821 (2010). </p> <p>Hunt, Sarah. “<a href="https://www.gaatw.org/publications/Alliance%20News/Alliance_News_July_2010.pdf">Colonial Roots, Contemporary Risk Factors</a>: a cautionary exploration of the domestic trafficking of Aboriginal women and girls in British Columbia, Canada,” <i>Alliance News</i> 33 (July 2010), 27-31. </p> <p>Wolfe, Patrick. “Settler colonialism and the elimination of the native,” <i>Journal of Genocide Research</i>, 8:4 (2006): 387-409.</p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-read-on"> <div class="field-label"> 'Read On' Sidebox:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <p><em>The London Launch of the Deathscapes project takes place on Thursday, November 8, 2018, 6-9 pm, at Goldsmiths, University of London. More information and where to register (free of charge) for this event is <a href="https://www.gold.ac.uk/calendar/?id=11779">available here</a>. The event is being live-streamed and recorded. </em></p> </div> </div> </div> <div class="field field-sidebox"> <div class="field-label"> Sidebox:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <p><a href="https://opendemocracy.net/hri"><img src="//cdn.opendemocracy.net/files/smallhribanner.jpg" alt="" /></a></p> <p>More from the <a href="https://opendemocracy.net/hri">Human Rights and the Internet</a> partnership.</p> </div> </div> </div> </fieldset> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by 4.0 </div> </div> </div> digitaLiberties hri Can Europe make it? Joseph Pugliese Suvendrini Perera Dean Chan Sun, 04 Nov 2018 10:37:09 +0000 Dean Chan, Suvendrini Perera and Joseph Pugliese 120442 at https://www.opendemocracy.net Mark Zuckerberg’s dilemma – what to do with the monster he has created? https://www.opendemocracy.net/uk/john-naughton/mark-zuckerberg-s-dilemma-what-to-do-with-monster-he-has-created <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>Facebook seems surprised that its monopolistic platform has been weaponised by political actors. 
So who's going to tackle the perfect storm, asks John Naughton.</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/549093/zuke 2.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/549093/zuke 2.jpg" alt="" title="" width="460" height="307" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style="" /></a> <span class='image_meta'><span class='image_title'>Image: Facebook boss and founder Mark Zuckerberg. Credit: NurPhoto/PA Images.</span></span></span></p><p>Ponder this: in 2004 a Harvard sophomore named Zuckerberg sits in his dorm room hammering away at a computer keyboard. He’s taking an idea he ‘borrowed’ from two nice-but-dim Harvard undergraduates and writing the computer code needed to turn it into a social-networking site. He borrows $1,000 from his friend Eduardo Saverin and puts the site onto an internet web-hosting service. He calls it ‘The Facebook’.</p> <p>Fourteen years later, that kid has metamorphosed into the 21st-century embodiment of John D Rockefeller and William Randolph Hearst rolled into one. In the early 20th century, Rockefeller controlled the flow of oil while Hearst controlled the flow of information. In the 21st century Zuckerberg controls the flow of the new oil (data) and the information (because people get much of their news from the platform that he controls). His empire spans more than 2.2bn people, and he exercises absolute control over it — as a passage in the company’s 10-K SEC filing makes clear. It reads, in part:</p> <p class="BlockQuote">“Mark Zuckerberg, our founder, Chairman, and CEO, is able to exercise voting rights with respect to a majority of the voting power of our outstanding capital stock and therefore has the ability to control the outcome of matters submitted to our stockholders for approval, including the election of directors and any merger, consolidation, or sale of all or substantially all of our assets. This concentrated control could delay, defer, or prevent a change of control, merger, consolidation, or sale of all or substantially all of our assets that our other stockholders support, or conversely this concentrated control could result in the consummation of such a transaction that our other stockholders do not support…” (1).</p> <p>Such concentration of corporate control is unusual in large public corporations (2) and raises questions about corporate governance and social responsibility. These problems are particularly acute in Zuckerberg’s empire because he wields not just the economic power of a monopolist (in the field of social networking, Facebook has no serious competitor) but also operates a business model that affects democratic processes and may even have influenced the outcomes of the 2016 Brexit referendum in the UK and presidential elections in the US and elsewhere. </p> <p>This is not to assert that Zuckerberg himself pursues political objectives, at least at the moment. 
Rather, the claim is (a) that the computerised, targeted-advertising system Facebook has constructed can be (and has been) exploited by political actors seeking to target political or ideological messages at users whose data-profiles suggest that they may be receptive to those messages; and (b) that the outcomes of such ‘weaponisation’ of social media may be at best anti-social and at worst anti-democratic.</p> <p>The most striking thing about the discovery of political exploitation of social media in 2016-17 was the initial incredulity of Zuckerberg and his executives that such a thing could have happened. This suggests some or all of the following: a very high degree of political naiveté; a serious case of wilful blindness; and/or a cynical determination to avoid public discussion of the root cause of the trouble — the business model of the corporation and the responsibilities that accompany the power that it confers upon its owner.</p> <h2><strong>The business model: surveillance capitalism</strong></h2> <p>The five most valuable corporations in the Western world at present — Apple, Alphabet (owner of Google), Amazon, Microsoft and Facebook — are all digital enterprises. Three of them — Apple, Amazon and Microsoft — have relatively conventional business models: they produce goods and/or services for which customers pay. The other two — Google and Facebook — provide services that are free in return for the right to extract and monetise the personal information and data-trails of their users. The data thus extracted, refined and aggregated are then deployed to enable advertisers — the actual <em>customers</em> of the companies — to target advertisements at users. This is often summarised in the adage ‘if the service is free, then you are the product’.</p> <p>Google and Facebook operate what economists call two-sided markets: in their case, revenue from customers on one side (advertisers) subsidises users on the other side. In recent years, the term <em>surveillance capitalism (3)</em> has been coined to describe this business model. Although Google and Facebook portray themselves as tech companies, it’s sometimes more illuminating to regard them as extractive enterprises like oil or mining companies. The latter extract natural resources from the earth, which they then refine, process and sell to customers. Facebook and Google do something analogous, but the resources they extract, refine and monetise are purely digital — the data-trails generated by their users’ activities on their platforms. </p> <p>There is, however, one radical difference between the oil/mining enterprises and the two digital giants. Whereas reserves of natural resources are, ultimately, finite (4), reserves of the ‘resources’ extracted by Google and Facebook are, in principle, virtually infinite because they are created by what the industry calls user engagement — i.e. users’ online activity. The level and volume of this engagement is staggering; every 60 seconds on Facebook, for example, 510,000 comments are posted, 293,000 statuses are updated, and 136,000 photos are uploaded (5). </p> <p>Since user engagement is what produces monetisable data-trails, the overriding imperative of the business model is to continually increase engagement. Accordingly, the companies deploy formidable technical and design resources to persuade users to spend more time on their platforms and to engage with them more intensively (6).</p>
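<p>One standard tool for this optimisation is the A/B experiment, of which, as the next paragraph notes, the companies run thousands a day. Here is a minimal sketch of such an experiment in Python; the experiment name, metric and session data are hypothetical, not Facebook's actual tooling.</p> <pre><code>
import hashlib
from statistics import mean

def assign_arm(user_id, experiment="autoplay_default_on"):
    # Deterministic split: hash each user into arm A or B.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Hypothetical engagement log: (user_id, minutes on platform today).
sessions = [("u1", 34.0), ("u2", 51.5), ("u3", 12.0),
            ("u4", 48.0), ("u5", 29.5), ("u6", 40.0)]

by_arm = {"A": [], "B": []}
for user_id, minutes in sessions:
    by_arm[assign_arm(user_id)].append(minutes)

# Ship whichever presentational tweak kept people engaged longer.
for arm in ("A", "B"):
    if by_arm[arm]:
        print(arm, round(mean(by_arm[arm]), 1))
</code></pre>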
<p>Much of this supply-side design is informed by applied psychological research on human behaviour — the same kind of research that informs the design of slot machines (7). Some of the services are addictive by design (8) while others exploit known human fallibilities (9) (for example, by using default settings like autoplay on videos — which users can change but generally don’t (10)). And the companies continually conduct thousands of A/B experiments a day in real time to determine which presentational tweaks most effectively increase user engagement. In a metaphorical sense, therefore, users of social media are unwitting rats in Skinnerian mazes created for their delectation. This is what leads some commentators to speak of social media as a ‘dopamine economy’ (11).</p> <p>On the demand side, human psychology and sociality play important roles in keeping the machine humming. Humans are famously subject to a wide range of <em>cognitive biases</em> (12), which social media exploit. Well-known examples include <em>confirmation bias</em> (the tendency to search for, interpret, focus on and remember information in a way that confirms one’s preconceptions) and <em>hyperbolic discounting</em> (the tendency to have a stronger preference for more immediate payoffs relative to later payoffs). On the sociality side there is <em>homophily</em> — the tendency of individuals to associate and bond with similar others.</p> <p>What the world has belatedly woken up to is the realisation that we are in a kind of perfect storm created by the confluence of a number of powerful forces: network effects which lead to the emergence of global monopolies; the business model of surveillance capitalism, with its insatiable demands for increased user engagement; astute deployment of applied psychology to design compulsive or addictive apps, devices and services; cognitive biases which are part of human psychology; powerful tendencies to cluster together (probably an inheritance from early human social groups) which lead to digital echo chambers online; and the weaponisation of social media by political actors.</p> <h2><strong>User-generated content: a double-edged sword</strong></h2> <p>When the internet first went mainstream in the mid-1990s it was hailed as a democratising technology that would liberate people’s innate creativity. Instead of being passive consumers of content created by corporations, ordinary people would be able to bypass the editorial gatekeepers of the analogue media ecosystem. These possibilities of the ‘internetworked’ future were memorably celebrated in <em>The Wealth of Networks</em>, a landmark book by the Harvard scholar Yochai Benkler published in 2006 (13).</p> <p>Although the technology did (and does) possess all the empowering, democratising potential celebrated by Benkler, in fact only a relatively small minority made use of it in creative ways. This changed with the arrival of Facebook and YouTube — services that made it easy for users to upload content. Much of the resulting content was unremarkable (and much of it infringed copyright), but from early on it was clear that social media platforms effectively provided a mirror to human nature, and some of what appeared in that mirror was unpleasant and sometimes horrific (14). 
Furthermore, it transpired that some of this extreme or otherwise problematic content increased user engagement — which meant that it generated more monetisable data for the platforms hosting it.</p> <p>Initially, Facebook’s response to this was relaxed: the onus was placed on users to flag unacceptable or problematic content for possible review by the platform’s owners. This casual attitude was reinforced by Section 230 of the 1996 US Communications Decency Act, which absolved internet service providers of legal liability for content hosted on their platforms (15). But a series of developments — including the controversies about the weaponisation of social media by political actors in 2016 and 2017, revelations of the role that Facebook services had played in ethnic cleansing in Myanmar and Sri Lanka, the company’s failure to remove hate speech and conspiracy theories, and a raft of other scandals (16) — had made this relaxed posture untenable by 2018, and the company was struggling, with questionable efficacy, to contain the abuses that followed from running a platform that enabled anyone to publish whatever they wanted whenever they wanted.</p> <h2><strong>A window into people’s souls</strong></h2> <p>The surveillance capitalism companies have become very good at giving users what they want — which is one reason why some ruefully admit that they find them addictive. They are able to do this because they have garnered an astonishing amount of revealing data about those users and their likely interests, concerns and needs.</p> <p>As far as Facebook is concerned, the key insight was the discovery in 2013 of how revealing even a low level of user engagement can be. Cambridge University researchers demonstrated something that the company probably already knew, namely that Facebook likes could be used to “automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender” (17). (A toy version of this kind of classifier is sketched below.)</p> <p>Insights available from a user’s behaviour on the site were eventually supplemented by (a) information gleaned from tracking Facebook’s users as they traversed the wider web and (b) data about users purchased from external sources (e.g. credit-rating agencies) to construct data-profiles which reportedly (18) ran to 98 data-points per user.</p> <p>In 2007, Facebook made a significant innovation that would later have major implications both for its evolution and for its role in democratic disruption. The company suddenly offered <em>itself</em> as a platform on which third-party developers could run apps. “People should build an application on the Facebook platform because it provides a new kind of distribution on the internet,” said a senior company executive at the time. “Really, what has been lacking in all of the other operating systems and platforms that have ever been created is the ability to really access people (19).”</p> <p>But whereas the World Wide Web platform was open and uncontrolled, the Facebook platform was proprietary and controlled by the company. The strategic goal for the platform move was to expand Facebook’s global reach and penetration to the point where it — rather than the open web — would effectively become the internet as far as most people were concerned.</p>
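<p>To make the Kosinski finding quoted above concrete, here is a purely illustrative sketch of the general technique: a linear classifier trained on like-vectors. Everything in it is invented (the data, the correlated pages, the trait); the published study used about 58,000 real volunteers and a more elaborate pipeline.</p>
<pre><code>
# Illustrative sketch only: predicting a sensitive attribute from
# Facebook-style likes. All data are invented; the actual study used
# dimensionality reduction before regression on real like data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_pages = 200, 50
likes = rng.integers(0, 2, size=(n_users, n_pages))  # 1 = user liked page

# Pretend the hidden trait correlates with liking the first five pages.
trait = (likes[:, :5].sum(axis=1) + rng.normal(0, 1, n_users) > 2).astype(int)

model = LogisticRegression(max_iter=1000)
model.fit(likes[:150], trait[:150])               # train on 150 users
print("held-out accuracy:", model.score(likes[150:], trait[150:]))
# The point of the study: no questionnaire is needed -- the like-vector
# alone carries enough signal to predict attributes users never stated.
</code></pre>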
<p>From the point of view of developers, the attraction was that their apps could exploit the user data that Facebook had accumulated. It was this decision to allow third-party apps to run on its platform, coupled with a failure to adequately police what those apps were doing with user data, that eventually led to the Cambridge Analytica scandal (20) in 2018.</p> <h2><strong>The targeting engine</strong></h2> <p>As we observed, Google’s and Facebook’s users are not their customers. That role is played by advertisers, who use the automated engines developed by the companies to identify targets for their commercial messages. Consequently, the most revealing insights into how surveillance capitalism works are obtained not by being a user but by going in as a customer, i.e. an advertiser.</p> <p>Both companies have constructed automated engines that enable advertisers to identify the types of audiences they wish their messages to reach. In operation and design, these engines are impressive. The Facebook one is particularly user-friendly (21), gently nudging the customer through the various steps needed to identify what the company calls custom audiences and helpfully suggesting categories of user that one may not initially have thought about. As one critic put it:</p> <p class="BlockQuote">“If I want to reach women between the ages of 25 and 30 in zip code 37206 who like country music and drink bourbon, Facebook can do that. Moreover, Facebook can often get friends of these women to post a ‘sponsored story’ on a targeted consumer’s news feed, so it doesn’t feel like an ad.” (22)</p> <p>But one doesn’t have to be a traditional firm doing commercial advertising to use the Facebook engine. The machine is at the disposal of anyone who wishes to direct almost any message at targeted audiences. What seems to have taken Facebook by surprise is that some of the entities that chose to use its system — including at least one foreign power — were in the business of sending not commercial but ideological or political messages to selected categories of users (23).</p> <p>And it looks as though the use of social media is a highly cost-effective way of doing this. According to the <em>New York Times</em>, Russian agents intending to sow discord among American citizens disseminated inflammatory posts that reached 126m users on Facebook, published more than 131,000 messages on Twitter and uploaded more than 1,000 videos to YouTube (24).</p> <p>A striking demonstration of the effectiveness of the Facebook targeting engine was provided by an experiment conducted by the news website ProPublica in September 2017. The researchers paid the company $30 to place three promoted posts in the news feeds of Facebook users who — according to the service’s profiles of them — had expressed interest in the topics ‘Jew hater’, ‘How to burn jews’, or ‘History of why jews ruin the world’. 
The Facebook engine approved all three ads within 15 minutes (25).</p> <p><em>Image: screenshot of the ad-targeting categories surfaced during the ProPublica experiment.</em></p> <h2><strong>Zuckerberg’s monster</strong></h2> <p>Facebook is Zuckerberg’s monster. Unlike Frankenstein, he is still enamoured of his creation, which has made him richer than Croesus and the undisputed ruler of an empire of 2.2bn users. It has also given him a great deal of power, together with the responsibilities that accompany it. But it’s becoming increasingly clear that his creature is out of control, that he’s uneasy about the power, and that he has few good ideas about how to discharge his responsibilities.</p> <p>A good illustration of this was provided by a revealing interview (26) that the Facebook boss gave to the tech journalist Kara Swisher in the summer of 2018. The conversation covered a lot of ground but included a couple of exchanges which spoke volumes about Zuckerberg’s inability to grasp the scale of the problems that his creature now poses for society.</p> <p>One of them – obviously – is misinformation or false news. “The approach that we’ve taken to false news”, said Zuckerberg, “is not to say, you can’t say something wrong on the internet. I think that that would be too extreme. Everyone gets things wrong, and if we were taking down people’s accounts when they got a few things wrong, then that would be a hard world for giving people a voice and saying that you care about that.”</p> <p>Swisher then asked him about the alt-right site Infowars – whose Facebook page had more than 900,000 followers and which regularly broadcast falsehoods and conspiracy theories, including a claim that the Sandy Hook mass shooting (27) never happened. But Infowars continued to thrive on Facebook, even though Zuckerberg agreed that the Sandy Hook story was false. Was this because “everyone gets things wrong”, or because of those 900,000 followers? Swisher didn’t ask, but a Channel 4 undercover investigation (28) of the Irish firm to which Facebook had outsourced content moderation suggested that objectionable content on Facebook pages with large followings could not be deleted by the traumatised serfs in Dublin; instead such decisions had to be referred up the management chain.</p> <p>The most revealing part of the Swisher interview, however, concerned Holocaust denial – a topic that Zuckerberg himself brought up. “I’m Jewish,” he said, “and there’s a set of people who deny that the Holocaust happened. I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong, but I think it’s hard to impugn intent and to understand the intent.”</p> <p>If you think this is weird, then join the club. I can see only three explanations for it. 
One is that Zuckerberg is a sociopath who wants to have as much content – objectionable or banal – available to maximise user engagement (and therefore revenues), regardless of the societal consequences. A second is that Facebook is now so large that he sees himself as a kind of governor with quasi-constitutional responsibilities for protecting free speech. This is delusional: Facebook is a company, not a democracy. Or thirdly, and most probably, he is scared of being accused of bias in the polarised hysteria that now grips American (and indeed British) politics.</p> <p>It’s as if he’s suddenly become aware of the power that his monster has bestowed upon him. As the New York Times journalist Kevin Roose put it on The Daily podcast (29), Zuckerberg’s increasingly erratic behaviour could be a symptom of something bigger. “He built a company that swallowed communication and media for much of the world,” observed Roose. “And now we’re seeing him back away from that... The problem with ruling the world is that you then have to govern, and that’s not what it seems he wants to do.” In which case, who will?</p><p><em>This is an edited version of a chapter from the new book Anti-Social Media: The Impact on Journalism and Society, edited by John Mair, Tor Clark, Neil Fowler, Raymond Snoddy and Richard Tait, available from Abramis priced £19.95. Email <a href="mailto:Richard@Abramis.co.uk">Richard@Abramis.co.uk</a> to order a copy.</em></p> <h2>Notes</h2> <ol><li><a href="https://d18rn0p25nwr6d.cloudfront.net/CIK-0001326801/80a179c9-2dea-49a7-a710-2f3e0f45663a.pdf">https://d18rn0p25nwr6d.cloudfront.net/CIK-0001326801/80a179c9-2dea-49a7-a710-2f3e0f45663a.pdf</a>. The relevant passage continues: “In addition, Mr. Zuckerberg has the ability to control the management and major strategic investments of our company as a result of his position as our CEO and his ability to control the election or replacement of our directors. … As a stockholder, even a controlling stockholder, Mr. Zuckerberg is entitled to vote his shares, and shares over which he has voting control as governed by a voting agreement, in his own interests, which may not always be in the interests of our stockholders generally.”</li><li>Though not unknown in Silicon Valley, where charismatic founders use multi-tier shareholding arrangements to ensure that they retain overall control of their creations. This was the case with Google (now Alphabet), for example, and is motivated at least partly by the desire to insulate founders from the short-term pressures of Wall Street and enable them to take longer-term strategic views of their enterprises.</li><li>Shoshana Zuboff, “The Secrets of Surveillance Capitalism”, <em>Frankfurter Allgemeine Zeitung</em>, 5 March, 2016. <a href="http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html">http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html</a></li><li>Pedantic point: the key word is ‘ultimately’. The level of available reserves of natural resources is a function of market price, location and other factors. Thus the level of oil reserves depends on global oil prices. 
If the price is high, then it will be economically feasible to extract oil from fields which are relatively harder to work.</li><li><a href="https://zephoria.com/top-15-valuable-facebook-statistics/">https://zephoria.com/top-15-valuable-facebook-statistics/</a></li><li>Current estimates put the time the average Facebook user spends on the platform at 20 minutes per day (<a href="https://zephoria.com/top-15-valuable-facebook-statistics/">https://zephoria.com/top-15-valuable-facebook-statistics/</a>). Some estimates are higher.</li><li>Natasha Dow Schüll, <em>Addiction by Design: Machine Gambling in Las Vegas</em>, Princeton, 2012.</li><li>See Nir Eyal, <em>Hooked: How to Build Habit-Forming Products</em>, Penguin/Portfolio, 2014; and Hilary Andersson, “Social media apps are ‘deliberately’ addictive to users”, BBC News, 4 July, 2018. <a href="https://www.bbc.co.uk/news/technology-44640959">https://www.bbc.co.uk/news/technology-44640959</a></li><li>Mallory Locklear, “Sean Parker says Facebook ‘exploits’ human psychology”, <em>Engadget</em>, 9 November, 2017. <a href="https://www.engadget.com/2017/11/09/sean-parker-facebook-exploits-human-psychology/">https://www.engadget.com/2017/11/09/sean-parker-facebook-exploits-human-psychology/</a></li><li>John Naughton, “More choice on privacy just means more chances to do what’s best for big tech”, <em>Observer</em>, 8 July, 2018. <a href="https://www.theguardian.com/commentisfree/2018/jul/08/more-choice-privacy-gdpr-facebook-google-microsoft">https://www.theguardian.com/commentisfree/2018/jul/08/more-choice-privacy-gdpr-facebook-google-microsoft</a></li><li><a href="https://eand.co/the-dopamine-economy-336b239272ef">https://eand.co/the-dopamine-economy-336b239272ef</a></li><li><a href="https://en.wikipedia.org/wiki/List_of_cognitive_biases">https://en.wikipedia.org/wiki/List_of_cognitive_biases</a></li><li><a href="http://benkler.org/Benkler_Wealth_Of_Networks.pdf">http://benkler.org/Benkler_Wealth_Of_Networks.pdf</a></li><li>John Naughton, “How Facebook became a home to psychopaths”, <em>Observer</em>, 23 April, 2017. <a href="https://www.theguardian.com/commentisfree/2017/apr/23/how-facebook-became-home-to-psychopaths-facebook-live">https://www.theguardian.com/commentisfree/2017/apr/23/how-facebook-became-home-to-psychopaths-facebook-live</a></li><li>John Naughton, “How two congressmen created the internet’s biggest names”, <em>Observer</em>, 8 January, 2017. <a href="https://www.theguardian.com/commentisfree/2017/jan/08/how-two-congressmen-created-the-internets-biggest-names">https://www.theguardian.com/commentisfree/2017/jan/08/how-two-congressmen-created-the-internets-biggest-names</a></li><li>For example the use of Facebook Live to stream horrific acts of violence, bullying and worse. See note 14.</li><li>Michal Kosinski, David Stillwell, and Thore Graepel, “Private traits and attributes are predictable from digital records of human behavior”, <em>PNAS</em>, April 9, 2013, 
110 (15): 5802–5805. <a href="https://doi.org/10.1073/pnas.1218772110">https://doi.org/10.1073/pnas.1218772110</a></li><li><a href="https://www.washingtonpost.com/news/the-intersect/wp/2016/08/19/98-personal-data-points-that-facebook-uses-to-target-ads-to-you/?tid=sm_tw">https://www.washingtonpost.com/news/the-intersect/wp/2016/08/19/98-personal-data-points-that-facebook-uses-to-target-ads-to-you/?tid=sm_tw</a></li><li><a href="https://betanews.com/2007/05/25/facebook-becomes-a-software-company-with-platform-rollout/">https://betanews.com/2007/05/25/facebook-becomes-a-software-company-with-platform-rollout/</a></li><li><a href="https://www.theguardian.com/news/series/cambridge-analytica-files">https://www.theguardian.com/news/series/cambridge-analytica-files</a>. See also Kevin Roose, “How Facebook’s Data Sharing Went From Feature to Bug”, <em>New York Times</em>, 19 March, 2018. <a href="https://www.nytimes.com/2018/03/19/technology/facebook-data-sharing.html">https://www.nytimes.com/2018/03/19/technology/facebook-data-sharing.html</a></li><li><a href="https://www.facebook.com/business/a/custom-audiences">https://www.facebook.com/business/a/custom-audiences</a></li><li>Jonathan Taplin, <em>Move Fast and Break Things: How Facebook, Google and Amazon Have Cornered Culture, and What It Means for All of Us</em>, Macmillan, 2017, p.143.</li><li>Dipayan Ghosh and Ben Scott, “Russia’s Election Interference Is Digital Marketing 101”, <em>The Atlantic</em>, 19 February, 2018. <a href="https://www.theatlantic.com/international/archive/2018/02/russia-trump-election-facebook-twitter-advertising/553676/">https://www.theatlantic.com/international/archive/2018/02/russia-trump-election-facebook-twitter-advertising/553676/</a></li><li>Mike Isaac and Daisuke Wakabayashi, “Russian Influence Reached 126 Million Through Facebook Alone”, <em>New York Times</em>, 30 October, 2017. <a href="https://www.nytimes.com/2017/10/30/technology/facebook-google-russia.html">https://www.nytimes.com/2017/10/30/technology/facebook-google-russia.html</a></li><li>Julia Angwin, Madeleine Varner and Ariana Tobin, “Facebook Enabled Advertisers to Reach ‘Jew Haters’”, <em>ProPublica</em>, 14 September, 2017. 
<a href="https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters">https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters</a></li><li><a href="https://www.recode.net/2018/7/18/17575156/mark-zuckerberg-interview-facebook-recode-kara-swisher">https://www.recode.net/2018/7/18/17575156/mark-zuckerberg-interview-facebook-recode-kara-swisher</a></li><li><a href="https://en.wikipedia.org/wiki/Sandy_Hook_Elementary_School_shooting">https://en.wikipedia.org/wiki/Sandy_Hook_Elementary_School_shooting</a></li><li><a href="http://uk.businessinsider.com/channel-4-finds-facebook-not-deleting-child-abuse-and-racism-2018-7">http://uk.businessinsider.com/channel-4-finds-facebook-not-deleting-child-abuse-and-racism-2018-7</a></li><li><a href="https://www.nytimes.com/2018/07/20/podcasts/the-daily/facebook-mark-zuckerberg-misinformation.html">https://www.nytimes.com/2018/07/20/podcasts/the-daily/facebook-mark-zuckerberg-misinformation.html</a></li></ol>
The politics of artificial intelligence: an interview with Louise Amoore https://www.opendemocracy.net/digitaliberties/krystian-woznicki-louise-amoore/politics-of-artificial-intelligence-interview-with-l <p>Artificial intelligence, in settings as diverse as politics, commerce, policing and warfare, amplifies longstanding prejudices circumscribing access to the political public sphere, changing our relations to ourselves and others.</p> <p><em>Image: Shutterstock/Ollyy. All rights reserved.</em></p><p><em><strong>Krystian Woznicki (KW)</strong></em><em>: ‘Rethinking political agency in an AI-driven world’ is the topic of the <a href="https://projekte.berlinergazette.de/ambient-revolts/">AMBIENT REVOLTS</a> conference in Berlin on 8–10 November. I would therefore like to begin by asking you about the deployment of algorithms at state borders.</em></p> <p><em>You have noted that ‘in order to learn, to change daily and evolve [they] require precisely the circulations and mobilities that pass through’. This observation is part of your larger argument about how governmentality is less concerned with prohibiting movement than with facilitating it in productive ways. The role of self-learning algorithms would seem to be very significant in this context, since – like capitalism – they also hinge upon movement. When it comes to their thirst for traffic, how do you think about the relationship between self-learning algorithms and capitalism?</em></p> <p><strong>Louise Amoore (LA):</strong> Yes, I agree that the role of ‘self learning’ or semi-supervised algorithms is of the utmost relevance in understanding how movement and circulation matter.</p> <p>First, perhaps it is worth reflecting on what one means by ‘self learning’ in the context of algorithms. 
As algorithms such as deep neural nets and random forests become deployed in border controls, in one sense they do self-learn because they are exposed to a corpus of data (for example on past travel) from which they generate clusters of shared attributes. When people say that these algorithms ‘detect patterns’, this is what they mean really – that the algorithms group the data according to the presence or absence of particular features in the data. </p> <p>Where we do need to be careful with the idea of ‘self learning’, though, is that this is in no sense fully autonomous. The learning involves many other interactions, for example with the humans who select or label the training data from which the algorithms learn, with others who move the threshold of the sensitivity of the algorithm (recalibrating false positives and false negatives at the border), and indeed interactions with other algorithms such as biometric models.</p> <p>I do think that the circulations and mobilities passing through are extraordinarily important conditions of possibility for algorithms at the border. Put simply, there are no fixed criteria or categories for what ‘normal’ or ‘anomalous’ might look like, no fixed notions of what kinds of movement are to be prohibited. Instead, there is a mobile setting of thresholds of norm and anomaly which is always conducted in relation to the input data to the algorithm. For example, deep learning algorithms are increasingly being used to detect immigration risks and to detain someone in advance of reaching the border (e.g. in visa applications). The decision about that person is made not primarily in terms of their own data but more significantly in relation to an algorithm that has learned its risk thresholds by exposure to the attributes of vast numbers of unknown others. There are profound consequences for ethico-politics – the algorithm that will detain some future person has learned to recognise on the basis of the attributes of others. The thirst for traffic, as you put it, is a thirst for the data that fuels the generation of algorithmic models for border control. <span class="mag-quote-center">To the algorithm it does not matter whether the target is for capital or for the state.</span></p> <p>The relationship between machine learning and capitalism is interesting, of course. In my book <em>The Politics of Possibility</em>, I described how a set of algorithms designed for commercial consumer methods ultimately became a resource in the so-called war on terror. In effect, the commercial target of an ‘unknown consumer’ (someone not yet encountered but who may have a propensity to shop in a particular way, for example) became allied to ideas of the ‘unknown terrorist’ (someone not yet encountered who may have a propensity to be a threat). </p> <p>To the algorithm it does not matter whether the target is for capital or for the state: it is indifferent in this sense. I do think that this relationship between machine learning and capital continues to shift and change. For example, the Cambridge Analytica algorithms were used in commercial and political spheres and in both cases the target output for the algorithm was a propensity to be influenced in a specific way by targeted media. 
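</p> <p>The recalibration of false positives and false negatives that Amoore mentions above can be made concrete with a small sketch. Everything here is invented (the risk scores, the ground truth, the thresholds); it simply shows, in Python, what one operator-adjustable parameter does:</p>
<pre><code>
# Illustrative sketch only: moving a risk threshold trades wrongly
# flagged travellers against missed "risks". All values are invented.
scores = [0.12, 0.35, 0.41, 0.58, 0.63, 0.77, 0.81, 0.92]
truth  = [0,    0,    1,    0,    1,    0,    1,    1]  # 1 = actual risk

def confusion(threshold):
    fp = sum(1 for s, t in zip(scores, truth) if s >= threshold and t == 0)
    fn = sum(1 for s, t in zip(scores, truth) if s < threshold and t == 1)
    return fp, fn

for th in (0.3, 0.5, 0.7):
    fp, fn = confusion(th)
    print(f"threshold {th}: {fp} wrongly stopped, {fn} risks missed")
# One parameter relocates the line between "norm" and "anomaly":
# lowering it stops more innocent travellers, raising it waves more
# true positives through. Neither setting is neutral.
</code></pre>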
<p>There has been public outcry at the effects of such algorithms on the democratic process – particularly in the Brexit referendum and the election of Trump – but similar algorithms are being used every day to police cities, and to stop or detain people at multiple borders, from railway stations to shopping malls.</p> <p><strong><em>KW:</em></strong><em> It seems what we are dealing with here is the systematization of the possible: the target is something unknown that, by way of systematization, becomes a circumscribed and definite possibility – or rather an array of definite possibilities. The deployment of algorithms is thus ‘mobile’ and ‘flexible’ but, because it is oriented towards definite possibilities, not as much as it may seem; in fact there is only a definite and therefore limited spectrum of possibilities that algorithmic modelling can address. Does this situation change in the face of self-learning algorithms, given their inductive capacities to create semi-autonomous associations in an ever-growing range of possibilities?</em></p> <p><strong>LA:</strong> The orientation of algorithms towards definite possibilities is important, yes. In one sense I agree that there is only a limited range of possibilities. Let us make this a little more concrete with an example. One group of algorithm designers whom I observed for my research explained to me how they modify their model according to the output. Their algorithms are used in a wide range of applications, from surgical robotics to gait recognition and detecting online gambling addiction. When they set a target output, this is indeed a kind of limited spectrum of possibilities – the target has to be a numeric output between 0 and 1. However, the distance between the actual output signals of their algorithms and the target output represents what I call a space of play. Indeed, the algorithm designers described ‘playing with’ or ‘tuning’ the algorithm so that the output converges on the target. Here I think that deep machine learning is not circumscribed at all by a limited spectrum of possibility. A minute change in the weights inside one layer of the neural net can shift the output of the algorithm dramatically. In a convolutional neural net for image detection or face recognition, for example, this can represent millions of possible parameters, far in excess of what could be meaningfully understood by a human. This is why I am sceptical of claims about ‘opening the black box’ of the algorithm in order to have some kind of accountability. I would say instead that there is no transparency or accountability in the algorithm’s space of play, and so we must begin instead from notions of opacity and partiality.</p> <p><strong><em>KW:</em></strong><em> If the systematization of the possible is somehow reconfigured by self-learning algorithms, are the capacities of indefinite potential then also reconfigured when it comes to evading this very systematization? 
In other words: is a new, ‘intelligent’ systematization of indefinite potential arising in the context of AI?</em></p> <p><strong>LA:</strong> You have really identified a crucial issue here. In the final chapter of <em>The Politics of Possibility</em>, I proposed that potentiality continues to overflow and exceed the capacity for the calculation of possibles. However, I am worried that this evasive potentiality may also be under threat, and I address that in my new book <em>Cloud Ethics</em>. With contemporary deep machine learning, there is a move to incorporate the incalculable and to generate potentials that need never be fully exhausted. Gilles Deleuze once wrote that ‘the problem gets the solution it deserves’, implying that the particular arrangement of a problem will systematize a solution. On my reading, today’s algorithms are reversing this, so that the solution gets the problem it deserves – in the sense that the potential pathways of the neural net are infinitely malleable in relation to a solution. Let us not forget that by ‘solution’ we mean an algorithm that may decide juridical processes, policing, security, employment and so on.</p> <p><strong><em>KW:</em></strong><em> An associationism that can never be known, a life of associating with other things and people that is not amenable to and not incorporable by calculation – is this now changing through AI? Is the potential of associating becoming amenable to calculation as AI thrives on an associationism that can never be known completely? In other words: does AI entail a shift in the ontology of association?</em></p> <p><strong>LA:</strong> A life of associating with other things and people is where I think some of the greatest harms of AI reside. That is to say, for me there is profound violence in the way algorithms redefine how we might live together and decide, uncertainly, in the context of unknown futures. To gather together, to make political claims in the world, to associate with others in the absence of secure recognition – all of this is threatened by AI.</p> <p>And, of course, it has profound consequences. Following the killing of Freddie Gray in Baltimore police custody in 2015, for example, it was machine learning algorithms that ‘detected hints of unrest’ among the African American population and preemptively targeted associative life. High school students were prevented from boarding buses to join the protest, people were arrested for their social media content, and groups were apprehended on the basis of image recognition. The potential of association becoming amenable to calculation is real and already happening in the world. If there is a shift in the ontology of association with AI, then it is a shift that replaces the conventional ‘association rule’ of old data mining with something like an association of attributes. Attributes do not map onto individuals but onto small fragments of a person’s data, in association with small fragments of another’s, calibrated against the feature vectors of the algorithm. What this ontology meant for the protestors of Baltimore is that they could not gather together in a public space because their attributes had already gathered in a risk model that can action the incalculable. 
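</p> <p>A minimal sketch of what an ‘association of attributes’ could look like computationally, with every number invented: fragments of different people’s data are scored against a feature direction the model has derived from others, so no one need match a fixed individual profile to be flagged.</p>
<pre><code>
# Illustrative sketch only: risk attaches to fragments of data scored
# against a learned feature vector, not to a whole person's record.
import numpy as np

# Direction "learned" from vast numbers of unknown others (invented).
risk_direction = np.array([0.9, -0.2, 0.7, 0.1])

fragments = [
    (np.array([1, 0, 1, 0]), "person A: bus route + posting time"),
    (np.array([0, 1, 0, 1]), "person B: handle pattern + purchase"),
    (np.array([1, 1, 1, 0]), "person C: overlap with A-like features"),
]

for vec, source in fragments:
    score = float(vec @ risk_direction)   # proximity to the risk direction
    verdict = "flagged" if score > 1.0 else "passed"
    print(f"{source}: score {score:+.2f} -> {verdict}")
# No fixed profile is consulted: a fragment is flagged because it lies
# close to a direction derived from other people's data.
</code></pre>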
<p><strong><em>KW:</em></strong><em> You have argued in your work that we are facing new, underacknowledged forms of algorithmic discrimination and prejudice. You note that post-9/11, risk-oriented forms of algorithmic modelling are ‘prejudicial or discriminatory’ but that they ‘write their lines in a novel form that never quite lets go of other future possibilities’. Could you explain how these new forms differ from older forms of technology-based discrimination? What is new about the new?</em></p> <p><strong>LA:</strong> In many ways, new forms of algorithmic modelling amplify the racialized targeting of black and brown bodies that has for so long circumscribed access to the public sphere and to politics.</p> <p>As Safiya Umoja Noble has documented vividly in her book <em>Algorithms of Oppression</em>, algorithms reinforce racism in distinct ways. One of my concerns, however, is that there has been a growing call for more accountable and ethical algorithms, as though the discrimination and bias could be corrected out or extracted. In fact, machine learning algorithms need assumptions and bias in order to function. They cannot simply have their discriminatory practices modified by, for example, adjusting the training data or the source code.</p> <p>For me, the significant difference between what you call ‘older forms’ and novel deep learning is the particular relationship between individual and population, and how this is used to govern life. Consider, for example, the racist profiles of Francis Galton’s nineteenth-century composite portraits or Adolphe Quetelet’s statistics of the average man. What were ‘variables’ in these social models would in contemporary terms be closer to computational attributes or features. Consider, then, the young man who is detained in a police station because the risk algorithm outputs a high score for propensity to abscond. This does not take place because he shares the statistical probability or fixed profile of a threat. Rather, it is because the feature vectors of his data have a proximity to features derived from the millions of parameters of the data of unknown others. Yes, the algorithm acts in a way that is racist and prejudicial, but in a form that involves new ethico-political relations to ourselves and to others.</p> <p>Very frequently I have heard a desk analyst or police operator say ‘well, I can just move the threshold if I don’t find a useful output’. The moving of a threshold in the computation is the moving of a relation of societal norms to societal anomalies. Yes, we have seen discrimination and prejudice from the Hollerith machine to the bell curve, but it is crucial that we understand what it means to define a feature or an attribute, to move a threshold or adjust a weight.</p> <p><strong><em>KW:</em></strong><em> Do self-learning algorithms introduce another quality in this context? Or are they merely more efficient when it comes to discriminating indifferently between subjects?</em></p> <p><strong>LA:</strong> Efficiency is an interesting way to put it. This is what is often claimed for algorithmic systems at the border or in the criminal justice system – that they offer a more accurate and efficient process of targeting what matters.</p> <p>But in fact there are all kinds of inefficiencies too. 
For example, when UK police forces have used automated facial recognition algorithms to detect target individuals in crowds, the proliferation of false positives – with the corresponding stopping, searching and identifying of people – has been both inefficient and discriminatory. So, then, what comes to matter? I think that error is an interesting question here. One could point to all of the many errors and say ‘here is the space for critique and potential alternative futures, here in the excess of the errors’.</p> <p>But, again, at the level of the algorithm, error is distance. What does it mean to say error is distance? Error is merely the spatial gap between the output and a target. And so even error is productive; even error is incorporable within the generative capacities of machine learning. How does one begin to adjudicate on discrimination with indifference when it is the algorithms that are generating the means to adjudicate in the world, to identify good and bad, to filter and condense to an optimized output? Yes, perhaps optimization and not efficiency is the heart of it.</p> <p><strong><em>KW:</em></strong><em> In your work there is a notion of active agency attributed to technology and data. With regard to radio frequency identification (RFID), for instance, you speak of ‘ambient locatability’; and with regard to security tech, you explore how this technology ‘lets the environment talk’. Thinking both ideas in conjunction, the notion of ‘ambient agency’ arises – a notion that becomes more complex still when you embark upon a critique of anthropocentrism, noting that ‘species life has so dominated our thinking, that the bio in biopolitics has a blind spot with regard to the lives of objects’. What does it mean for you to rethink human agency against this background?</em></p> <p><strong>LA:</strong> The question of what it means to be human is paramount here. It is my view that our relations to ourselves and to others are changing through our interactions with algorithms. One of the most striking moments in my recent research was when an experienced surgeon described how using her surgical robot to excise tumours had changed her sense of the limits of her own agency. She definitively did not distinguish between the surgical instruments of the robot, the algorithms that animate the API of the machine, and her human capacities. These were, for her, thoroughly entangled. I found this to be a compelling insight.</p> <p>I followed the design of machine learning for surgery partly because the same families of algorithms are used in autonomous weapons and autonomous vehicles. In terms of ambient locatability, the neural networks locate the edges of tumours, territories, faces, and so on through the data they have been exposed to in training. Because image recognition and language processing use deep learning, the environment ‘talks’ in new ways. The RFID and other devices I discussed in that book continue to be significant data inputs, but in cloud-based systems they are often one data-stream among many. To rethink human agency against this backdrop implies an acknowledgement of the composite forms of agency that emerge in our entanglements with algorithms and machines. I think that this should be given greater attention in debates about the ‘human in the loop’ who is supposed to supply the locus of ethics for composite systems such as autonomous weapons. Who is this human? How are their embodied relations to the world changed through their collaborations with algorithms? 
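</p> <p>Two of Amoore’s formulations above – the designers ‘tuning’ an output so that it converges on a target between 0 and 1, and ‘error is distance’ – can be compressed into a few lines. This is a purely illustrative sketch with invented values, not any system she observed:</p>
<pre><code>
# Illustrative sketch only: "error is distance" and the "space of play".
# One weight is nudged until the output converges on a target in [0, 1].
target = 0.8      # desired output signal (invented)
weight = 0.1      # a single tunable parameter
x = 1.0           # a fixed input

for step in range(6):
    output = max(0.0, min(1.0, weight * x))  # clamp output to [0, 1]
    error = target - output                  # error is just a gap
    weight += 0.5 * error                    # gradient-style nudge
    print(f"step {step}: output {output:.3f}, error {error:+.3f}")
# Nothing in the loop asks whether the target itself is just; error
# only measures how far the output sits from it, which is why even
# error is "productive" for the model.
</code></pre>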
<p><strong><em>KW:</em></strong><em> As you argue in </em>The Politics of Possibility<em>, algorithmic modelling that is primarily concerned with re-establishing agency is all about generating new capacities to act and intervene in a world of circulation and movement. You undertake a critique of decision making in this context, since these semi-automated sovereign actions are subsumed by what you refer to as ‘actionable analytics’ – indifferent to error, indifferent to the people affected by the actions, indifferent to any consequences and to life in general.</em></p> <p><em>It seems that your critique of this novel form of sovereign agency is primarily concerned with the philosophy of decision. Or are there also other aspects that are important to you when it comes to rethinking political agency in an AI-driven world?</em></p> <p><strong>LA:</strong> I am sure that you are right that I have been preoccupied with the philosophy of decision, and I admit that I am still thinking about this a great deal. To put this into context: very often the public inquiries that have focused on the potential harms of algorithms have expressed concern at what they call ‘algorithmic decision making’. Put simply, the moral panic seems to be about the notion that a machine and not a human decides.</p> <p>Now, setting aside what I have said about all algorithmic decisions containing the residue of multiple other human and machine decisions, what is special about the human who decides? If a human judge decides on a jail sentence without reference to the recidivism algorithm, or a human oncologist decides on a course of treatment without recourse to the optimal-pathways algorithm – is it the ‘humanness’ that we value? What is it about this human decision that matters to us? I think that this is interesting because of course the judge, the border guard, and the police officer are fallible; their decisions could later turn out to have been the wrong course of action. Yet it is precisely this acknowledgment that the decision is made in the dark, in the fully political realm of undecidability, that leaves open the space for other futures, for other pathways not taken.</p> <p>One of my real concerns about the output of a neural net is that it is described as a ‘decision support’ instrument. It does not confront what is undecidable in the world, but makes a claim to the resolution of political difficulties, and in so doing condenses multiplicities to a single output.</p> <p>To be clear, I do not deny that a machine learning algorithm could also be approached critically as an ethico-political agent. I think that we need to ask these kinds of questions: What were the hidden-layer pathways not taken in the making of that output? How have a set of contingent weighted probabilities generated something called an automated decision? Can we make those weights in the algorithm actually carry the full weight of political undecidability? 
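</p> <p>What ‘condensing multiplicities to a single output’ means can be shown in a few lines. In this purely illustrative sketch (all scores invented), a set of contingent weighted probabilities becomes an ‘automated decision’ simply by discarding every pathway but the largest:</p>
<pre><code>
# Illustrative sketch only: a distribution full of near-ties is
# collapsed into one actionable label. All numbers are invented.
import math

weighted_scores = {"release": 1.10, "refer": 1.05, "detain": 0.90}

# Softmax: turn the scores into probabilities over outcomes.
total = sum(math.exp(v) for v in weighted_scores.values())
probs = {k: math.exp(v) / total for k, v in weighted_scores.items()}
print({k: round(p, 3) for k, p in probs.items()})

decision = max(probs, key=probs.get)   # the argmax erases the margins
print("automated 'decision':", decision)
# The alternatives were only a few hundredths away, yet the single
# output carries none of that undecidability downstream.
</code></pre>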
<p>We must rethink political agency in an AI-driven world, not least because the neural nets in algorithms like those of Cambridge Analytica are remaking political worlds. One place to start, at least for me, would be to insist upon the non-closures and moments of undecidability that continue to lodge themselves within the algorithm.</p> <p><em>Louise Amoore will speak on November 8, 2018 at the </em>AMBIENT REVOLTS<em> conference. <a href="https://ambient-revolts.berlinergazette.de">More info here.</a></em></p><p><em>This interview was <a href="https://www.eurozine.com/politics-artificial-intelligence/">first published</a> by Eurozine magazine on October 23, 2018.</em></p> The EU call it copyright, but it is massive Internet censorship and must be stopped https://www.opendemocracy.net/can-europe-make-it/xnet/civilised-societies-don-t-call-it-censorship-but-copyright <p>We citizens battling for civil rights on the Internet will meet our obligation and fight the good fight. We’ll stop this attack on the Internet and democracy sooner or later.</p> <p><em>Image: Spanish-language cartoon Tiranía (Tyranny): Superstition sits on the throne, advised by a priest and a devil. Claudio Linati, 1826. Wikicommons. 
Public domain.</em></p><p><em>Xnet (<a href="https://xnet-x.net/en/">https://xnet-x.net/en/</a>), an activist group working for civil rights on the Internet, is the founder member in Spain of the #SaveYourInternet coalition, whose participants include groups such as the Electronic Frontier Foundation (EFF), European Digital Rights (EDRi) and others. We have come together to organise a campaign to inform the public about the hidden dangers of the new European Copyright Directive.</em></p> <p>With the approval in the European Parliament of the final text of the Copyright Directive, which will be put to a definitive vote within a very few months, the European Union has lost a historic opportunity to produce copyright legislation adapted to the Internet of the twenty-first century. What the European Parliament will finally vote on is a technophobic text, tailor-made for the interests of the copyright monopolies, which, moreover, doesn’t guarantee authors the right to a reasonable standard of living as a result of their work.</p> <p>If the law is eventually passed, it will be used for wholesale curtailment of freedoms and more censorship, in keeping with the bizarre idea that anything that doesn’t produce hard cash for the <em>major</em> players – which doesn’t mean authors! – has to be prohibited and eliminated.</p> <p>This is a tragedy for workers in the domain of culture who (with a few brave and praiseworthy exceptions) have once again been frivolously incapable of informing themselves about the real state of affairs. They have passively swallowed the version fed to them by their masters and, avidly playing the victim, have become the chief mouthpiece of freedom-killing propaganda, without the slightest understanding that this is not going to enhance their rights but will do away with the rights of everyone.</p> <p>Alarm bells started ringing almost two years ago when we discovered that, rather than being a proposal for a merely obsolete copyright law, the directive was being used as a Trojan horse to introduce surveillance, automatic data processing, government by opaque algorithms, censorship without court orders, and more.</p> <p>This threat to such basic rights as freedom of expression and access to culture and information lurks in ruses mainly hidden in two articles of the Directive:</p> <p><strong>Article 11: no link without a licence</strong></p> <p>Article 11, otherwise known as the “Linktax” article, has created a new economic “right” for magnates of the written press. This ‘right’, moreover, implies indefinitely restricting the possibility of citing the press online.</p> <p>If this seems absurd, arbitrary and counterproductive, we invite you to read the proposal itself. It is an ambiguous text, described by the jurist Andrej Savin as <em>“one of the worst texts I have ever seen in my 23-year-long career as a law scholar”</em>. Given its muzzy formulation, the safest response for any platform will be not to link to any media publication without explicit permission. 
<p>This perverse measure will be the equivalent, on a European scale, of the “Google tax”, which is already in force in Spain and Germany. Even its promoters soon came to regret it, when Google shut down Google News in Spain after it was approved. The Google tax is paradoxical, and those responsible for initiating it know very well it won’t work in Europe. For example, Xnet revealed that the big German publishing company Axel Springer was paying itself – having linked up to pay itself – in an outlandish pretence that “everything’s fine”.</p> <p>Where are they trying to go with this? What sense is there in this move by the press barons to push laws which prevent you from linking to their content, disseminating it, and commenting on it? Is this just a mix of ignorance and greed, or something like shooting yourself in the foot?</p> <p>There is certainly something of both involved, but we believe this is a mix of ignorance and greed which, in the end, means cutting off your nose to spite your face (when you’re trying to damage someone else’s face). With laws like this, the press barons can engage in legal harassment to the point of closing down social aggregators and communities like Meneame or Reddit, eliminating any new competitor, consolidating their monopoly, and thus becoming the lone voice on the Internet, the only ones who speak. In short, they aspire to become a new kind of television.</p> <p><strong>Article 13: no uploading content without a licence</strong></p> <p>Platforms – from medium-sized providers of content-storage services through to the giants of the Internet – will be considered responsible for any copyright infringement committed by their users, and they are bulldozed into taking preventive measures. In other words, this isn’t a matter of eliminating content but of directly preventing people from uploading it.</p> <p>Of course, nobody is forcing them to do anything. They are simply being made responsible for material uploaded by their users. It’s like a car salesman being held responsible for crimes committed by people who buy his cars. This can only end up with algorithmic upload filters being applied to absolutely everything or, in other words, prior, automatic, and massive Internet censorship.</p> <p>Recently, YouTube prevented the pianist James Rhodes from uploading one of his own videos in which he is playing Bach. This kind of “error”, which always favours privatisation of the public domain, is the everyday reality for all authors who use YouTube.</p> <p>And this isn’t just about the “errors” that lead to the privatisation of the public domain. 
It is about the difficulty or impossibility of uploading to the Internet any kind of derivative work: parodies, memes, remixes, fandom, satires, and so on – in other words, the very essence of culture, political freedom and freedom of expression.</p> <h2 class="Standard"><strong>Repeating the medieval experience of the invention of the printing press</strong></h2> <p class="Standard">This whole setup – which looks like a science-fiction dystopia, an impossible attempt to lock the doors after the horse has bolted, or an exaggeratedly grim prophecy spread by concerned activists – is already being implemented today on the big platforms.</p> <p class="Standard">At present, there are two options:</p> <blockquote><p class="Standard"><strong>The Spotify model</strong></p><p class="Standard">In this case, the platform acquires all national and international licences and then makes all content available unidirectionally, in such a way that users can’t upload content. Even so, in the case of Spotify, one of the few giants with the resources to do this today, paying the copyright monopolies has raised its overheads so much that, despite its commercial success, its medium-term sustainability isn’t guaranteed. If this is the situation of Spotify, it’s not difficult to imagine what will happen to medium-sized Internet companies.</p><p class="Standard">This model has another defect, which is obvious to most artists: the amount of money the real authors receive in the end is zero or almost zero.</p></blockquote> <blockquote><p class="Standard"><strong>The Facebook/Google model</strong></p><p class="Standard">These new Internet monopolies refuse to share the cake with the old copyright monopolies and therefore opt for large-scale, automatic filtering of all content. They will find it easier to adapt to Article 13, since they will only need to apply the filtering mechanisms before uploading takes place.</p><p class="Standard">This technology, besides being opaque and proprietary, is very expensive. Since it will be obligatory, it also means that these giants are very unlikely to face competitors with any chance of prospering.</p><p class="Standard">Google has spent approximately 100 million dollars to create the technology that has so far enabled it to respond to copyright claims coming from only 1% of its users.</p><p class="Standard">The effect of these arbitrary regulations on free conversation on the Internet, on the diffusion of culture and information, and on access to them will be devastating.</p></blockquote> <h2 class="Standard"><strong>Whose rights are at stake?</strong></h2> <p class="Standard">Authors’ rights (droits d’auteur → copyright) are important. But what are these rights? And which authors have them?</p> <p class="Standard">Any democratic proposal seeking widespread consensus and aspiring to guarantee the decent employment of authors without jeopardising the basic rights of citizens would need, finally, to take a bold stand against the copyright monopolies and management entities which are suspected of abuse when not directly investigated, tried, and condemned – as we succeeded in doing with SGAE (the Spanish Society of Authors and Publishers).</p> <p class="Standard">It should also take as given the fact that the concept of the author or medium has changed in the last twenty years. 
Since the earliest days of Web 2.0, the content generated by users has evolved from being an interesting social experiment to the digital reality in which we are immersed day in, day out.</p> <p class="Standard">In a society like that of Spain, for example, content generated by entities which were once “big” media now accounts for less than 5% of Internet traffic. The EU must respect citizens as content generators and not regard them simply as people who steal content generated by the elite. <span class="mag-quote-center">The EU must respect citizens as content generators and not regard them simply as people who steal content generated by the elite. </span></p> <p class="Standard">No single company, medium, or author has written Wikipedia, or turned the Web into the repository of gazillions of videos, or generated hundreds of millions of tweets per day. We – the people – did this. The Internet doesn’t belong to them.</p> <p class="Standard">The threats skulking behind the Copyright Directive are part of an attempt to stuff the genie back into the bottle and embark on an inquisition that would allow the oligarchs to take control of the Internet. Our politicians and big company bosses are envious of the Chinese model.</p> <h2 class="Standard"><strong>Open architecture</strong></h2> <p class="Standard">The idea of an open architecture for sharing links without restriction – the initial idea of the fathers and mothers of the World Wide Web and the Internet as we know it – was crucial to its success. And it would be radically undermined if the directive were approved.</p> <p class="Standard">Now the EU wants to create an Internet with a licence. And since we are a civilised society, they can’t call it censorship, so they say “copyright”.</p> <p class="Standard">In the final vote, all the power and wealth will be on one side. We, the people, who are on the other side – in favour of freedom of expression, an open Internet, and copyright laws adapted to the twenty-first century, which will enable authors to make a decent living and not have to scrabble for crumbs dropped from the table of the Internet moguls – will be vilified, slandered as thieves, hackers and pirates, and absurd allegations will be made against us.</p> <p class="Standard">This situation has happened before. And what it most clearly evokes is the relationship between the invention of the printing press and the censorship of the Holy Inquisition.</p><p class="Standard"><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/640px-Thomas_Rowlandson_-_Spanish_Inquisition_-_Google_Art_Project.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/640px-Thomas_Rowlandson_-_Spanish_Inquisition_-_Google_Art_Project.jpg" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Inscribed in pen and ink, "Spanish Inquisition" by Thomas Rowlandson (1756–1827). Wikicommons/Google Cultural Institute. Some rights reserved.</span></span></span></p> <h2 class="Standard"><strong>What is the responsibility of artists and (left) political parties?</strong></h2> <p class="Standard">The vote has not yet been cast. We have a few months to get everyone to understand the magnitude of the danger. 
We can win this battle. We have already won <em>in extremis</em> in other situations, like the fights over net neutrality and ACTA, and we can do it again.</p> <p class="Standard"><strong>What would help:</strong></p> <ul><li>Artists who will step forward and say, “NOT in my name”.</li></ul> <ul><li>A clear, effective, and non-opportunist stance from the left in favour of an open Internet and freedom of expression.</li></ul> <p class="Standard">The left instead tends all too often to cultivate a technophobic position which feeds pro-censorship narratives. The case of Spain is paradigmatic. The PP (right-wing party) and PSOE (“socialist” party) voted and will vote as a bloc for whatever the copyright monopolies and the SGAE tell them to vote for, which is to say whatever most favours control and censorship.</p> <p class="Standard">But the example of the left-wing electoral alliance Unidos Podemos is also instructive. They joined the SaveYourInternet campaign at the last moment in order to co-opt these citizen-activists. The next day, one Anova and two Izquierda Unida members of parliament abstained from voting, and nobody in either party so much as batted an eyelid. It would seem that none of our politicians take these basic rights very seriously.</p> <p class="Standard">We citizens who are active in battling for civil rights on the Internet will meet our obligation and fight the good fight. We’ll stop this attack on the Internet and democracy sooner or later, with or without the help of the “artists” or the “parliamentary left” – but not without bitterly calling attention to the dangerous future looming for freedom of expression and information, and for our other freedoms, in a digital age in which, again and again, the tool is destroyed and the messenger killed in order to preserve a status quo that must not continue.</p><p class="Standard"><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/Inquisición_(grabado_siglo_XIX).jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/Inquisición_(grabado_siglo_XIX).jpg" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Heretics brought before the tribunal of the Inquisition, Seville, by F. Moyse, 1870. Wikicommons. 
Public domain.</span></span></span></p> <p><em>This text was first released in <a href="https://www.revistamongolia.com/">no. 70, Revista Mongolia</a>.</em></p><div class="field field-country"> <div class="field-label"> Country or region:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> EU </div> <div class="field-item even"> Spain </div> </div> </div> <div class="field field-topics"> <div class="field-label">Topics:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> Civil society </div> <div class="field-item even"> Conflict </div> <div class="field-item odd"> Culture </div> <div class="field-item even"> Democracy and government </div> <div class="field-item odd"> Economics </div> <div class="field-item even"> Ideas </div> <div class="field-item odd"> International politics </div> <div class="field-item even"> Internet </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by 4.0 </div> </div> </div> Can Europe make it? digitaLiberties Xnet Thu, 25 Oct 2018 18:31:44 +0000 Xnet 120285 at https://www.opendemocracy.net Rethinking AI through the politics of 1968 https://www.opendemocracy.net/digitaliberties/dan-mcquillan/rethinking-ai-through-politics-of-1968 <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>We need to pursue a political philosophy that was embraced in '68, of living the new society through authentic action in the here and now.</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/4682461024_10ce18b9ea_z.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/4682461024_10ce18b9ea_z.jpg" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>HAL in vector. Made in flash (2004). Flickr/Abel. Some rights reserved.</span></span></span></p><p>There's a definite resonance between the agitprop of '68 and social media. Participants in the UCU strike earlier this year, for example, experienced Twitter as <a href="https://www.wired.co.uk/article/no-capitulation-uk-university-pension-protest-twitter">a platform</a> for both affective solidarity and practical self-organisation<a href="#_ftn1">[1]</a>. </p> <p>However, there is a different genealogy that speaks directly to our current condition: that of systems theory and cybernetics. What happens when the struggle in the streets takes place in the smart city of sensors and data? Perhaps the revolution will not be televised, but it will certainly be subject to algorithmic analysis. Let's not forget that 1968 also saw the release of '2001: A Space Odyssey', featuring the AI supercomputer HAL.</p> <p>While opposition to the Vietnam war was a rallying point for the movements of '68, the war itself was also notable for the application of systems analysis by US Secretary of Defense Robert McNamara, who attempted to make it, in modern parlance, a data-driven war. 
</p> <p>During the Vietnam war the <a href="http://www.dtic.mil/docs/citations/ADA051609">hamlet pacification programme</a> alone produced 90,000 pages of data and reports a month<a href="#_ftn2">[2]</a>, and the body count metric was published in the daily newspapers. The milieu that helped breed our current algorithmic dilemmas was the contemporaneous swirl of systems theory and cybernetics, ideas about emergent behaviour and experiments with computational reasoning, and the intermingling of military funding with the hippy visions of the Whole Earth Catalogue.</p> <p>The double helix of DARPA and Silicon Valley can be traced through the evolution of the web to the present day, where AI and machine learning are making inroads everywhere, carrying their own narratives of revolutionary disruption – a Ho Chi Minh trail of predictive analytics. </p> <p>They are playing Go better than grand masters and preparing to drive everyone's car, while the media panics about AI taking our jobs. But this AI is nothing like HAL. It's a form of pattern-finding based on mathematical minimisation – like a complex version of fitting a straight line to a set of points. These algorithms find the optimal solution when the input data is both plentiful and messy. Algorithms like <a href="https://www.youtube.com/watch?v=Ilg3gGewQ5U">backpropagation</a><a href="#_ftn3">[3]</a> can find patterns in data that were intractable to analytical description, such as recognising human faces seen at different angles, in shadows and with occlusions. The algorithms of AI crunch the correlations and the results often work uncannily well.</p><p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/Screen Shot 2018-10-19 at 20.51.23.png" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/Screen Shot 2018-10-19 at 20.51.23.png" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Screenshot of Backpropagation Youtube video on how neural networks learn.</span></span></span></p><p>But it's still computers doing what computers have been good at since the days of vacuum tubes: performing mathematical calculations more quickly than us. Thanks to algorithms like neural networks, this calculative power can learn to emulate us in ways we would never have guessed at. This learning can be applied to any context that is boiled down to a set of numbers, such that the features of each example are reduced to a row of digits between zero and one and are labelled by a target outcome. The datasets end up looking pretty much the same whether it's cancer scans or Netflix-viewing figures. </p><p>There's nothing going on inside except maths; no self-awareness and no assimilation of embodied experience. These machines can develop their own unprogrammed behaviours but utterly lack an understanding of whether what they've learned makes sense.</p>
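<p>To see how little is "inside", here is a minimal sketch of that line-fitting example – the data points and learning rate are invented for illustration – using the same downhill nudging of parameters, gradient descent, that backpropagation performs at scale inside a neural network:</p>
<pre><code># Fitting y = w*x + b by gradient descent: pure Python, no libraries.
# The data and learning rate are invented for illustration.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]          # roughly y = 2x + 1, plus noise

w, b = 0.0, 0.0                          # start with a deliberately wrong model
lr = 0.01                                # learning rate: size of each nudge

for step in range(5000):
    # Gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w                     # step downhill on the error surface
    b -= lr * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")   # converges towards w = 2, b = 1
</code></pre>
<p>Swap the five data points for a million labelled examples and the two parameters for millions of weights, and this loop is, in essence, the learning in machine learning: nothing but repeated error measurement and parameter adjustment.</p>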
<p>And yet, machine learning and AI are becoming the mechanisms of modern reasoning, bringing with them the kind of dualism that the philosophy of '68 was set against: a belief in a hidden layer of reality which is ontologically superior and <a href="https://link.springer.com/article/10.1007/s13347-017-0273-3">expressed mathematically</a><a href="#_ftn4">[4]</a>.</p> <p>The Delphic accuracy of AI comes with built-in opacity, because massively parallel calculations can't always be reversed to human reasoning, while at the same time it will happily regurgitate society's prejudices when trained on raw social data. It's also mathematically impossible to <a href="https://www.youtube.com/watch?v=jIXIuYdnyyk">design an algorithm</a> to be fair to all groups at the same time<a href="#_ftn5">[5]</a>. <span class="mag-quote-center">It's also mathematically impossible to design an algorithm to be fair to all groups at the same time.</span></p> <p>For example, if the reoffending base rates vary by ethnicity, a recidivism algorithm like COMPAS will predict different numbers of false positives, and more black people will be <a href="https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/">unfairly refused bail</a><a href="#_ftn6">[6]</a>. The wider impact comes from the way the algorithms proliferate social categorisations such as 'troubled family' or 'student likely to underachieve', fractalising social binaries wherever they divide into 'is' and 'is not'. This isn't only a matter of data dividuals misrepresenting our authentic selves but of technologies of the self that, through repetition, produce subjects and act on them. And, as AI analysis starts to overcode MRI scans to force psychosocial symptoms back into the brain, we will even see algorithms play a part in <a href="https://www.vox.com/science-and-health/2017/4/4/15073652/precision-psychiatry-depression">the becoming of our bodies</a><a href="#_ftn7">[7]</a>.</p>
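<p>The arithmetic behind the bail example above is worth seeing once. In the sketch below – with invented numbers, not actual COMPAS figures – a risk score is held equally well calibrated for two groups and equally good at catching reoffenders; if the base rates differ, the false-positive rate (the fraction of people who would never reoffend but are flagged anyway) is forced to differ too:</p>
<pre><code># Invented illustrative numbers, not COMPAS data.
# PPV = s*p / (s*p + FPR*(1-p)), where p is the base rate, s the
# sensitivity and PPV the calibration. Hold PPV and s fixed across
# groups, and solve for the false-positive rate each group must have.

def forced_fpr(base_rate, sensitivity=0.75, ppv=0.7):
    p, s, q = base_rate, sensitivity, ppv
    return s * p * (1 - q) / (q * (1 - p))

for p in (0.3, 0.5):
    print(f"base rate {p:.0%} -> false-positive rate {forced_fpr(p):.1%}")

# base rate 30% -> false-positive rate 13.8%
# base rate 50% -> false-positive rate 32.1%
</code></pre>
<p>No tweak to the model escapes this: with unequal base rates, equal calibration and equal false-positive rates are mutually exclusive, which is why choosing between fairness definitions is a political decision rather than a parameter setting.</p>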
<h2><strong>Political technology</strong></h2> <p>What we call AI – that is, machine learning acting in the world – is actually a political technology in the broadest sense. Yet under the cover of algorithmic claims to objectivity, neutrality and universality, there's an infrastructural switch of allegiance to algorithmic governance.</p> <p>The dialectic that drives AI into the heart of the system is the contradiction of societies that are data rich but subject to austerity. One need only look at the <a href="https://www.gov.uk/government/news/matt-hancock-new-technology-is-key-to-making-nhs-the-worlds-best">recent announcements</a> about a brave new NHS to see the fervour welcoming this salvation<a href="#_ftn8">[8]</a>. While the global financial crisis is manufactured, the restructuring is real; algorithms are being enrolled in the refiguring of work and social relations such that precarious employment depends on <a href="https://www.ft.com/content/88fdc58e-754f-11e6-b60a-de4532d5ea35">satisfying algorithmic demands</a><a href="#_ftn9">[9]</a> and the public sphere exists inside a targeted attention economy.</p> <p>Algorithms and machine learning are coming to act in the way pithily described by Pierre Bourdieu: as structured structures predisposed to function as structuring structures<a href="#_ftn10">[10]</a>, such that they become absorbed by us as habits, attitudes, and pre-reflexive behaviours.</p> <p>In fact, like global warming, AI has become <a href="https://www.upress.umn.edu/book-division/books/hyperobjects">a hyperobject</a><a href="#_ftn11">[11]</a> so massive that its totality is not realised in any local manifestation: a higher-dimensional entity that adheres to anything it touches, whatever the resistance, and which is perceived by us through its informational imprints.</p> <p>A key imprint of machine learning is its predictive power. Having learned both the gross and subtle elements of a pattern, it can be applied to new data to predict which outcome is most likely, whether that is a purchasing decision or a terrorist attack. This leads ineluctably to the logic of preemption in any social field where data exists – which is every social field – so algorithms are predicting which prisoners should be given parole and <a href="http://www.reimaginingsocialwork.nz/2015/06/predictive-risk-modelling-on-rights-data-and-politics/">which parents</a> are <a href="https://www.theguardian.com/society/2018/sep/16/councils-use-377000-peoples-data-in-efforts-to-predict-child-abuse">likely to abuse</a> their children<a href="#_ftn12">[12]</a><a href="#_ftn13">[13]</a>.</p> <p>We should bear in mind that the logic of these analytics is correlation. It's purely pattern matching, not the revelation of a causal mechanism, so enforcing the foreclosure of alternative futures becomes effect without cause. The computational boundaries that classify the input data map outwards as cybernetic exclusions, implementing continuous forms of what Agamben calls states of exception. The internal imperative of all machine learning, which is to optimise the fit of the generated function, is entrained within a process of social and economic optimisation, fusing marketing and military strategies through the unitary activity of targeting. <span class="mag-quote-center">A society whose synapses have been replaced by neural networks will generally tend to a heightened version of the status quo.</span></p> <p>A society whose synapses have been replaced by neural networks will generally tend to a heightened version of the status quo. Machine learning by itself cannot learn a new system of social patterns, only pump up the existing ones as computationally eternal. Moreover, the weight of those amplified effects will fall on the most data-visible, i.e. the poor and marginalised. The net effect is, as the book title says, the <a href="https://www.wired.com/story/excerpt-from-automating-inequality/">automation of inequality</a><a href="#_ftn14">[14]</a>.</p> <p>But at the very moment when the tech has emerged to fully automate neoliberalism, the wider system has lost its best-of-all-possible-worlds authority, and racist authoritarianism metastasizes across the veneer of democracy.</p> <h2><strong>Contamination and resistance</strong></h2> <p>The opacity of algorithmic classifications already has the tendency to evade due process, never mind when the levers of mass correlation are at the disposal of ideologies based on paranoid conspiracy theories. A common core to all forms of fascism is a rebirth of the nation from its present decadence, and a mobilisation to deal with those parts of the population that are <a href="https://www.libraryofsocialscience.com/ideologies/resources/griffin-the-palingenetic-core/">the contamination</a><a href="#_ftn15">[15]</a>.</p>
<p>The automated identification of anomalies is exactly what machine learning is good at, at the same time as promoting the kind of thoughtlessness that Arendt identified in Eichmann.</p> <p>So much for the intensification of authoritarian tendencies by AI. What of resistance?</p> <p>Dissident Google staff forced the company to partly <a href="https://www.nytimes.com/2018/05/30/technology/google-project-maven-pentagon.html">drop Project Maven</a><a href="#_ftn16">[16]</a>, which develops drone targeting, and Amazon workers are campaigning against the sale of facial recognition systems to the government. But these workers are the privileged guilds of modern tech; this isn't a return of working-class power.</p><p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/Screen Shot 2018-10-19 at 20.58.55.png" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/Screen Shot 2018-10-19 at 20.58.55.png" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Screenshot: Defense One Production Youtube video on Project Maven: AI and the US military.</span></span></span></p> <p>In the UK and USA there's a general institutional push <a href="https://www.gov.uk/government/consultations/consultation-on-the-centre-for-data-ethics-and-innovation">for ethical AI</a> – in fact you can't move for initiatives aiming to add ethics to algorithms<a href="#_ftn17">[17]</a> – but I suspect this is mainly preemptive PR to head off people's growing unease about their coming AI overlords. All the initiatives that want to make AI ethical seem to think it's about adding something, i.e. ethics, instead of about revealing the value-ladenness at every level of computation, right down to the mathematics. <span class="mag-quote-center">All the initiatives that want to make AI ethical seem to think it's about adding something…</span></p> <p>Models of radical democratic practice offer a more political response through structures such as people's councils composed of those directly affected, mobilising what Donna Haraway calls <a href="https://doi.org/10.1177/2056305118768303">situated knowledges</a> through horizontalism and direct democracy<a href="#_ftn18">[18]</a>. While these are valid modes of resistance, there's also the '68 notion from groups like the Situationists that the Spectacle generates the potential for its own supersession<a href="#_ftn19">[19]</a>.</p> <p>I'd suggest that the self-subverting quality in AI is its latent surrealism. For example, experiments to figure out how image recognition actually works probed the contents of intermediary layers in the neural networks, and by recursively applying filters to these outputs produced <a href="http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html">hallucinatory images</a> that are straight out of an acid trip, such as snail-dogs and trees made entirely of eyes<a href="#_ftn20">[20]</a>. When people deliberately feed AI the wrong kind of data it makes surreal classifications. It's a lot of fun, and can even <a href="http://www.memo.tv/portfolio/learning-to-see/">make art that gets shown</a> in galleries<a href="#_ftn21">[21]</a> but, like the Situationist drive through the Harz region of Germany while blindly following a map of London, it can also be a poetic disorientation that coaxes us out of our habitual categories.</p>
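<p>The hallucinatory images mentioned above come from a surprisingly small loop. A minimal sketch – assuming PyTorch and torchvision are installed, with the layer choice, step size and iteration count invented for illustration rather than taken from the original inceptionism work – looks like this:</p>
<pre><code># A sketch of the "inceptionism" trick: gradient ascent on the input
# image so that it excites a chosen intermediate layer of a pretrained
# network. Assumes PyTorch/torchvision; the layer index, step size and
# iteration count are illustrative guesses, not the original settings.

import torch
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
layer = model.features[:20]              # an intermediate convolutional block

img = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise

for _ in range(50):
    activations = layer(img)
    loss = activations.norm()            # "excite this layer as much as possible"
    loss.backward()
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()

# img has now drifted towards whatever textures the layer responds to.
</code></pre>
<p>Because the loop maximises whatever the chosen layer happens to respond to, the network's learned categories get painted back onto the image – eyes, snouts and scales surfacing wherever the statistics faintly suggest them.</p>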
<h2><strong>Playfully serious</strong></h2> <p>While businesses and bureaucracies apply AI to the most serious contexts to make or save money or, through some miracle of machinic objectivity, solve society's toughest problems, its liberatory potential is actually ludic. </p> <p>It should be used playfully instead of abused as a form of prophecy. But playfully serious, like the tactics of the Situationists themselves: a disordering of the senses to reveal the possibilities hidden by the dead weight of commodification. Reactivating the demands of the social movements of '68 so that work becomes play, the useful becomes the good, and life itself becomes art. <span class="mag-quote-center">Reactivating the demands of the social movements of '68 so that work becomes play, the useful becomes the good, and life itself becomes art...</span></p> <p>At this point in time, when our futures are being cut off by algorithmic preemption, we need to pursue a political philosophy that was embraced in '68: living the new society through authentic action in the here and now. </p> <p>A counterculture of AI must be based on immediacy. The struggle in the streets must go hand in hand with a detournement of machine learning; one that seeks authentic decentralization, not Uber-ised serfdom, and federated horizontalism, not the invisible nudges of algorithmic governance. We want a fun yet anti-fascist AI, so we can say "beneath the backpropagation, the beach!".</p><p> <strong>References</strong></p> <p><a href="#_ftnref1">[1]</a>&nbsp;Kobie, Nicole. ‘#NoCapitulation: How One Hashtag Saved the UK University Strike’. Wired UK, 18 Mar. 2018.</p> <p><a href="#_ftnref2">[2]</a>&nbsp;Thayer, Thomas C. A Systems Analysis View of the Vietnam War: 1965–1972. Volume 2: Forces and Manpower. 1975. www.dtic.mil.</p> <p><a href="#_ftnref3">[3]</a>&nbsp;3Blue1Brown. ‘What Is Backpropagation Really Doing?’ Deep Learning, Chapter 3. YouTube video.</p> <p><a href="#_ftnref4">[4]</a>&nbsp;McQuillan, Dan. ‘Data Science as Machinic Neoplatonism’. Philosophy &amp; Technology (2017): 1–20.</p> <p><a href="#_ftnref5">[5]</a>&nbsp;Narayanan, Arvind. ‘Tutorial: 21 Fairness Definitions and Their Politics’. YouTube video.</p> <p><a href="#_ftnref6">[6]</a>&nbsp;Corbett-Davies, Sam, et al. ‘A Computer Program Used for Bail and Sentencing Decisions Was Labeled Biased against Blacks. It’s Actually Not That Clear.’ Washington Post, 17 Oct. 2016.</p> <p><a href="#_ftnref7">[7]</a>&nbsp;Resnick, Brian. ‘Treating Depression Is Guesswork. Psychiatrists Are Beginning to Crack the Code.’ Vox, 4 Apr. 2017.</p> <p><a href="#_ftnref8">[8]</a>&nbsp;Department of Health and Social Care. ‘Matt Hancock: New Technology Is Key to Making NHS the World’s Best’. GOV.UK, 6 Sept. 
2018.</p> <p><a href="#_ftnref9">[9]</a>&nbsp;O’Connor, Sarah. ‘When Your Boss Is an Algorithm’. Financial Times, 8 Sept. 2016.</p> <p><a href="#_ftnref10">[10]</a>&nbsp;Bourdieu, Pierre. The Logic of Practice, p. 53. Stanford University Press, 1990.</p> <p><a href="#_ftnref11">[11]</a>&nbsp;Morton, Timothy. Hyperobjects: Philosophy and Ecology after the End of the World. University of Minnesota Press, 2013.</p> <p><a href="#_ftnref12">[12]</a>&nbsp;Keddell, Emily. ‘Predictive Risk Modelling: On Rights, Data and Politics.’ Re-Imagining Social Work in Aotearoa New Zealand, 4 June 2015.</p> <p><a href="#_ftnref13">[13]</a>&nbsp;McIntyre, Niamh, and David Pegg. ‘Councils Use 377,000 People’s Data in Efforts to Predict Child Abuse’. The Guardian, 16 Sept. 2018. www.theguardian.com.</p> <p><a href="#_ftnref14">[14]</a>&nbsp;Eubanks, Virginia. ‘A Child Abuse Prediction Model Fails Poor Families’. Wired, 15 Jan. 2018.</p> <p><a href="#_ftnref15">[15]</a>&nbsp;Griffin, Roger. ‘The Palingenetic Core of Fascist Ideology’. Library of Social Science, n.d.</p> <p><a href="#_ftnref16">[16]</a>&nbsp;Shane, Scott, Cade Metz, and Daisuke Wakabayashi. ‘How a Pentagon Contract Became an Identity Crisis for Google’. The New York Times, 30 July 2018.</p> <p><a href="#_ftnref17">[17]</a>&nbsp;Department for Digital, Culture, Media &amp; Sport. ‘Consultation on the Centre for Data Ethics and Innovation’. GOV.UK, 13 June 2018.</p> <p><a href="#_ftnref18">[18]</a>&nbsp;McQuillan, Dan. ‘People’s Councils for Ethical Machine Learning’. Social Media + Society 4.2 (2018): 2056305118768303. SAGE Journals.</p> <p><a href="#_ftnref19">[19]</a>&nbsp;Plant, Sadie. The Most Radical Gesture: The Situationist International in a Postmodern Age. Routledge, 1992.</p> <p><a href="#_ftnref20">[20]</a>&nbsp;Mordvintsev, Alexander, Christopher Olah, and Mike Tyka. ‘Inceptionism: Going Deeper into Neural Networks’. Research Blog, 17 June 2015.</p> <p><a href="#_ftnref21">[21]</a>&nbsp;Akten, Memo. ‘Learning to See’. Memo Akten, 2018.</p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/dan-mcquillan/manifesto-on-algorithmic-humanitarianism">Manifesto on algorithmic humanitarianism</a> </div> </div> </div> </fieldset> <div class="field field-topics"> <div class="field-label">Topics:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> Civil society </div> <div class="field-item even"> Science </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by 4.0 </div> </div> </div> digitaLiberties 1968 artificial intelligence Dan McQuillan Sat, 13 Oct 2018 10:28:51 +0000 Dan McQuillan 120081 at https://www.opendemocracy.net Is the tide turning on regulating Facebook and Google? 
https://www.opendemocracy.net/uk/leighton-andrews/is-tide-turning-on-regulating-facebook-and-google <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>Facebook and Google are modern utilities – and natural monopolies – so they need a utility regulator.</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/549093/zuckerberg.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/549093/zuckerberg.jpg" alt="" title="" width="460" height="307" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style="" /></a> <span class='image_meta'></span></span><em>Image: Facebook CEO Mark Zuckerberg, May 2018. Rights: NUR Photo/SIPA USA/PA Images, all rights reserved.</em></p><p>Let me start with two quotes, six months and a continent apart. The first is from the current chief executive of Ofcom, who told the Cambridge RTS Conference last year that while she believed that Facebook and Google were media companies, she didn’t “think regulation is the answer because I think it is really hard to navigate the boundary between <a href="https://www.opendemocracy.net/uk/leighton-andrews/why-regulators-like-ofcom-are-dropping-ball-on-fake-news-dark-advertising-and-ex">regulation and censorship of the internet</a>”. Six months later, in an interview with CNN, Facebook’s founder <a href="https://www.buzzfeednews.com/article/alexkantrowitz/mark-zuckerberg-im-not-sure-we-shouldnt-be-regulated">Mark Zuckerberg said "I’m not sure we shouldn’t be regulated"</a>.</p> <p>There are good grounds for believing that we have witnessed a regulatory turn, that this has moved well beyond media policy, and that European regulatory proposals may become the ‘gold standard’ for global regulation. And there are signs that, even if Brexit happens, the UK will not be immune from the regulatory tide. The 2017 Conservative Manifesto, now being implemented through the Digital Charter, the Green Paper on Internet Safety and other measures, contained a series of proposals, including the establishment of a regulatory framework in law. <a href="https://www.conservatives.com/manifesto">The manifesto</a> was explicit in its emphasis: “Some people say that it is not for government to regulate when it comes to technology and the internet. We disagree.” </p> <p>Indeed, by July 2018, even <a href="https://www.thetimes.co.uk/article/it-s-time-to-regulate-social-media-sites-that-publish-news-pxsg9t3fv">Ofcom had arguably changed its position to support regulation</a>.</p> <p>Today, the debate over the role of information intermediaries such as Facebook – following the Cambridge Analytica controversy and the revelations of abundant Russian election and referendum interference, revealed by <a href="https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy">painstaking investigative journalism</a> and <a href="https://medium.com/tow-center/who-hacked-the-election-43d4019f705f">detailed academic research</a> – encompasses a range of issues which fundamentally raise questions of state sovereignty and the political sphere of regulation. </p> <p>Facebook and Google are more than media companies. They are advertising engines, data controllers, information service providers and algorithm developers. 
And they are moving into a variety of new fields such as artificial intelligence and virtual or augmented reality, leveraging the revenues they are earning from advertising. Their corporate power is unprecedented. They have purchased early-stage ventures which might have turned out to threaten their position, and their dominance risks damaging innovation. In their main fields, they are arguably now natural monopolies. </p> <p>The role of network effects and economies of scale driven by Big Data consolidates and concentrates their power as first-movers. The entry costs for new suppliers are so high as to be prohibitive. Their ability to imitate and replicate at low cost the new services offered by competitors reduces the effects of competition. It is difficult for consumers to switch or exit when, in the case of Facebook, most of their friends may be on the platform, and, in the case of Google, its dominance of data makes it difficult for any other search engine to approach the quality of service it provides. Cross-platform sharing of data within a group of companies such as Facebook, Instagram and WhatsApp intensifies their dominance. </p> <p>I suggest creating a new category of ‘Information Utilities’ for specific markets such as search and social media, drawing on <a href="https://reutersinstitute.politics.ox.ac.uk/our-research/news-plurality-digital-world%20.">proposals for the ‘statutory underpinning’</a> of a new regulatory framework suggested by a former Ofcom regulator.</p> <p>Information Utilities would be licensed as such and they would have specific reporting regulations in respect of the regulator, which would be granted strong back-stop intervention powers. Dominant ‘Information Utilities’ – whose dominance might be measured in terms of their significant market power, such as their share of the online or mobile advertising markets – would have the most stringent reporting duties. These proposals would be compatible with the imposition of a ‘duty of care’ on social media companies being proposed by others. In the case of <a href="http://observer.com/2009/07/the-evolution-of-facebooks-mission-statement/">Facebook, its founder has regularly referred to it as ‘a social utility’</a>, and in his 6,000-word manifesto last year he referred to it as <a href="https://www.facebook.com/notes/mark-zuckerberg/building-global-community/10103508221158471/?pnref=story">‘social infrastructure’</a> on several occasions. Perhaps we should take Zuckerberg at his word and accept that Facebook is a social utility and a form of social infrastructure. Utilities, after all, are regulated.</p> <p>In the past, Parliament has regulated to control monopoly power. For example, the 1984 Telecommunications Act, introduced when BT was privatised, recognised the danger of such a dominant player being able to exert anti-competitive power and put in place a strong regulatory framework. The situation of Facebook and Google is different, but they are dominant in their spheres and have significant market power. Their potential for exploitation by hostile state actors, as we have seen in both the US Presidential election and in the UK’s EU referendum, means that they should be seen as critical social infrastructure. There would need to be a lead regulator in respect of this new framework for Information Utilities, which should additionally be charged formally with convening regular meetings with other relevant regulators. 
Today, the regulation of social media is no longer a media issue but a social issue.</p> <p><em>This is an extract from "Anti-social media? the effect on Journalism and Society" edited by Mair, Clark, Fowler, Snoddy and Tait (Abramis), which is being launched at the Frontline Club on 26 October.</em></p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/uk/brexitinc/emma-briant/our-governments-share-responsibility-for-cambridge-analytica-crisis-and-her">Our governments share responsibility for the Cambridge Analytica crisis… and here’s how they should fix it</a> </div> <div class="field-item even"> <a href="/uk/jennifer-cobbe/problem-isn-t-just-cambridge-analytica-or-even-facebook-it-s-surveillance-capitali">The problem isn’t just Cambridge Analytica or Facebook – it’s “surveillance capitalism”</a> </div> <div class="field-item odd"> <a href="/digitaliberties/moh-hamdi/facebook-and-journalism-part-two">Facebook and journalism. Part two</a> </div> <div class="field-item even"> <a href="/can-europe-make-it/joren-de-wachter/facebook-privacy-and-use-of-data">Facebook, privacy, and the use of data</a> </div> <div class="field-item odd"> <a href="/digitaliberties/juan-ortiz-freuler/techlash-why-facebook-s-approach-to-fakenews-ultimately-fails">Techlash: why Facebook’s approach to #FakeNews ultimately fails</a> </div> <div class="field-item even"> <a href="/mary-fitzgerald-peter-york-carole-cadwalladr-james-patrick/dark-money-deep-data-voicing-dissent">Dark Money Deep Data</a> </div> </div> </div> </fieldset> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by 4.0 </div> </div> </div> uk digitaLiberties OurBeeb Leighton Andrews Wed, 10 Oct 2018 06:00:10 +0000 Leighton Andrews 119795 at https://www.opendemocracy.net Our governments share responsibility for the Cambridge Analytica crisis… and here’s how they should fix it https://www.opendemocracy.net/uk/brexitinc/emma-briant/our-governments-share-responsibility-for-cambridge-analytica-crisis-and-her <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>Government must regulate before privatised military propaganda firms interfere with any more elections</p> </div> </div> </div> <p dir="ltr"><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/553846/Nix_2.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/553846/Nix_2.jpg" alt="" title="" width="460" height="314" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style="" /></a> <span class='image_meta'><span class='image_title'>Cambridge Analytica/SCL's Alexander Nix. Image, Sam Barnes. CC2.0</span></span></span></p><p dir="ltr">A series of whistleblowers, journalistic investigations and public inquiries this year have reinforced concerns academics like me have had for some time about the rapid development of highly manipulative communication technologies. As our online activities are increasingly monitored and monetized, we are being made more vulnerable to powerful actors abusing data for propaganda targeting. 
</p><p dir="ltr">This is enabled by digital platforms and influence industry applications that consumers trust, and which obscure their central purpose as part of their business model. Following questions of manipulation during Brexit and Trump campaigns inquiries interrogated the respective roles of: the campaigns themselves; foreign actors such as Russia; digital media platforms; influence industry companies and their business models and methodologies. Now US Senator <a href="https://www.scribd.com/document/385137394/MRW-Social-Media-Regulation-Proposals-Developed">Mark Warner</a> and the <a href="https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/363/36302.htm">Fake News Inquiry</a> in the UK have come up with some helpful solutions for the problem of ‘fake news’ and digital campaign practices that may undermine democracy… how well do these address the problem at hand? Well, these proposals largely focus on: Information Operations (IO) and coordinated responses to Russia; privacy and transparency measures largely focused on encouraging better behavior from digital platforms like Facebook; and providing public media education. </p><p dir="ltr">The extent to which platforms like Facebook are complicit has been central to media debates, to the neglect of other aspects of the problem. Scholarly proposals rightly emphasize a need to address the monopoly of these platforms. Some (Baron et al 2017; Freedman, 2018; Tambini, 2017, for example) say forcing data portability, whereby users are able to take their data to competitors, might reduce the monopoly power enjoyed by Facebook. &nbsp;</p><p dir="ltr">Privacy measures like GDPR and other measures aimed at platforms would certainly be helpful. However, a central question has been neglected by media and reports and yet is all the more urgent as we plan for upcoming elections in the UK and US – this concerns the influence industry and how government contracting helped create Cambridge Analytica and its parent company SCL. If UK and US responses are likely to include more propaganda or ‘information operations’ (IO), to counter Russia, it is unfortunate that both reports fail to address the fact the company central to the scandal emerged out of this kind of contracting work for US and UK governments and NATO. My <a href="http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/fake-news/written/84032.html">submission to the Fake News Inquiry</a> from my academic research helped expose this link and indicated problems which seem to be largely unaddressed by recent proposals. </p><p dir="ltr">Policymakers must consider whether oversight and intelligence mechanisms were adequate as they failed to identify or prevent a developing problem. We must write to them demanding they make these necessary changes to ensure there can be no recurring issues with another contractor. </p><h2 dir="ltr">Addressing Facebook’s Monopoly Power</h2><p dir="ltr">Many of those who best anticipated how powerfully ‘big data’ would transform ‘influence’, were those who saw it as an opportunity to be exploited for profit. The opaque and monopolistic business models of digital platforms have recently been scrutinized, highlighting the implications of data harvesting and misuse as well as consent and privacy issues. 
Solving this is vital to <a href="https://www.eff.org/deeplinks/2018/07/facing-facebook-data-portability-and-interoperability-are-anti-monopoly-medicine">enabling data portability</a>, something unlikely to be successfully achieved if left to the goodwill of profit-orientated companies<a href="http://journals.sagepub.com/doi/abs/10.1177/0267323118790156?journalCode=ejca&amp;"> like Facebook</a>. Yet if we enable consumers to ‘be in control of’ their data and take it to competitors, we need to protect them too, to ensure we are not making them vulnerable to other companies like Cambridge Analytica, who may be keen to obtain and exploit their data in further unethical ways. And digital ‘whack-a-mole’ banning of particular techniques, or dropping of companies as they are exposed in the media, would leave us falling short of responding to complex, multi-layered, adaptive manipulation, or of preventing problems as a fast-moving industry develops. We must address a problem not just of social media companies, but of an influence industry with deeply concerning norms. </p><h2 dir="ltr">Unaddressed problems in the influence industry</h2><p dir="ltr">These companies grew not just from political campaigning and commercial advertising; some emerged through our own governments’ information warfare. There has long been a revolving door between the military and intelligence services and the private influence industry. PR companies and wider cultural industries have frequently been involved in wartime propaganda, which raises problems in itself. Particularly as defense and intelligence methodologies increase in sophistication, we must take more seriously the risk of knowledge migrating the other way, into commercial and electoral campaigning.</p><p dir="ltr">Specific training and/or knowledge formally or informally acquired in a military or intelligence context could include: disinformation and deception techniques; methods used to demoralize an enemy; methods of harnessing psychological weaknesses or violent tendencies within a population or group; methods for influencing extremists, or increasing or decreasing inter- and intra-group tensions; and techniques and specialist knowledge about surveillance and hacking – all of which, many would recognize, would be inappropriate knowledge to risk having among teams handling election campaigns if we wish to prioritize the protection of democracy. </p><p dir="ltr">The inquiries and journalistic investigations have raised concerns about relationships between a defense contractor, SCL, and Cambridge Analytica, who ran political campaigns; concerns included possible data, financial and staffing overlaps, with some staff on defense projects working on political campaigns, and questions of whether defense-derived methodologies or possible hacking may have been used in political campaigns. It cannot be left to individuals’ personal integrity; that raises too great a level of risk. And it can be hard for people to speak out, particularly in the light of the silencing and monitoring strategies of governments in the national security sphere.</p><p dir="ltr">It is vital that governments not shy away from considering how companies seek to adapt services developed for defense beyond that domain, for example by adapting a business model or company structure to obscure what they do in lucrative political campaigns. 
SCL were a government contractor who developed their methodology through their own research facility, the ‘Behavioural Dynamics Institute’ (BDI), including through collaboration with the US government’s Defense Advanced Research Projects Agency (DARPA) (<a href="https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/fake-news-briant-evidence-17-19/">Briant, 2018</a>; <a href="http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/fake-news/written/81874.html">Wylie, 2018a</a>). All SCL Group companies could draw on the methodologies developed. If these methodologies were able to inform tactics deployed in democratic elections, this is very serious. </p><p dir="ltr">In <a href="http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/fake-news/oral/81592.pdf">her Fake News Inquiry testimony</a>, Brittany Kaiser, former Development Manager for Cambridge Analytica, revealed that: </p><p dir="ltr">“I found documents from Nigel Oakes, the co-founder of the SCL Group, who was in charge of our defence division, stating that the target audience analysis methodology, TAA, used to be export controlled by the British government. That would mean that the methodology was considered a weapon — weapons grade communications tactics — which means that we had to tell the British government if that was going to be deployed in another country outside the United Kingdom. I understand that designation was removed in 2015.”</p><p dir="ltr">The potential for defense-derived methods and knowledge to be commercially sold in other industries raises further risks to national security, as techniques could migrate abroad. Whistleblowers raised concern that, while SCL Group were delivering counter-Russian propaganda training for NATO, Cambridge Analytica was pitching to Lukoil, a Russian FSB-connected oil company – and that the methods for both might be based on a similar methodological core which could be utilized by Russia. My own evidence indicates that, around the same time, Alexander Nix from CA contacted Julian Assange at Wikileaks about amplifying the release of damaging emails; the Russian government has been accused of the hacking of these emails, which it denies.</p><h2 dir="ltr">Oversight of defense</h2><p dir="ltr">CA and many SCL Group companies may have gone bankrupt now, but SCL Insight appears to remain, and new companies are growing from their ashes (Auspex International, Emerdata and Datapropria, for example – Datapropria, 2018; Murdock, 2018; Siegelman, 2018b). These companies are part of a wider industry we mustn’t lose sight of. </p><p dir="ltr">Proposals from politicians and media demand information warfare responses to Russia, but do not fully consider how to address the problems the SCL case highlighted in how such work is overseen by government, or how intelligence and oversight might be strengthened to prevent future recurrence. It is important to ensure potential vulnerabilities that might have contributed to the crisis are addressed. 
Nigel Oakes, the CEO of defense contractor SCL Group, <a href="https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/fake-news-briant-evidence-17-19/">said to me in interview</a>, “the defense people can't be seen to be getting involved in politics, and the State Department, they get very upset-”, and stated that they imposed “strong lines” between the companies as a result. If the State Department had expressed concern, one might wonder if this could be due to the troublingly anti-democratic and potentially destabilizing roles CA played in international elections in Nigeria, Kenya and beyond. Oakes’ comments imply that the State Department may have been concerned that there was something to be ‘upset’ about in the conduct of, or relationships between, the companies. Oakes, the defense contractor, <a href="https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/fake-news-briant-evidence-17-19/">in interview with me</a>, stressed his importance to the methods underpinning what CA did in politics, saying that if Alexander Nix was “the Steve Jobs, I’m the Steve Wozniak. I’m sort of the guy who wants to get the engineering right and he’s the guy who wants to sell the flashy box. And he’s very good at it. And I admire him enormously for doing it. But I’m the guy who say, yeh, but without this you couldn’t do any of that!”. It is vital that the US and UK governments, including research entities like DARPA who worked with BDI, build into private contracts more control over the tools and weapons they help to create for information warfare.</p><p dir="ltr">The public also needs to know that networks of companies cannot obscure unethical practices, flows of data, financial interests or possible conflicts of interest with foreign powers – all concerns raised in the Cambridge Analytica scandal. On the question of related companies, the UK Fake News Inquiry’s recent interim <a href="https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/363/36302.htm">report states</a>:</p><p dir="ltr">“We do not have the remit or the capacity to investigate these claims ourselves, but we urge the Government to ensure that the National Crime Agency thoroughly investigates these allegations.”</p><p dir="ltr">This sadly is beyond the current scope of the UK National Crime Agency, and a UK Defence Select Committee inquiry is needed to review this, and to review why UK export control restrictions were removed. </p><h2 dir="ltr">What to do</h2><p dir="ltr">We need government to take action to address these problems. Investigations in both countries should ensure oversight is fully reviewed, particularly in relation to oversight of groups of companies, protection against commercial exposure of IO practices, and their migration into elections. Transparency in government contracting, stronger oversight and reporting mechanisms, and reforms in the industry – such as licensing that could be revoked, or fines set at a deterrent level – are essential in both countries to prevent future scandals. Transparency in the US could be improved by a reporting system for private companies equivalent to Companies House in the UK. There could also be penalties for defense contractors in each country found obscuring overlaps and company relationships. 
Strict regulation of the influence industry, or perhaps professional licensing that can be revoked on evidence of abuse, would not only protect citizens; it would give substance to a truthful narrative that undermines Russian and other hostile narratives directed at democracies. And it would commercially protect the industry itself, creating a ‘soft power’ economic benefit for industry and Western governments.</p><p dir="ltr">While Damian Collins MP of the UK Fake News Inquiry and Sen. Mark Warner are rightly cautious about government interventions regulating the media, improved oversight in the national security realm, electoral protections and licensing in the influence industry pose little threat to free speech. Indeed, unethical conduct in the influence industry could itself be argued to threaten free speech and democratic debate. Policymakers should ensure a) that competition is enabled via data portability and b) that influence industries are properly regulated, with enforced codes of conduct, professional licensing of the kind we see in other professions, and robust monitoring of companies and individuals beyond their contracts, to ensure defense technologies are restricted and elections are protected. Preconditions for resolving this include, of course, greater transparency in the industry, and may extend to dedicated monitoring by expert-led independent regulators and industry licensing bodies. This would actually strengthen an industry in which the absence of regulation has become unsustainable, threatening, in this case, democracy and national security. Current unethical practices can also be exploited by those wishing to spread narratives about the ‘corrupt West’ and the weakness of democracy. Together, the measures proposed above would ensure that data portability produces competition and therefore innovation, while ensuring that media consumers are not vulnerable to the actions of unethical companies. Democratic controls would strengthen public trust in democracy, help to protect and secure our elections, and have a long-term ‘soft power’ benefit for both countries.</p><p dir="ltr"><em>Bibliography</em></p><p dir="ltr">Baron, S; Crootof, R & Gonzalez, A. (2017) <a href="https://law.yale.edu/system/files/area/center/isp/documents/fighting_fake_news_-_workshop_report.pdf">Fighting Fake News Workshop Report</a>, Yale University.</p><p dir="ltr">Datapropria (2018) ‘Data and Behavioral Science Experts’, Datapropria.com.</p><p dir="ltr">Freedman, D (2018) ‘Populism and media policy failure’, European Journal of Communication: journals.sagepub.com/doi/abs/10.1177/0267323118790156?journalCode=ejca</p><p dir="ltr">Kaiser, Brittany (2018) ‘Oral Evidence to the Digital, Culture, Media and Sport Committee: Fake News, HC363’.</p><p dir="ltr">Murdock, Jason (2018) ‘What is Emerdata?
As Cambridge Analytica Shuts, Directors Surface in New Firm’, <a href="https://www.newsweek.com/what-emerdata-scl-group-executives-flee-new-firm-and-its-registered-office-909334">Newsweek</a>.</p><p dir="ltr">Siegelman, W (2018b) ‘<a href="https://medium.com/@wsiegelman/theyre-back-ex-cambridge-analytica-employees-launch-auspex-international-to-focus-on-social-and-aa51a0ab6fca">They’re Back: Ex-Cambridge Analytica employees launch Auspex International to focus on social and political campaigns in the Middle East and Africa</a>’, Medium.</p><p dir="ltr">Tambini, D (2017) ‘Media Policy Brief 20 Fake News: Public Policy Responses’, LSE Media Policy Project: eprints.lse.ac.uk/73015/1/LSE%20MPP%20Policy%20Brief%2020%20-%20Fake%20news_final.pdf</p><p dir="ltr">Wylie, C (2018a) ‘A response to Misstatements in relation to Cambridge Analytica’, <a href="http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/fake-news/written/81874.html">Supplementary written evidence published by the Digital, Culture, Media and Sport Committee</a>.</p><p dir="ltr"><em>Emma L Briant, 9 October 2018</em></p> Russia, the internet and "political technologists" - is this the future of democracy?
https://www.opendemocracy.net/uk/nick-inman/monument-to-last-democrat <p>As more revelations emerge about Russian interference in Western democracies, Nick Inman reviews a BBC broadcast that asks if Russia is merely where 21st century ideas of democracy died first.</p><p><em>Image: Jaap Arriens/PA Images, all rights reserved</em></p><p>Democracy used to be defined as a system in which a society debates the issues that confront it. Two or more teams compete with each other to persuade a majority of voters that they have the most workable and cogent solutions to the problems. They put forward evidence and reasons as to why they should be trusted. A vote is held. And then the winning team implements its policies over the following few years with the full and conscious consent of the electorate.</p> <p>This description may be more fantasy and nostalgia than reality in today’s democracies, according to Peter Pomerantsev of the London School of Economics, as he explains in a recent radio broadcast, <em><a href="https://www.bbc.co.uk/programmes/b0b91w19">British Politics: A Russian View</a></em>. The programme is public broadcasting at its best. It should be required listening for anyone trying to make sense of the last presidential election in the US, the Brexit referendum result, the collapse of traditional political parties in France and the success of populists everywhere.</p> <p>As the title suggests, the programme begins by looking at British politics through Russian eyes; but Peter Pomerantsev has something much more important to offer us. “Russia could well be the country where the future arrived first,” he says in his introduction, “where 21st century ideas died earliest; and another type of political logic emerged.”</p> <p>Democracy as described above is hard work. All that campaigning and rallying, debating, interviewing and sound-biting, posturing and questioning of candidates consumes a lot of time and energy. It is far too hit and miss for today’s political aspirants, reared on the Silicon Valley mantra of “move fast and break things”. In all western democracies our worried eyes should not be on the candidates but on the invisible people working on their behalf who go under a new job title: “political technologists”.</p> <p>The tools and materials of the technologist are not speeches, rosettes, walkabouts and mass meetings but big data and social media. He/she (usually the former) goes for the jugular of democracy. Who cares which policy is right or wrong, better or worse; the object of an election or referendum is to win and nothing else. The political technologist’s only concern is to deliver a majority for the candidate or cause that is paying for his services. Every election is just another gig. Success requires a hyper-rationalist, asset-stripping, technological approach to the mechanics of democracy.</p>
<p>To win an election, it is not necessary to build a majority in the sense of a mass of like-thinking people who share a common interest and a vision for the future. The technologist seeks a majority in number only: 50.01% or more. It’s maths, not politics.</p> <p>“The electorate, unlike well-defined social groups, is a very plastic thing,” says one contributor to the BBC programme. “You just draw up a map of the electorate and gather a majority on the side you need.”</p> <p>This flash majority doesn’t need to endure into the morning after the election. Like a furtive subatomic particle, it only needs to exist at the moment the count is taken. After that it can happily dissipate. The technologist’s client has by then assumed power.</p> <p>The creation of ephemeral majorities is possible because our addiction to social media delivers vast amounts of data into the technologist’s hands and allows precisely targeted communications.</p> <p>The first step of a contemporary electoral campaign is to establish a narrative or “fairy story” to be used in all its communications. This needs to be emotive rather than rational. It must be summed up in a single memorable slogan. It must not suggest an uncertain future or difficult choices, let alone the need for nuance and compromise. A phrase like “take back control” can mean whatever the hearer wants to believe it means.</p> <p>It is far easier for the technologist to mobilise the voter against some clearly identifiable demon than against the uninspiring but functioning status quo. To get the voter’s attention, the technologist must promise radical action from a baggage-free outsider who will eject elites and corrupt incumbents from office and sweep the Augean stables clean. Fear and anger have to be harnessed and directed towards a nominated bogeyman. According to one estimate, made by a participant in the programme, identifying a villain can immediately add 20% of the vote to a campaign.</p> <p>Terminology is carefully controlled because words propagate rapidly online. “The people”, “the few”, “the have nots” and even “us” all make the voter feel part of a just cause.</p> <p>The technologist borrows tactics from the world of marketing. His or her real task is to segment the “market” using the data that social media users willingly feed into the system. The electorate is seen as different “communities” built around single interests or obsessions. Each of these needs to be fed a particular message, preferably one that can be taken to heart and shared promiscuously. If this sounds to you as if the election of politicians and the direction of international relations are being treated with the same level of seriousness as the “liking” of videos showing the antics of talented cats, you are getting the right idea.</p> <p>In fact, animals provide a good example of what the technologist can achieve. Convince a critical mass of animal-lovers that the EU is more cruel in its farming regulations than a future re-sovereignised UK government will be and you have viral videos of calves crowded into lorries doing the rounds of kind-hearted people, generating emotional support for the Leave campaign.</p> <p>Hold on, though, because it gets more alarming. If the political technologist can target “communities” using the tools of social media, why shouldn’t he target each individual in the way in which he will be most susceptible to instruction?</p>
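<p>It is worth pausing on how little machinery this requires. Here is a deliberately crude sketch, in Python, of per-individual message selection; every profile field, score and slogan in it is invented for illustration, and real operations work from vastly richer harvested data:</p>

```python
# Toy illustration of micro-targeting: send each voter the message variant
# their inferred traits score highest on. All data below is invented.

VARIANTS = {
    "fear_of_crime":  "Only we will make your streets safe again.",
    "animal_welfare": "Stop cruel livestock exports. Take back control.",
    "anti_elite":     "The elites have failed you. Time to clean house.",
}

def pick_message(profile: dict) -> str:
    """Choose the variant matching the voter's strongest inferred concern."""
    scores = profile["issue_scores"]
    best_issue = max(scores, key=scores.get)
    return VARIANTS.get(best_issue, "Vote for change.")

voter = {"issue_scores": {"fear_of_crime": 0.2,
                          "animal_welfare": 0.9,
                          "anti_elite": 0.4}}
print(pick_message(voter))  # this voter sees only the animal-welfare variant
```

<p>Nothing in the sketch is sophisticated; the power lies entirely in the data that feeds it and the scale at which it runs.</p>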
<p>As another contributor to the programme puts it: “I can shout into one person’s ear one message and shout into another person’s ear – who is right beside them – another message and neither of them understands that I am shouting different messages to different people”.</p> <p>Each person in the ephemeral majority that flickers across election night may have voted for an entirely personal reason, like passengers sitting next to each other on a cut-price airline who have paid different ticket prices for the same service. You really can fool all the people all of the time as long as you fool each person in a customised way.</p> <p>There are many useful conclusions to draw from this abuse of democracy by technology. One is that no one who claims to speak for the “will of the people” should be taken seriously. That should be “the wills of people.”</p> <p>Another lesson is that we should stop looking the wrong way. All those commentators who talk about seismic shifts in society may be talking nonsense to justify their jobs. What we are seeing may not be anything to do with how the people do or do not think and feel. It might all be dictated by the choices they are fed by unscrupulous operators.</p> <p>What is certain is that post-post-modern elections and referendums have nothing to do with informed debate and rational, independent decisions taken by marginal voters. One speaker on the programme illustrated this with an image: “If there were a monument to the democratic citizen it should be a person who is ready to change his position on the basis of an argument. We’re living in a democracy in which this person doesn’t exist anymore because nobody is trying to change his mind, everyone is playing on his feelings”.</p> <p><em>British Politics: A Russian View, Analysis, BBC Radio 4, 30 minutes. Presented by Peter Pomerantsev, senior visiting fellow at the LSE, produced by Ant Adeane. First broadcast on 9 July 2018. To listen or download go to: <a href="http://www.bbc.co.uk/programmes/b0b91w19">www.bbc.co.uk/programmes/b0b91w19</a></em></p>
<p><em>Nick Inman, 5 October 2018</em></p> Techlash: why Facebook’s approach to #FakeNews ultimately fails https://www.opendemocracy.net/digitaliberties/juan-ortiz-freuler/techlash-why-facebook-s-approach-to-fakenews-ultimately-fails <p>Zuckerberg needs to take a step back and allow institutions to flourish. There are only bigger waves looming ahead.</p><p><em>Graffiti depicting Mark Zuckerberg on the wall of separation in Bethlehem, August 2018. Richard Gray/Press Association. All rights reserved.</em></p><p>Every policy tweak Facebook attempts to roll out is met with public criticism. This signals a structural problem: Facebook developed faster than its own systems of governance and now struggles to carry its own weight. In other words, Facebook seems to lack the legitimacy to exercise the huge power it has amassed over the years.</p> <p>If the user base were smaller, Facebook would have a group of like-minded individuals that could be more easily catered to. But Facebook has become very big and diverse. With over 2 billion active monthly users, it’s bigger and more diverse than any community we’ve ever seen.</p> <p>If Facebook hadn’t taken such an aggressive strategy towards consolidating itself as a central node for information distribution, then it would be held to lower standards. But it has amassed such power that experts and public opinion refer to it as the <em>digital public square</em>: the place where people protest, sign up for public events, get information about politics, and more.</p> <p>So Facebook’s responsibilities go beyond those of any other digital platform, and yet people seem to believe it fails to cope with these responsibilities. The percentage of people who think Facebook is <a href="https://www.recode.net/2018/4/12/17215142/facebook-negative-impact-society-mark-zuckerberg-poll">having a negative impact on society</a> is as high as 33% in Australia and 20% in Brazil. This happens against a background in which many are pointing out that Facebook’s business practices in the global south represent <a href="https://www.huffingtonpost.com/entry/facebook-cambridge-analytica-developing-world_us_5ab50bc7e4b0decad04951d1">a new form of colonialism</a>.
<p dir="ltr"><strong>Screenshot: <a href="https://www.blog.google/products/search/how-google-autocomplete-works-search/">Google autocomplete predictions</a></strong></p><p>Creating rules to govern a group that is large, diverse, and has a lot of skin in the game is a problem that is not in itself new. It’s actually a problem rulers have faced daily since before the times of the Roman Empire.</p><p>How have they survived? Institution building. Centuries of iterations towards developing institutions capable of offering solutions that are fair. Or perhaps, more accurately, <em>perceived</em> as fair. If a person feels a decision has been unfair, there is a system of checks and balances: an ombudsman who oversees the acts of government and is ready to act in defense of people’s interests, a system of independent judges, and, ultimately, an electoral system through which to change the people in charge of appointing the heads of institutions.</p><p>As part of its efforts to stamp out misinformation, Facebook is building a network of fact-checking NGOs. This decision might seem like an approach towards institution building. But the process is opaque and based on a customer-service model that excludes the chance of real engagement.</p><p>Facebook needs to engage its users in more substantive acts of participation. That is how a shared identity can be developed, a feeling of belonging. The precursors to an <em>actual</em> community. Wikipedia offers a good example of a participatory governance model with proven capacity to deal with tussle. Facebook has a larger community to cater to, but also has more funds. Facebook can and should go further.</p><p>Institutions are ultimately built on community and trust. Facebook has some sort of loose community it can start working with. But it is definitely running low on trust.</p><p dir="ltr"><strong>Screenshot: <a href="https://blogs.bing.com/search/2013/03/25/a-deeper-look-at-autosuggest">Bing autocomplete</a></strong></p><p>Facebook mistakenly assumes that locally respected fact-checking NGOs can supply the trust Facebook itself is lacking.
But NGOs lack the standing to execute the tasks Facebook wants to offload.</p> <p>In a liberal democracy, NGOs can legitimately elevate arguments and offer counterarguments to government positions. These actions imply adding to or organizing the arguments of a public debate. In liberal democracies we assume arguments will be tested publicly and, after the positions of different stakeholders have been taken into account, perhaps adopted as policy by democratically elected representatives. NGOs have not gone through any process that could possibly provide the legitimacy to limit the arguments available for public debate.</p> <p>If Facebook wants to remain the digital public square, it needs to develop its own process of institution building. Bottom-up. One that resolves the tension between maximizing corporate profit and being responsive to social problems. It needs to limit Facebook Corp. to selling ads, and delegate decision-making power regarding all other matters to a set of institutions, including ones enforcing transparency rules over Facebook Corp.</p> <p>Facebook needs to involve its more than 2 billion members in debate, elections and decision-making. Candidates would be in charge of explaining to users how the technology works, and of discussing publicly how it should be used. Candidates would be in charge of translating human rights into this digital age. Their impact would go far beyond Facebook itself. It would reduce a gap in information that is fueling a <em>techlash</em> and is likely to lead governments to implement bad regulation that would affect the whole internet.</p><p>The NGO-network approach is a reasonable patch for an urgent problem. But Facebook needs to start signalling what its long-term strategy looks like. Many, <a href="https://www.commondreams.org/views/2018/03/20/cambridge-analytica-scandal-drop-water-trickling-down-visible-top-iceberg">including myself</a>, believe the power Facebook (and a handful of other companies) wield is becoming too big a risk for society at large. Calls for Facebook to be broken up are gaining traction as public trust in the corporation gets increasingly eroded. Facebook can mitigate part of these concerns by distributing political control over the platform.</p><p dir="ltr"><strong><em>Screenshot: <a href="https://duck.co/help/features/autosuggest">DuckDuckGo autocomplete</a></em></strong></p> <p>So what should Facebook do?</p> <p>Building institutions is, I believe, the right answer. Yet there are reasons to be cynical about Facebook’s ability to establish institutions that actually work. After all, Facebook has shareholders to satisfy on a quarterly basis.
But even shareholders should understand that Facebook’s long-term sustainability requires a new governance structure. One that can ease the growing tensions between human values and profit. One that, by putting people first, ensures users feel part of some imagined community that is worth belonging to. One that is prepared for the challenges the future has in store.</p> <p>Mark Zuckerberg has a chance to take his promise to build <em>community</em> seriously. And it could be a community of a size and complexity the world has never seen. As Facebook navigates further into uncharted waters, one thing is clear: we can only expect the waves to get bigger.</p><p><em>This article was originally published by <a href="https://www.commondreams.org/views/2018/08/28/facebook-will-fail-solve-fakenews" target="_blank">Common Dreams</a> under a Creative Commons license.</em></p><p><em>Juan Ortiz Freuler, 5 September 2018</em></p> Do you agree?: What #MeToo can teach us about digital consent https://www.opendemocracy.net/digitaliberties/elinor-carmi/what-metoo-can-teach-us-about-digital-consent <p>The conversation around sexual consent could radically change the way we think of consent online.</p><p dir="ltr"><em>It started in the beginning of April. It was late at night, and I was swiping mostly left on the famous dating app Binder. One guy sent a message inviting me to experience his “enormous talent”. Rolling my eyes at yet another tempting offer, I unmatched him. Bored and tired of these original solicitations, I decided to watch another Chelsea Handler comedy special online and go to sleep.</em></p><p dir="ltr"><em>In the morning, when I opened Facebook I saw a new message from a person I didn’t recognise. “Hi hotstuff, did u see what I sent you yesterday? I’m free toni8, let’s meet! And here’s a preview pic to help ur imagination ;)”. Gross!
Oh god, I haven’t even had my coffee yet; how the hell did this guy find my personal Facebook account? Then I remembered that, for some reason, we have mutual friends. He must have searched my name and found me. I blocked and deleted the “talented” guy, thinking this is surely a one-time thing. But it wasn’t, it was only the beginning. Suddenly, I started receiving messages from other guys: “hey, remember we dated that one time a decade ago? Let's stay in touch, here’s a pic in case you forgot ;)". An hour later: "hey, remember we talked a couple of years ago in a pub? Let's hang out, k?". Pissed off and annoyed, I decided to close my Facebook account; I might not remember anyone’s birthday anymore, but I can’t handle this shit. But then the next hour, I received a message in my Gmail inbox. Then another message on Twitter and WhatsApp. They just kept coming, like zombies “<a href="https://www.theguardian.com/lifeandstyle/2017/may/08/cushioning-breadcrumbing-benching-language-modern-dating">haunting</a>” me; “ghosting” was no longer a thing, apparently. Guys who I swiped right and left on, dated or even just talked to once in the past found all my online accounts, even my Hotmail. “THAT’S IT! I am deleting all my accounts!!! I’m going offline, they can’t find me here!”. Disconnected from everything, I sat in my living room and felt relieved. No more intrusions, I thought with a smile. And just when I was enjoying the silence, I heard a knock on the door...</em></p><p dir="ltr">Even if you don’t participate in the wondrous world of online dating, if you have lived in Europe in the past year this story may be familiar in an altogether different light. During April and May, Europeans were harassed by multiple websites and services which they had previously visited or used. Bombarded by these uninvited intrusions into their private lives, people were left scratching their heads: when did they actually interact with these websites? Did they actually give their details? Why did the websites still have their contact details after such a long time? And how do we make it stop? Oh yes, we only need to consent.</p><p dir="ltr">Those who have lived in Europe for some time might also feel a sense of déjà vu. Since the European Union’s General Data Protection Regulation (GDPR) came into force on May 25, many websites have introduced pop-up boxes that ask for your consent to collect personal data. These requests are not dissimilar to those that started appearing on websites in 2009, following the revised EU e-Privacy Directive, which required websites and third-party actors to get consent before sending tracking technologies (such as cookies, pixels and others) to people’s computers. In addition, companies had to give a clear explanation of the purpose of any cookies they use and allow users to reject them entirely. Consent was defined in the Data Protection Directive as “any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed”. The solution that advertisers, publishers and tech companies decided on was a pop-up dialog box, stating that the website you are visiting will store a cookie on your device. People were supposedly empowered by clicking ‘agree’, ‘consent’, ‘accept’ or ‘OK’ in response.</p>
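<p>In engineering terms, what that click gates is simple. Here is a minimal sketch in Python of the consent-gated flow the Directive required; the cookie names and values are hypothetical, and real sites delegate this logic to consent-management platforms:</p>

```python
# Illustrative only: set tracking cookies solely when the visitor has
# clicked 'OK' in the pop-up box. All names and values are invented.

ESSENTIAL = {"session_id": "keeps you logged in"}          # exempt from consent
TRACKING  = {"ad_id": "abc123", "analytics_id": "xyz789"}  # requires consent

def cookies_to_set(user_clicked_ok: bool) -> dict:
    """Return only the cookies the site is allowed to store."""
    if user_clicked_ok:
        return {**ESSENTIAL, **TRACKING}
    return ESSENTIAL
```

<p>The point of the sketch is how thin the mechanism is: a single boolean, wrapped around business as usual.</p>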
<p dir="ltr">Privacy campaigners rejoiced and the internet changed forever… right? Well, not quite.</p><h2>What does consent mean?</h2><p dir="ltr">The consent pop-up box was a bad cosmetic treatment, one that was supposed to cover up how people’s data is used by companies without making an actual change. It did nothing to stop data collection, and it did not result in publishers and advertisers adding explanations of how cookies worked or what their purposes were. The business model adopted by most tech companies in the 2000s also remained unchanged. Websites continued to seemingly offer free services and content, while surreptitiously monetising both through the collection of vast amounts of user data. Internet users remained a captive audience; refusing to accept the consent pop-up boxes usually resulted in people being denied access.</p><p dir="ltr">With internet users left confused rather than empowered by these changes, in 2011 the Article 29 Working Party, the European Commission’s data protection advisory body, decided to clarify what consent online actually means. <a href="http://www.pdpjournals.com/docs/88081.pdf">In a document</a> addressing the confusion over the meanings of consent and how people can express it, the Article 29 Working Party identified several key characteristics: ‘indication’, ‘freely given’, ‘specific’, ‘explicit’ and ‘informed’.</p><p dir="ltr">Inspired by western liberal thought about individual freedom and autonomy, the European Commission’s definition of consent always assumes a rational person making decisions with all the information and facts available to them. But as historian Yuval Noah Harari said in <a href="https://www.ted.com/talks/yuval_noah_harari_why_fascism_is_so_tempting_and_how_your_data_could_power_it#t-660215">his recent Ted Talk</a>, “in the end, democracy is not based on human rationality, it is based on human feelings”. In an online context, to make an informed decision people need to know first how the online ecosystem works: <a href="http://crackedlabs.org/en/corporate-surveillance">which companies are collecting their data</a>? What is the value of their data? What kind of data do those companies use, and for what purposes? How might that affect them in the near and far future? For how long will that data be used? Will these data be used in other contexts and by other companies? And much more. But when even the CEOs of tech companies such as <a href="https://www.salon.com/2018/04/11/mark-zuckerberg-doesnt-know-how-facebook-works-unfortunately-neither-does-anyone-in-congress/">Mark Zuckerberg admit they do not fully understand how their systems work</a>, how can we expect internet users to make informed decisions?
</p><p dir="ltr">People make decisions according to their emotions, cultural background, education, cognitive abilities, financial situation, family history, different media representations they engage with, health condition, <a href="https://datasociety.net/output/searching-for-alternative-facts/">religious beliefs</a>, gender identity and many other parameters. To assume that a decision can, in the words of EU legislation be “freely given” and “informed”, is misguided and simply wrong. As the 2016 US presidential election and 2016 Brexit referendum show, many important decisions are influenced by micro-targeting. <a href="https://www.theguardian.com/technology/2018/jul/27/fake-news-inquiry-data-misuse-deomcracy-at-risk-mps-conclude">As the recent report</a> from the UK’s Digital, Culture, Media and Sport Committee about disinformation and fake news concludes: “relentless targeting of hyper-partisan views… play[s] to the fears and prejudices of people, in order to influence their voting plans and their behaviour”. Thanks to the design of online platforms, which conceal what happens in the back-end, these messages are tailored, personalised and targeted through computational procedures to<a href="https://www.theguardian.com/news/2018/may/06/cambridge-analytica-how-turn-clicks-into-votes-christopher-wylie"> influence people’s behaviour</a>. </p><p dir="ltr">Many thought that following the<a href="https://www.opendemocracy.net/anita-gurumurthy-amrita-vasudevan/snowden-to-cambridge-analytica-making-case-for-social-value-of-pri"> Cambridge Analytica</a> scandal people would leave Facebook and follow the #DeleteFacebook movement. But this didn’t happen. This is because, as Siva Vaidhyanathan, a professor at the Centre for Media and Citizenship <a href="https://www.theguardian.com/books/2018/jun/25/anti-social-media-how-facebook-disconnects-us-undermines-democracy-siva-vaidhyanathan-review?CMP=share_btn_tw">argues</a>, “for many people, deleting their accounts would amount to cutting themselves off from their social lives. And this has engendered a feeling that resistance is futile”. So why do we still get asked to consent to protect our online privacy and experience, when it is impossible to do so? </p><h2>Binding contract</h2><p dir="ltr">Consent has traditionally been used as part of a contract. You sign a contract for a house, job or insurance, as an indication that you agree to the conditions of the product, service or employment. But whereas these contracts are static and deal with one particular aspect, online contracts are far from it. In fact, it will take <a href="https://qz.com/691228/watch-norways-hosting-a-30-hour-reading-of-the-legalese-your-favorite-apps-use-to-control-you/">you days, if not weeks</a>, to read the terms and conditions of all the contracts of the online services, platforms and apps you use. Even if you do read all these terms, and manage to understand all the legal jargon deployed, online services frequently change their terms without notifying people. In this way, people have no way of engaging with and understanding what they actually consent to.</p><p dir="ltr">But even if you do manage to make the time and read all the terms, and companies will follow the GDPR’s Article 12 which requires them to be transparent about their procedures, it is still not enough to make an ‘informed decision’. 
<p dir="ltr">This inability to make sense of online contracts is what Mark Andrejevic, one of the most prominent scholars in surveillance studies, calls the <a href="http://ijoc.org/index.php/ijoc/article/view/2161">data divide</a>. As he argues, “putting the data to use requires access to and control over costly technological infrastructures, expensive data sets, and the software, processing power, and expertise for analysing them”. And as Zeynep Tufekci, a digital sociology professor, <a href="https://www.nytimes.com/2018/03/19/opinion/facebook-cambridge-analytica.html">points out</a>, given the constantly shifting nature of Facebook’s data collection, “consent to ongoing and extensive data collection can be neither fully informed nor truly consensual — especially since it is practically irrevocable”. In short: we simply cannot understand how the data collected about us is used. We do not have the processing abilities and big tech resources to see the wider picture.</p><p dir="ltr">This inability to understand algorithmic procedures also renders the <a href="https://www.eugdpr.org/article-summaries.html">GDPR’s ‘Article 21 - the Right to Object’</a> quite useless. The right to object enables people to refuse the processing of their personal data, including common practices used by digital advertisers such as profiling. But how can you object to something when you do not understand how your data can be used to harm you? In order to object, people first need to be aware of how their data is being used. Withdrawing consent under the GDPR’s Article 7 is problematic for the same reasons, and also because companies make it very difficult to find the mechanisms that enable people to do so. So, once again, why are we still being asked to consent? Power and control.</p><h2>Default control</h2><p dir="ltr">The current definition of online consent transfers responsibility to the individual under the guise of offering users choice. But the way it effectively works is as a control mechanism. As Becky Kazansky, a cyber security scholar and activist, <a href="http://twentysix.fibreculturejournal.org/fcj-195-privacy-responsibility-and-human-rights-activism/">argues</a>, this kind of ‘responsibilisation’ is “[e]ncouraging an emphasis on the individual as the primary locus of responsibility for protection from harm… [and has] the convenient effect of deflecting attention from its causes”. As in the offline domain, when you sign a contract you are responsible for abiding by its conditions, and if you do not, you are liable for breaching that contract. And yet the legal and tech narratives frame it as if people are empowered to make decisions and are able to control the way their data is used and will be used.</p><p dir="ltr">This line of thinking is predicated on the assumption that a person’s personal data is a tangible, clearly bounded, singular object that people supposedly have ownership and direct control over. But the way our data is assembled online is more like an ever-evolving “data-self” than a piece of personal property. After all, the data people create, or the traces they leave online, outline some of their characteristics and behaviours. For example, your mobile phone knows <a href="https://www.nytimes.com/2017/01/20/technology/personaltech/how-your-phone-knows-where-you-have-been.html">where you have been</a>, who you talked with, when you are awake, the websites you visited, the videos you watched, the songs you played, the food you ordered and much more.</p>
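<p>Assembled together, such traces read like a profile. A deliberately crude sketch of how separate signals accrete into a “data-self”; every trace and field below is invented:</p>

```python
# Toy aggregation: each logged trace adds one more guessed attribute to an
# evolving profile. Real profiling pipelines are vastly larger and proprietary.

traces = [
    {"source": "phone",   "signal": "location", "value": "gym, 7am"},
    {"source": "browser", "signal": "visit",    "value": "mortgage-rates site"},
    {"source": "app",     "signal": "order",    "value": "vegan takeaway"},
]

profile = {}
for trace in traces:
    profile.setdefault(trace["signal"], []).append(trace["value"])

print(profile)
# {'location': ['gym, 7am'], 'visit': ['mortgage-rates site'], 'order': ['vegan takeaway']}
```

<p>The resulting object is exactly what the next paragraph questions: a partial sketch mistaken for a person.</p>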
<p dir="ltr">But as Erving Goffman argued in his famous 1956 book ‘The Presentation of Self in Everyday Life’, we perform our selves differently in different contexts. Importantly, we never reveal all the aspects of our lives. Our data-self is incomplete, inaccurate and consists of multiple messy representations.</p><h2>How we present ourselves on different social media platforms</h2><p dir="ltr">The way we present ourselves in different contexts is fluid, evolving and never fixed. The data points about us are enormous in number and ever-growing, and they can be reshuffled, recombined and assembled in multiple ways over stretches of time. But the way that consent is applied online is static; we are asked only once to consent to multiple procedures, often with little or no choice. More than that, our data-self is attached to the profile that companies assemble on us, and is usually connected to our given birth name. This is partly done to improve commercial and advertising potential by connecting our data-self and our offline self. But another reason is that it makes us legally responsible for our actions, something that benefits commercial and governmental bodies.</p><p dir="ltr">Some of you might wonder at this point: what’s the big deal? How can the processing of my data cause me any harm? The <a href="https://www.timesupnow.com/">Time’s Up movement</a> is relevant here, and this is also the reason why I opened with my fictional story. The string of sexual harassment cases brought to light by the movement has prompted an important discussion about the fluid nature of consent. Context is crucial to consent: we can change our opinion over time, depending on how we feel in any given moment and how we evaluate the situation; <a href="https://www.vanityfair.com/hollywood/2018/01/aziz-ansari-accused-of-sexual-misconduct">the controversy over Aziz Ansari</a> is a case in point. Just because you kissed someone or dated them does not mean you are interested in anything more than that. Consent is an ongoing negotiation, not a one-time signed contract.</p><p dir="ltr">The global response to the Weinstein scandal revealed how widespread sexual harassment is, both within the film industry and at large. It became clear that, rather than being a matter of individual cases, sexual harassment is a structural problem. The power of these men rested not only on their hierarchical position but also on a network of people, standards and norms. It is an environment designed for sexual harassment to exist and flourish. Ultimately, these actions are about directing, narrowing and controlling women’s agency.</p><p dir="ltr">Although they are two different cases, the abuses of power that occur within the film industry and on online platforms share certain similarities. Both rely on a power structure that exploits people, and usually those who are less privileged and marginalised get hurt the most.
In the past few years we have seen examples of how data is used to exploit <a href="https://gizmodo.com/how-algorithmic-experiments-harm-people-living-in-pover-1822311248">poor people</a>, <a href="https://www.youtube.com/watch?v=Q7yFysTBpAo">people of colour</a>, the <a href="https://www.theverge.com/2018/6/4/17424472/youtube-lgbt-demonetization-ads-algorithm">LGBTQ community</a>, <a href="http://twentysix.fibreculturejournal.org/fcj-195-privacy-responsibility-and-human-rights-activism/">activists</a> and <a href="https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy">people with a chronic health condition</a>. And even if <a href="https://www.wired.com/2013/06/why-i-have-nothing-to-hide-is-the-wrong-way-to-think-about-surveillance/">you think you have nothing to hide</a>, the diffusion of digital technologies, artificial intelligence and the internet of things, combined with the privatisation of services, means that all of us will become vulnerable in various ways. Importantly, the asymmetrical power relation that this architecture creates teaches people their position within a particular system. It is about controlling people’s actions, individualising them and, importantly, narrowing their agency: controlling their data-self.</p><p dir="ltr">Rather than empowering people to negotiate and decide on their own conditions of service, as people do when they sign contracts, we see a strategy to control our actions in these datafied environments. The contractual approach leaves people in a passive position, preventing them from having the opportunity to demand other things from online services. Even under EU legislation, our actions online are still dictated by the standardised and automated architectures provided by browsers, publishers and advertisers. In this way, the concept of control mechanisms, in the shape of the consent banner, is used against people, not for them. The options available are pre-decided, limited and designed in a way that narrows and manages the way people can use and, ultimately, understand the internet.</p><h2>The old with the new</h2><p dir="ltr">Is a better internet possible, one in which privacy as a value and a right is internalised by its architecture? By requiring that data protection is built into systems “by design and by default”, as Article 25 indicates, the GDPR could be a first step towards this aim. But apparently, when it comes to technology companies respecting contracts, or laws, things get more flexible, and much darker. Calling out the design fail following the transposition of the GDPR, the Norwegian Consumer Council (<a href="https://www.forbrukerradet.no/">Forbrukerrådet</a>) released <a href="https://www.forbrukerradet.no/undersokelse/no-undersokelsekategori/deceived-by-design">a report</a> on June 27, 2018 that criticised tech companies for using “default settings and dark patterns, techniques and features of interface design meant to manipulate users… to nudge users towards privacy intrusive options”. During May and June 2018, the council examined the messages Facebook, Google and Microsoft sent to users in order to comply with the GDPR. Some of the “dark patterns” the report identified include: preselecting default settings with the least privacy-friendly options, hiding and obscuring settings, making privacy options more cumbersome, and textual and colour nudges towards data sharing. In other words, what these companies do through these designs is try to discourage us from exercising our right to privacy. This is precisely why the council titled their report “Deceived by Design”.</p>
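<p>The first of those patterns is easy to picture in code. A schematic sketch, with invented field names, contrasting the default the report describes with the default GDPR Article 25 asks for:</p>

```python
# Illustrative settings object; the field names are hypothetical.
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    ad_personalisation: bool
    face_recognition: bool
    share_with_partners: bool

# The "dark pattern": everything switched on, burden on the user to opt out.
dark_pattern_default = PrivacySettings(True, True, True)

# Data protection "by design and by default": nothing on until freely chosen.
privacy_by_default = PrivacySettings(False, False, False)
```

<p>The difference is one line of configuration; the report's point is that companies consistently choose the first line.</p>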
<p dir="ltr"><em>(Deceived by Design Report, Forbrukerrådet, page 3)</em></p><p dir="ltr">As I mentioned in <a href="https://www.tandfonline.com/doi/abs/10.1080/13600869.2017.1304616">my article about the regulation of behaviours on the European Union internet</a>, we can trace these design strategies to the early days of web cookies. In the late 90s, Netscape Communications released a version of their Navigator 4.0 browser which enabled users to reject third-party cookies. But as Lou Montulli, the Netscape developer also credited with inventing web cookies, admitted, the feature did not affect the online advertising industry because people didn’t bother to change their default settings. As I argue in the article:</p><p dir="ltr">This is how advertising, tech and publishing companies have been controlling information flow on the internet, the design of the architecture where it flows, but also users’ online behaviour and understandings of this environment. Spying on users’ behaviour and distorting their experience if they express their active rejection of cookies is presented as necessary procedures to the internet’s existence.</p><p dir="ltr">This notion of ‘consent’ naturalises and normalises digital advertising and technology companies’ practices of surveillance, and educates people about the boundaries of their actions. It also marks the boundaries of what users can demand and expect from commercial actors and state regulators. Portrayed as control, autonomy and power, consent actually moves responsibility from the service or technology providers to individual users.</p><p dir="ltr">Consent is a design fail that should not be engineered into our online lives any more. But what are the alternatives, then? A good place to start is the legal scholar Julie Cohen’s <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3162178">latest article</a>, in which she challenges the current functioning of privacy legislation. As Cohen argues, instruments that are meant to have operational effects, such as notice-and-consent, do not work. Ultimately, she writes, “data harvesting and processing are one of the principal business models of informational capitalism, so there is little motivation either to devise more effective methods of privacy regulation or to implement existing methods more rigorously”. To tackle this, one of the first steps is to rethink this business model and invest in alternatives that would make the current model undesirable.
These are some other possible solutions:</p><ol><li>Breaking up the monopolies of big companies such as Facebook, Alphabet (Google), Amazon and Microsoft.</li><li>An internet tax which is funnelled into creating public services and spaces.</li><li>Promoting decentralised systems such as peer-to-peer.</li><li>De-individualising the use of technology.</li><li>A live communication platform that connects users to national and EU data protection authorities so that they can complain, discuss, negotiate and monitor how their rights are applied online.</li><li>A real-time and dynamic terms-and-conditions panel where people can get updates on changes, and can control and negotiate the different clauses without being denied access. This panel should also connect people to their networks so they can make collective decisions about settings and see the wider impact of those decisions.</li><li>Developing education programmes, television shows and radio programmes to teach people about algorithms, data harvesting and processing, data ethics and their rights.</li><li>Enabling a control panel, built into web browsers and mobile phones, which shows what is happening in the back-end and enables people to have real-time negotiations with services. This panel, again, should also connect people with their networks.</li></ol><p dir="ltr">These are just some ideas, but none of them should be seen on its own as the ‘ultimate solution’. Multiple solutions and approaches should be developed and promoted, primarily to change the way we use, think about and understand the internet. As many media historians show, there are multiple ways in which technology can be used, developed and designed; the default setting is never fixed. The moment we create more possible ways for the internet to function, we can think of <a href="https://www.kickstarter.com/projects/1520156881/openbook-the-honest-open-source-and-awesome-social">alternative</a> ways to engage with it. Ways which really empower us, and give us not only ‘control’ but agency, autonomy and meaningful choices, individually and collectively. Don’t you consent to that?</p><p dir="ltr"><em>Elinor Carmi, 15 August 2018</em></p> Facebook and journalism. Part two https://www.opendemocracy.net/digitaliberties/moh-hamdi/facebook-and-journalism-part-two <p>Facebook has fundamentally changed the news ecosystem and has, in fact, jeopardised press freedom and plurality, whether willingly or not.</p>
<p><em>March 2018, Hamburg: Mathias Döpfner, CEO of Axel Springer SE and President of the Federal Association of German Newspaper Publishers, at the ‘Online Marketing Rockstars’ fair. Axel Heimken/Press Association. All rights reserved.</em></p><p>When asked in December 2016 what kind of a company Facebook was exactly, <a href="https://eu.usatoday.com/story/tech/news/2016/12/21/mark-zuckerberg-facebook-not-a-traditional-media-%20company/95717102/">Mark Zuckerberg</a> replied that "Facebook is a new kind of platform. It's not a traditional technology company. It's not a traditional media company." Zuckerberg has always been reluctant to describe his company as a publishing house, but the control he has acquired over who gets to see what, and when, makes him the most powerful news editor the world has ever seen. And although Facebook does not produce any content of its own, news publishers’ dependence on the social medium for the distribution of their content has turned Facebook into a de facto news company.</p> <p>One of the more serious consequences of this development is that neither the news publishers nor the news consumers have much control over the flow of news content. Instead, Facebook can decide which user sees what type of information, based on detailed psychographic profiles. The company therefore has a considerable impact on how its more than two billion active users perceive the world. How easily Facebook can affect the sensitivities of its users has been demonstrated by a number of controversial experiments conducted by Facebook without its users’ knowledge. In 2013, for example, a group of psychologists tried to see whether a change in the news feed could alter the mental states of Facebook users. By tweaking the algorithm, the scientists tried to find out whether exposure to positive or negative posts would lead to emotional contagion. They found strong evidence for mood alterations on Facebook. “When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred”, they <a href="https://www.researchgate.net/publication/262813340_Experimental_Evidence_of_Massive-Scale_Emotional_Contagion_Through_Social_Networks">concluded.</a></p>
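<p>Mechanically, such an experiment needs very little. Here is a purely illustrative sketch of the kind of down-ranking the study describes; the valence labels, rate and data layout are invented here, and the real experiment ran inside Facebook's own ranking systems:</p>

```python
import random

def filter_feed(posts, suppress="positive", rate=0.1):
    """Withhold a fraction of posts whose emotional valence matches `suppress`."""
    kept = []
    for post in posts:
        if post["valence"] == suppress and random.random() < rate:
            continue  # silently dropped from this user's News Feed
        kept.append(post)
    return kept

feed = [{"text": "Great day!", "valence": "positive"},
        {"text": "Awful news.", "valence": "negative"}]
print(filter_feed(feed, suppress="positive", rate=0.5))
```

<p>Adjustments of roughly this shape, applied across hundreds of thousands of feeds, produced the measurable mood shifts the study reports.</p>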
The data analysts concluded that Facebook did have an impact on voting behavior, as the number of voters did indeed go up as a consequence of the experiment. Likewise, in 2016 Facebook launched its “get out the vote” campaign, which, according to a post by <a href="https://www.facebook.com/zuck/posts/10104067130714241?pnref=story">Zuckerberg</a>, has “helped as many as 2 million people register to vote”. What Zuckerberg presents as the civic responsibility of his company was seen by many as meddling in the election campaigns. <a href="https://gizmodo.com/facebook-says-it-doesnt-try-to-influence-how-people-vot-1771276946?rev=%201460755179651">Facebook </a>reacted by declaring that it would never use the platform to influence electoral results: “Voting is a core value of democracy and we believe that supporting civic participation is an important contribution we can make to the community. We encourage any and all candidates, groups, and voters to use our platform to share their views on the election and debate the issues. We as a company are neutral – we have not and will not use our products in a way that attempts to influence <em>how</em> people vote.” <span class="mag-quote-center">The distribution of news on Facebook is anything but impartial. The platform can be – and has been – misused for political manipulation and propaganda. </span></p> <p>Conservative groups in particular were not persuaded by these promises of neutrality – not only because Zuckerberg has never made any bones about his support for the Democratic party, but more crucially, because Facebook has been accused in the past of wilfully suppressing conservative news publishers. In January 2014, Facebook launched <a href="https://newsroom.fb.com/news/2014/01/finding-popular-conversations-on-facebook/">“Trending Topics”.</a> Thanks to this news feature, which showed up on the right side of the News Feed, Facebook users could see which stories were most popular on the platform at a given moment. Publishers for their part were able to extend their reach and engagement. Furthermore, advertisers could adapt their ad campaigns to the prevailing trends. In May 2016 the tech website Gizmodo published an investigative piece based on interviews with Facebook “news curators”, who claimed that conservative news was deliberately prevented from showing up in the Trending Topics list. At the same time, news about liberal causes such as posts relating to the Black Lives Matter movement were artificially injected into the list to push their reach, according to the same Facebook employees. Furthermore, Facebook also indicated that news about Facebook itself should not be circulated via the Trending Topics feature. Not only did <a href="https://gizmodo.com/former-facebook-workers-we-routinely-suppressed-conser-1775461006">Gizmodo’s article</a> accuse Facebook of political bias but it also showed that the distribution of news was not merely based on algorithms but that human editors were also at work. </p> <p>The report seemed to <a href="https://gizmodo.com/facebook-employees-asked-mark-zuckerberg-if-they-should-1771012990">corroborate </a>the fears of Trump supporters who worried about Facebook’s opposition to their candidate following a leak that documented how Facebook employees were toying with the idea of stopping Donald Trump from becoming the next US president. Tom Stocky, who was responsible for Facebook’s Trending Topics feature, and Mark Zuckerberg responded to the allegations by denying any wrongdoing. 
An internal investigation did not find any evidence to support the claims made by Gizmodo. “We have rigorous guidelines that do not permit the prioritization of one viewpoint over another or the suppression of political perspectives”, Zuckerberg wrote in a <a href="https://www.facebook.com/zuck/posts/10102830259184701">May 2016 post</a>. Nonetheless, Zuckerberg met with leaders of <a href="https://www.cnet.com/news/mark-zuckerberg-to-meet-with-conservative-leaders-about-trending-topics/">conservative publishers</a>, such as Glenn Beck, following Gizmodo’s scathing article. A few months after the meeting, Facebook decided to get rid of its <a href="https://www.businessinsider.de/facebook-human-news-editors-2016-9?r=US&amp;IR=T">news editors</a> in favour of distribution based completely on <a href="https://www.wired.co.uk/article/facebook-trending-topics-censorship-humans">algorithms</a>. </p> <p>While conservative groups have voiced concern over Zuckerberg’s support for the Democratic party, several newspapers have documented how the Republican party itself had instrumentalized the platform during the 2016 elections by flooding the News Feed with fake news in order to promote Trump and denigrate his adversaries. Fake news is believed to have strongly influenced the outcome of the US presidential election – though it is difficult to measure its political impact. However, according to a 2016 <em>BuzzFeed </em>study, the most successful election-related fake news stories generated more engagement on Facebook than the most successful posts from established <a href="https://www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook#.btkRY5Elp">newspapers</a>. Facebook algorithms are believed to have prioritized fake news over accurate reporting – mostly to the advantage of the GOP. The Cambridge Analytica (CA) and Aggregate IQ scandals further corroborated the accusations against the Republican party. (Interestingly enough, CA’s <a href="https://opendemocracy.net/uk/brexitinc/adam-ramsay/cambridge-analytica-is-what-happens-when-you-privatise-military-propaganda">parent firm</a> <em>Strategic Communications Laboratories Group </em>has been hired by several national defense departments to disseminate military propaganda.) Fake news also influenced political decisions in Europe. It is believed that disinformation has strongly affected the outcome of the Brexit referendum. In April 2018, Facebook representative Mike Schroepfer had to give evidence to the UK’s Digital, Culture, Media and Sport Select Committee on the issue of fake news and its impact in Britain. </p> <p>Political leaders worldwide have reacted to the dangers of fake news in <a href="https://www.nytimes.com/2016/11/18/technology/fake-news-on-facebook-in-foreign-elections-thats-not-new.html;">upcoming elections</a> and have urged Zuckerberg to take measures against the <a href="https://www.nytimes.com/2016/12/02/world/europe/italy-fake-news.html">scourge.</a> What Gizmodo’s revelations as well as the scandals around the elections and referenda show is that the distribution of news on Facebook is anything but impartial. The platform can be – and has been – misused for political manipulation and propaganda. </p> <h2><strong>Monetizing</strong></h2> <p>Financial incentives are an important factor in the propagation of fake news as well as in the decline of accuracy and quality of news. 
In the current system, publishers hope to attract as many clicks as possible in order to monetize the ads placed on their websites. As a result of this ad-based monetization system, quality and truth become subordinate to reach and engagement. As a consequence, online journalism has adopted the style and methods of tabloids and muckraking newspapers in order to attract the attention of Facebook users. Articles are often emotionally charged, strongly biased, hyperbolic or intentionally provoking. Phenomena such as clickbait or engagement bait have invaded the social network. Viral propagation, shitstorms and hashtag campaigns have become the stock-in-trade of online journalism. Even highly respected news publishers have preferred to produce trashy pieces and infotainment that may be low-quality but that nonetheless generate a great deal of engagement. <span class="mag-quote-center">Even highly respected news publishers have preferred to produce trashy pieces and infotainment that may be low-quality but that nonetheless generate a great deal of engagement.</span></p> <p>The Facebook pages of leading international newspapers are filled with so-called junk food news. There is in fact an enormous quality gap between the print and online versions of the same news publishers: the quality of the online editions is considerably lower than that of the print versions due to the “eye-catching” effects needed to attract the attention and the clicks of Internet users. </p> <p>News consumers have now become very sceptical of the press in general and are questioning the objectivity and accuracy of news reporting. A 2018 global report by the <a href="https://cms.edelman.com/sites/default/files/2018-01/2018%20Edelman%20Trust%20Barometer%20Global%20%20Report.pdf">Edelman Trust Barometer</a>, which surveyed people from 28 countries, showed that media in general are facing a crisis of trust. 66% of the respondents agreed with the statement that news organisations are overly focused on attracting large audiences rather than on reporting, 65% found that news organisations sacrifice accuracy in order to be the first to break a story and 59% thought that news media supported an ideology instead of informing the public. The study also shows that 7 in 10 people worry about fake news being used as a weapon. 63% of respondents admitted that they were unable to tell good journalism from rumour or falsehood. </p> <h2><strong>Polarisation and echo</strong></h2> <p>Both fake news and partisan low-quality articles have led to two other dangerous and related developments: the polarization of society and online echo chambers. Content that can potentially go viral often deals with divisive and highly emotional issues such as immigration as well as identity-related topics such as race or gender. By flooding the social platform with partisan articles on these topics in order to generate more engagement, newsrooms have contributed to further fragmenting society along ideological and identity-based lines. Furthermore, the result of this distrust is not that people read less news or that they restrict their news consumption to renowned news publishers but rather that they only read news that confirms pre-existing beliefs. Ironically, this only worsens the problem as confirmation is replacing accuracy as a yardstick for good journalism. News consumers are less likely to read and engage with content that does not conform to their pre-existing beliefs, which in turn leads to more polarization. </p> <p>Facebook plays a big part in this development. 
Algorithm-generated distribution may reduce the risk of biased human intervention, but it opens the door to other dangers, such as the risk of creating echo chambers. This concept denotes the selective exposure to ideas that a given individual agrees with, thereby reinforcing her belief system. As stated above, most news items that appear on a user’s wall tally with her perceptions, thereby reinforcing the impression that her own beliefs are the predominant ones. The user is only confronted with topics she is interested in and with opinions she agrees with. The digital world makes it easier for individuals who share certain outlooks and beliefs to gather in virtual discussion spaces and mutually confirm their common bias. Several studies have indeed found evidence that echo chambers exist on Facebook and that Facebook users are highly <a href="http://maint.ssrn.com.s3-website-us-east-1.amazonaws.com/site-unavailable.html">polarised. </a>This polarisation obviously has consequences in the offline world, too. </p> <p>After initially dismissing the scope and effects of fake and low-quality news on its platform, Facebook has promised to introduce new measures in order to reduce these phenomena. Acknowledging that financial gain is one of the main incentives for spreading fake news as well as clickbaiting headlines and sensationalist articles, Facebook made several updates to make it more difficult to reap profits from such misleading posts. In August 2017, for instance, Facebook made an update to block ads from users who had repeatedly shared false or misleading content on <a href="https://newsroom.fb.com/news/2017/08/blocking-ads-from-pages-that-repeatedly-share-false-news/">Facebook</a>. Critics were quick to warn about possible abuses of power, arguing that Facebook should not take the spread of misinformation on its platform as an excuse to suppress content. In an interview with the Swiss publication “Blick”, <a href="http://uk.businessinsider.com/axel-springer-ceo-mathias-dopfner-interview-on-facebook-and-fake-news-2017-1">Mathias Döpfner</a>, CEO of the German publishing house Axel Springer, explained the dangers of Facebook turning into a news editor: “Those who are now calling upon Facebook to employ an editor-in-chief and to verify the accuracy of independent editorial texts, deleting them where appropriate, they are creating a kind of global super censor and are destroying precisely the diversity that makes up our democracy.” <a href="https://www.indexoncensorship.org/2016/12/dunja-bother-quick-take-lying-social-media/">Dunja Mijatović</a>, the Representative on Freedom of the Media for the Organization for Security and Co-operation in Europe, seconded Döpfner and argued that monitoring content “may just cause greater harm to free expression than any lie, no matter how damaging”. A number of NGOs advocating free speech, such as <a href="https://ifex.org//international/2017/01/12/fake_news_regulation/">IFEX </a>or Article19, have documented cases of government censorship that have been justified on the grounds of protection against fake news, thereby underlining the dangers of instrumentalising the debate for political goals. </p> <h2><strong>Global super censor</strong></h2> <p>Facebook has repeatedly pledged that, as a tech company, it was not going to make such <a href="https://newsroom.fb.com/news/2017/04/working-to-stop-misinformation-and-false-news/">editorial decisions</a>. “We cannot become arbiters of truth ourselves, it’s not feasible given our scale, and it’s not our role. 
Instead, we’re working on better ways to hear from our community and work with third parties to identify false news and prevent it from spreading on our platform.” One possibility is to work with third parties who check the veracity of news posted online. In December 2016 Facebook announced its collaboration with the International Fact-Checking Network (IFCN) to fight<a href="https://www.poynter.org/news/facebook-has-plan-fight-fake-news-heres-where-we-come"> fake news.</a> Once a news item has been identified as fake, the fact-checkers can issue a rebuttal that will then be attached to the original link. Döpfner has criticized the initiative, arguing that journalists are doing even more free work for Facebook in order to solve its fake news <a href="http://uk.businessinsider.com/axel-springer-ceo-mathias-dopfner-interview-on-facebook-and-fake-news-%202017-1?r=UK&amp;IR=T">problem</a> instead of producing quality articles. Some fact-checkers themselves have also questioned the effectiveness of <a href="https://www.theguardian.com/technology/2017/nov/13/way-too-little-way-too-late-facebooks-fact-checkers-%20say-effort-is-failing">the initiative</a>. Besides, it has been argued that the project puts newspapers in a conflict of interest given that some selected media outlets were granted the privilege of assessing the articles of other newspapers. </p> <p>Additionally, Facebook has introduced measures to expose users to a more diverse range of sources. In April and October 2017, Facebook tested a function that would show related articles in order to offer different <a href="https://newsroom.fb.com/news/2017/04/news-feed-fyi-new-test-with-related-articles/">perspectives</a>, as well as a function that provides context for articles that are shared on the news feed, such as information on the publisher or related articles, so that <a href="https://newsroom.fb.com/news/2017/10/news-feed-fyi-new-test-to-provide-context-about-articles/">readers</a> can make “an informed decision about which stories to read, share, and trust”. Another early attempt was to introduce a function that let users flag news as “disputed”. According to Facebook, this would make it easier to tell fake articles from authentic ones and thus prevent the spread of hoaxes. </p> <p>A year after introducing the function, Facebook withdrew it, admitting that it had produced the <a href="http://thehill.com/policy/technology/366020-facebook-drops-disputed-tags-for-news-stories">opposite</a> of the intended effect. In 2018, Zuckerberg introduced a new measure to rank publishers according to their trustworthiness: Facebook users could now decide for themselves which news publishers they deemed authentic. News publishers who are “broadly trusted across society, even by those who don’t follow them directly” would as a consequence be <a href="https://www.wired.com/story/facebooks-latest-fix-for-fake-news-ask-users-what-they-trust/">prioritised</a>. This again drew criticism from those who accuse Facebook of “taking the path with least responsibility”, implying that Facebook should do the censoring itself instead of continuing to make money from hoaxes. </p> <h2><strong>“Meaningful interaction”</strong></h2> <p>It is clear that whatever the solutions offered, there will always be downsides. The press has become so dependent on Facebook that each adjustment may lead to undesired consequences. This has been exemplified by Facebook’s introduction of a separate news feed which would only display non-promoted press content. 
The original news feed would instead focus on posts made by friends and family in order to foster what <a href="https://www.facebook.com/zuck/posts/10104413015393571">Zuckerberg</a> described as “meaningful interaction”. </p> <p>Ads would still show up in the original news feed, as well as news – if publishers paid for it to appear there. The feature was launched in six countries on 19 October 2017 and resulted in a massive loss of referral traffic for the news outlets concerned – especially smaller and local publishers relying on Facebook for reaching their audience and making money. The reach of the <a href="https://www.theguardian.com/technology/2017/oct/25/facebook-orwellian-journalists-democracy-guate%20mala-slovakia">Guatemalan </a>publisher Soy502 dropped by 66%, according to one of its editorial board members. In <a href="https://www.theguardian.com/technology/2017/oct/23/facebook-non-promoted-posts-news-feed-new-trial-%20publishers">Slovakia</a>, publishers are reported to have lost two thirds to three quarters of their Facebook reach. The failed experiment shows just how much the press is hooked on Facebook and what effects a change in Facebook’s policies can have on media pluralism, and especially on local and small-sized media outlets in countries as distant as Slovakia and Guatemala. </p> <p>One of Facebook’s more ambitious and arguably less controversial initiatives has been the launch of its “Facebook Journalism Project” (FJP) under the aegis of Campbell Brown. The declared goal of this news initiative is to raise the quality of, and the trust in, journalism; it rests on three pillars, according to <a href="https://media.fb.com/2017/01/11/facebook-journalism-project/?_ga=1.203317651.322283429.1425207372">Fidji Simo</a>, the director of the project: collaborative projects, training for journalists and training for everyone. Education is therefore one of FJP’s focal points. FJP offers several <a href="https://media.fb.com/2016/10/25/introducing-online-courses-for-journalists-on-facebook/?_ga=2.19895679%20.619604578.1524587663-563576275.1523471564">courses </a>directed at journalists who want to enhance their digital skills. In January 2018, Brown announced that FJP would offer <a href="https://www.facebook.com/facebookmedia/blog/introducing-the-facebook-journalism-project-scholarship/">scholarships</a> to 100 aspiring journalists. Furthermore, FJP tries to advance media literacy through the “News Integrity Initiative” (NII), which it founded in <a href="https://www.journalism.cuny.edu/2017/04/announcing-the-new-integrity-initiative/">collaboration</a> with a series of other institutions in order to help people “make informed judgments about the news they read and share online”. NII has also pledged to support quality <a href="https://www.journalism.cuny.edu/2017/11/news-integrity-initiative-teams-up-with-international-partners-%20announces-2-5-million-in-grants/">journalism</a> with $2.5 million in grants to partners worldwide. </p> <p>FJP has also started to reach out to local media with projects such as the Local News Subscriptions Accelerator, aimed at helping local media raise their <a href="https://www.facebook.com/facebookmedia/blog/helping-local-news-publishers-develop-digital-subscriptions/">subscription revenue</a>, or apps such as <em>Today In</em>, which displays only <a href="https://www.recode.net/2018/1/10/16871480/facebook-local-news-section-journalism">local news</a> and events. 
The results of these projects remain to be seen but, here again, publishers have questioned the <a href="https://www.theguardian.com/technology/2017/nov/13/way-too-little-way-too-late-%20Facebooks-fact-checkers-say-effort-is-failing;%20https://mondaynote.com/facebooks-walled-wonderland-is-inherently-incompatible-with-news-media-b145e2d0078c">effectiveness</a> of the initiative, wondering whether FJP was not a mere PR stunt after all.</p> <h2><strong>Reluctant to quit </strong></h2> <p>While discontent over Facebook is constantly growing, media outlets have nonetheless been reluctant to quit the platform. In fact, they have become too dependent on the social network. News publishers see themselves confronted with a difficult dilemma: should they continue to use Facebook as a distribution platform in order to reach larger audiences at the expense of quality and control? </p> <p>Or should they stop using Facebook to retain control over production and readership and produce high-quality journalism but suffer a loss of audience as well as revenue? The dilemma involves Facebook users and news readers, too: should they consume free, low-quality news that is conveniently selected by a Facebook algorithm according to their interests and preferences, or should they find more cumbersome alternatives that involve fees but also higher quality standards? It seems that, without the willingness of readers to pay for journalism, news publishers will hardly have an incentive to quit Facebook. <span class="mag-quote-center">Without the willingness of readers to pay for journalism, news publishers will hardly have an incentive to quit Facebook. </span></p> <p>Facebook too is facing an important dilemma. Should it stress its identity as a tech company whose ultimate goal is to maximise profit – in which case it can continue to monetize viral posts that are usually low quality, sometimes false and often extremely divisive? Or should it identify more strongly as a news distributor, thereby assuming greater responsibilities in its function as a primary news source for a growing number of its users worldwide? In this case, Facebook ought to promote better journalism, which entails sharing a greater part of its revenue with publishers. The company may as a consequence also experience a reduction in reach and engagement for many posts. </p> <p>But even if Facebook opts for the latter, as it seemingly has during the last two years, it is still difficult to find a solution that satisfies all parties: the advertisers, major publishers, local news outlets, governments, NGOs and, obviously, Facebook itself as well as its users. Changes in Facebook’s News Feed or algorithms may for instance have serious consequences for some of the publishers, as the fates of Slovakian and Guatemalan news outlets have shown. Furthermore, Facebook is caught between demands for more regulation of news content on the one hand – it stands accused of a laissez-faire attitude vis-à-vis fake news and low-quality garbage – and, on the other, accusations of using fake news as an excuse to monitor and censor content on its platform along the lines of ideological preferences. </p> <p>Whatever the decisions taken by the different parties involved, the repercussions for the press may be considerable. Facebook has fundamentally changed the news ecosystem and has, in fact, jeopardised press freedom and plurality – whether willingly or not. 
</p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/digitaliberties/moh-hamdi/facebook-and-journalism-part-one">Facebook and journalism. Part one</a> </div> </div> </div> </fieldset> <div class="field field-country"> <div class="field-label"> Country or region:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> United States </div> </div> </div> <div class="field field-topics"> <div class="field-label">Topics:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> Civil society </div> <div class="field-item even"> Culture </div> <div class="field-item odd"> Democracy and government </div> <div class="field-item even"> Economics </div> <div class="field-item odd"> Ideas </div> <div class="field-item even"> International politics </div> <div class="field-item odd"> Internet </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties digitaLiberties United States Civil society Culture Democracy and government Economics Ideas International politics Internet Moh Hamdi Fri, 10 Aug 2018 14:58:10 +0000 Moh Hamdi 119226 at https://www.opendemocracy.net In the era of artificial intelligence: safeguarding human rights https://www.opendemocracy.net/digitaliberties/dunja-mijatovi/in-era-of-artificial-intelligence-safeguarding-human-rights <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>Today, it is all too easy for governments to permanently watch you and restrict the right to privacy, freedom of assembly, freedom of movement and press freedom. </p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/565030/frankenstein.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/565030/frankenstein.jpg" alt="lead " title="" width="460" height="292" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style="" /></a> <span class='image_meta'><span class='image_title'>Frankenstein's monster's bust in the National Museum of Cinema of Turin, Italy Wikicommons/ André Ribeiro from Curitiba, Brasil. Some rights reserved.</span></span></span></p><p>Humans and machines are destined to live in an ever-closer relationship. To make it a happy marriage, we have to better address the ethical and legal implications that data science carries. </p> <p>Artificial intelligence, and in particular its subfields of machine learning and deep learning, may only be neutral in appearance, if at all. Underneath the surface, it can become extremely personal. </p> <p>The benefits of grounding decisions on mathematical calculations can be enormous in many sectors of life. However, relying too heavily on AI inherently involves determining patterns beyond these calculations and can therefore turn against users, perpetrate injustices and restrict people’s rights. <span class="mag-quote-center">AI in fact can negatively affect a wide range of our human rights.</span></p> <p>AI in fact can negatively affect a wide range of our human rights. 
The problem is compounded by the fact that decisions are taken on the basis of these systems, while there is no transparency, accountability or safeguards on how they are designed, how they work and how they may change over time. </p> <h2><strong>Encroaching on the right to privacy and the right to equality </strong></h2> <p>The tension between advantages of AI technology and risks for our human rights becomes most evident in the field of privacy. Privacy is a fundamental human right, essential in order to live in dignity and security. But in the digital environment, including when we use apps and social media platforms, large amounts of personal data are collected - with or without our knowledge - and can be used to profile us, and produce predictions of our behaviours. We provide data on our health, political ideas and family life without knowing who is going to use this data, for what purposes and why. </p> <p>Machines function on the basis of what humans tell them. If a system is fed with human biases (conscious or unconscious), the result will inevitably be biased. The lack of diversity and inclusion in the design of AI systems is therefore a key concern: instead of making our decisions more objective, they could reinforce discrimination and prejudices by giving them an appearance of objectivity. There is increasing evidence that women, ethnic minorities, people with disabilities and LGBTI persons particularly suffer from discrimination by biased algorithms. <span class="mag-quote-center">There is increasing evidence that women, ethnic minorities, people with disabilities and LGBTI persons particularly suffer from discrimination by biased algorithms.</span></p> <p>Studies have shown, for example, that Google was more likely to display adverts for highly paid jobs to male job seekers than to female ones. Last May, a <a href="http://fra.europa.eu/en/publication/2018/big-data-discrimination">study</a> by the EU Fundamental Rights Agency also highlighted how AI can amplify discrimination. When data-based decision making reflects societal prejudices, it reproduces – and even reinforces – the biases of that society. This problem has often been raised by academia, and by the NGOs which recently adopted the <a href="https://www.accessnow.org/cms/assets/uploads/2018/05/Toronto-Declaration-D0V2.pdf">Toronto Declaration</a>, calling for safeguards to prevent machine learning systems from contributing to discriminatory practices. </p> <p>Decisions made without questioning the results of a flawed algorithm can have serious repercussions for human beings. For example, software used to inform decisions about healthcare and disability benefits has wrongfully excluded people who were entitled to them, with dire consequences for the individuals concerned. </p> <h2><strong>Stifling freedom of expression and freedom of assembly</strong></h2> <p>Another right at stake is freedom of expression. A recent Council of Europe publication on <a href="https://rm.coe.int/algorithms-and-human-rights-en-rev/16807956b5">Algorithms and Human Rights</a> noted for instance that Facebook and YouTube have adopted a filtering mechanism to detect violent extremist content. However, no information is available about the process or criteria adopted to establish which videos show “clearly illegal content”. 
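</p><p>Both problems – biased data and unexplainable filters – come down to the same mechanics. A toy sketch (in Python) of a system trained on biased historical decisions shows how quickly those decisions are reproduced; it illustrates the principle only, it is not the code of any real recruitment or benefits system, and every number in it is invented:</p><pre><code># Invented "historical" hiring records: (group, qualified, was_hired).
# Equally qualified applicants from group B were hired far less often
# than those from group A; the labels encode that past bias.
from collections import defaultdict

history = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 40 + [("B", True, False)] * 60
)

# "Training": estimate the historical hire rate per group, as a naive
# statistical model would. The model knows nothing about fairness,
# only about the patterns in the data it was fed.
counts, hires = defaultdict(int), defaultdict(int)
for group, qualified, hired in history:
    counts[group] += 1
    hires[group] += hired

def recommend_hire(group):
    # Recommend whenever the historical hire rate exceeds 50%.
    return hires[group] / counts[group] > 0.5

for group in ("A", "B"):
    print(group, "qualified applicant recommended?", recommend_hire(group))
# Prints: A True, then B False. Identical qualifications, different
# outcomes: yesterday's prejudice has become "objective" model behaviour.
</code></pre><p>The point is not the arithmetic but the laundering: once biased labels go in, the discrimination comes back out wearing the authority of a calculation. The same applies to the extremist-content filters just mentioned, whose criteria nobody outside the companies can inspect.</p>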
<p>Although one cannot but salute the initiative to stop the dissemination of such material, the lack of transparency around content moderation raises concerns because it may be used to restrict legitimate free speech and to encroach on people’s ability to express themselves. </p> <p><a href="https://www.ohchr.org/Documents/Issues/Opinion/Legislation/OL-OTH-41-2018.pdf">Similar concerns</a> have been raised with regard to the automatic filtering, at the point of upload, of user-generated content that supposedly infringes intellectual property rights, which came to the forefront with the proposed Directive on Copyright of the EU. In certain circumstances, the use of automated technologies for the dissemination of content can also have a significant impact on the right to freedom of expression and of privacy, when bots, troll armies, targeted spam or ads are used, in addition to algorithms defining the display of content. <span class="mag-quote-center">Similar concerns have been raised with regard to … the Proposed Directive on Copyright of the EU.</span></p> <p>The tension between technology and human rights also manifests itself in the field of facial recognition. While this can be a powerful tool for law enforcement officials searching for suspected terrorists, it can also turn into a weapon to control people. Today, it is all too easy for governments to permanently watch you and restrict the right to privacy, freedom of assembly, freedom of movement and press freedom. </p> <h2><strong>What governments and the private sector should do </strong></h2> <p>AI has the potential to help human beings maximise their time, freedom and happiness. At the same time, it can lead us towards a dystopian society. Finding the right balance between technological development and human rights protection is therefore an urgent matter – one on which the future of the society we want to live in depends. </p> <p>To get it right, we need stronger co-operation between state actors – governments, parliaments, the judiciary, law enforcement agencies – private companies, academia, NGOs, international organisations and also the public at large. The task is daunting, but not impossible. </p> <p>A number of standards already exist and should serve as a starting point. For example, the case-law of the European Court of Human Rights sets clear boundaries for the respect for private life, liberty and security. It also underscores states’ obligations to provide an effective remedy to challenge intrusions into private life and to protect individuals from unlawful surveillance. In addition, the modernised Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data adopted this year addresses the challenges to privacy resulting from the use of new information and communication technologies. </p> <p>States should also make sure that the private sector, which bears the responsibility for AI design, programming and implementation, upholds human rights standards. 
The Council of Europe <a href="https://search.coe.int/cm/Pages/result_details.aspx?ObjectID=0900001680790e14">Recommendation on the roles and responsibilities of internet intermediaries</a>, the <a href="http://www.ohchr.org/Documents/Publications/GuidingPrinciplesBusinessHR_EN.pdf">UN guiding principles on business and human rights</a>, and the <a href="https://freedex.org/a-human-rights-approach-to-platform-content-regulation/">report on content regulation</a> by the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, should all feed into the efforts to develop AI technology which is able to improve our lives. There needs to be more transparency in the decision-making processes using algorithms, in order to understand the reasoning behind them, to ensure accountability and to be able to challenge these decisions in effective ways. <span class="mag-quote-center">A third field of action should be to increase people’s “AI literacy”.</span></p> <p>A third field of action should be to increase people’s “AI literacy”. States should invest more in public awareness and education initiatives to develop the competencies of all citizens, and in particular of the younger generations, to engage positively with AI technologies and better understand their implications for our lives. Finally, national human rights structures should be equipped to deal with new types of discrimination stemming from the use of AI. </p> <p>Artificial intelligence can greatly enhance our abilities to live the life we desire. But it can also destroy them. We therefore have to adopt strict regulations to prevent it from morphing into a modern Frankenstein’s monster. </p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/cathy-oneil-leo-hollis/weapons-of-maths-destruction">Weapons of maths destruction </a> </div> <div class="field-item even"> <a href="/can-europe-make-it/renata-avila-joren-de-wachter-christoph-schneider/eu-is-killing-our-democratic-sp">The EU is killing our democratic spaces using copyright as a Trojan horse</a> </div> <div class="field-item odd"> <a href="/jens-renner/to-restore-trust-in-technology-we-must-go-further-than-gdpr">To restore trust in technology, we must go further than GDPR</a> </div> <div class="field-item even"> <a href="/digitaliberties/phoebe-braithwaite-christina-rogers/our-data-doubles-how-biometric-surveillance-ushe">Our data doubles: how biometric surveillance ushers in new orders of control</a> </div> </div> </div> </fieldset> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties digitaLiberties Dunja Mijatović Tue, 03 Jul 2018 14:40:34 +0000 Dunja Mijatović 118694 at https://www.opendemocracy.net The EU is killing our democratic spaces using copyright as a Trojan horse https://www.opendemocracy.net/can-europe-make-it/renata-avila-joren-de-wachter-christoph-schneider/eu-is-killing-our-democratic-sp <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>DiEM25’s position on Copyright Reform: democratize technology instead of allowing it to be used as a giant censorship machine in the interest of big business.</p> </div> </div> </div> 
<p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/Giovanni_Domenico_Tipeolo,_Procession_of_the_Trojan_Horse_in_Troy._1773..jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/Giovanni_Domenico_Tipeolo,_Procession_of_the_Trojan_Horse_in_Troy._1773..jpg" alt="lead lead " title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Copyright as a Trojan horse. After Giovanni Domenico Tiepolo (1773) National Gallery. Public Domain.</span></span></span>Europe was one of the first regions to connect to the Internet on a massive scale. Not only that: it was one of the few that adopted literacy and inclusion programs early enough to unleash the power of connected citizens, showing them how to create new business models and improve education, but also how to express themselves, create, organize and protest. </p><p>But alarmingly, the European Parliament is on the verge of a dramatic change of direction. 
The EU has recently embarked on a new mission: controlling the Internet <a href="https://www.opendemocracy.net/digitaliberties/david-elston/link-tax-eu-copyright-directive-will-break-internet-as-we-know-it">through the monopoly of copyright</a>. This attempt to reform and control the Internet has not received half the attention it deserves. </p><p>As Julia Reda, MEP for the Pirate Party, has explained, the current project of EU legislation would impose automatic filters that control ANY content that anyone wants to upload. The justification is the protection of copyright – a monopoly right that primarily benefits large media behemoths – even though there is no possibility of verifying such claims in advance. 
</p><p>You read that right: the EU wants to put in place a global censorship machine, on the basis of unverifiable monopoly rights, mostly held by large media corporations. </p><p>In DiEM25, we do not see this as just an outdated law, isolated from current politics. Indeed, that is precisely what is most worrying about it. We cannot see it as unconnected to the big push in Europe by authoritarian leaders wanting to restrict, to truly shrink, the spaces of civil society. Increasing censorship online will reduce the ability of citizens to say what they think, filtering content before it is published. This will not only harm speech but increase surveillance and the meting out of punishments for things we say online. This is combined with all the existing online state surveillance already endured by EU citizens, which remains as powerful as ever. </p><p>With dismay, we are now witnessing an open boycott of the democratic achievement of a connected Europe. The European Parliament Legal Committee has just given <a href="https://creativecommons.org/2018/06/20/european-parliaments-legal-affairs-committee-gives-green-light-to-harmful-link-tax-and-pervasive-platform-censorship/">the green light to a law that will be a tool</a> to control speech, expression and criticism, and to increase the surveillance levels imposed on all EU citizens. </p><p>Disguised as a copyright reform, this is a move to remove power from the hands of people and silence voices. DiEM25 stands against it and urges all progressive MEPs to reconsider their position. </p><p>Otherwise, be prepared to wave goodbye to free speech, because you may want to use something that someone claims “exclusive rights” on. And while you are at it, say Bye, Ciao, Adios to democracy. </p><h2>Why is this such a problem?</h2><p>Because of free speech. Free speech is a fundamental right, right at the heart of any democratic system. 

If you can’t say a word before checking whether someone else has a publishing monopoly on that word (and don’t forget there is no way to check this), free speech is effectively abolished.
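</p><p>A toy sketch (in Python) shows how a generic fingerprint-matching upload filter behaves – this is an illustration of the mechanism, not the text of the directive or the code of any real platform, and the claimant and claims database in it are invented:</p><pre><code>import hashlib

def fingerprint(text):
    # Real filters fingerprint audio and video; a hash of normalised
    # text stands in for the idea here.
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

# Anyone can register a claim; nothing verifies that the claimant
# actually owns the work, or that the work can be owned at all.
claimed_rights = {
    fingerprint("To be, or not to be, that is the question"): "MegaMedia Corp",
}

def allow_upload(post):
    claimant = claimed_rights.get(fingerprint(post))
    if claimant is not None:
        print("Blocked at upload: matched a claim registered by", claimant)
        return False
    return True

# A public-domain line of Shakespeare is blocked before any human --
# uploader, rights holder or court -- ever looks at the claim.
allow_upload("To be, or not to be, that is the question")
</code></pre><p>The filter cannot ask whether a claim is valid, or whether a use is quotation, parody or public domain; it can only match and block.</p><p>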

Copyright dates from the days of the printing press, when copying was difficult and expensive. It ensured a distribution monopoly for businesses that would distribute content that was hard to gather, hard to assess, hard to distribute, and hard to market. </p><p><strong>Let us shift the vote on July 5 and create a space to discuss the democratic future of the Internet. </strong></p><p>

But we live in the 21st century, not the 19th. The Internet changed all that. Yet the distribution monopoly holders want to keep their rent-seeking practices, continuing to benefit from the creative work of the real authors and creative people.

And the European Parliament looks as if it wants to support the monopoly holders over the rights of democratic debate and free speech.

</p><h2>What can we do?</h2><p>We need to engage in meaningful democratic debate. On July 5, the European Parliament will vote on the proposed Copyright Reform. The proposed text is deeply wrong for three reasons:
</p><p>- First, it gets the balance between monopoly, expressed through copyright, and freedom of speech fundamentally wrong. As Europeans, we expected our representatives to be designing the institutions of the future, not constructing the entire architecture for our shared information on outdated nineteenth-century copyright laws.</p><p>- Second, there has been no proper debate about this. DiEM25 is the one political movement that is adopting a serious, crowd-sourced approach to issues such as the relationship between technology and free speech, through the Seventh Pillar on Technological Sovereignty it is currently adopting as part of its ‘Progressive Agenda for the Internet and other technologies in Europe’. Remember the Cambridge Analytica scandal? DiEM25 is the only political movement with a structural approach to this problem – rather than the sad, cosmetic media shows we have seen in the European Parliament.</p><p>
- Third, from a democratic and transparent perspective, we need to take back power over technology. Technology is great, but it is we humans who must have democratic control over it. 
Europe won’t be democratised without democratising its technologies. The futures of democracies, economies, the environment, public life, equality, freedom and justice are entwined with the futures of technologies – and vice versa. </p><p>Join us in shaping the policies for a democratic future <a href="https://diem25.org/progressive-agenda-for-europe/">and send us your proposals</a>.</p><p>Meanwhile, call your MEP to block this awful “copyright reform”.</p><div class="field field-country"> <div class="field-label"> Country or region:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> EU </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> Can Europe make it? digitaLiberties Can Europe make it? EU Christoph Schneider Joren De Wachter Renata Avila DiEM25 Fri, 29 Jun 2018 13:07:22 +0000 Renata Avila, Joren De Wachter and Christoph Schneider 118641 at https://www.opendemocracy.net CIA whistleblower: ''No regrets. I would do it all again'' https://www.opendemocracy.net/digitaliberties/yorgos-boskos-john-kyriakou/cia-whistleblower-no-regrets-i-would-do-it-all-again <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>Whistleblower John Kiriakou explains why he and fellow-whistleblower Thomas Drake are committed to alerting their fellow Americans to a dangerous surveillance and war system designed to monitor their every activity. 31-minute video interview.</p> </div> </div> </div> <p class="normal" style="text-align: left;"><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/IMG-3253.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/IMG-3253.jpg" alt="" title="" width="460" height="252" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style=""/></a> <span class='image_meta'><span class='image_title'>John Kiriakou, 2018. Yorgos Boskos. All rights reserved.</span></span></span>Three years after he was released from prison, former CIA officer John Kiriakou again denounces as illegal and unethical the torture programme he had exposed back in 2007. Kiriakou explains why he feels no regrets about his decision to blow the whistle, although it came at a high price for him, as it did for NSA whistleblower Thomas Drake: he would do it all over again.</p> <p style="text-align: left;" class="normal">Kiriakou recollects the CIA's new director, Gina Haspel, overseeing torture sessions in a secret prison overseas. ‘‘When the programme was finally exposed, Haspel personally ordered the destruction of videotapes of CIA torture’’, Kiriakou says.</p> <p class="normal">On Donald Trump, Kiriakou believes his personal instability to be dangerous. ‘‘There is an anti-Russian hysteria in Washington, it's unlike anything I have seen before in my life. That's why I fear for the country’’, Kiriakou states.</p> <p class="normal">John Kiriakou describes three major techniques that the CIA used: waterboarding, sleep deprivation, and ''cold cell'', which led to the death of two prisoners. 
He believes that ‘‘those techniques were crimes against humanity’’.</p> <p class="normal">The former CIA officer was six years old when Daniel Ellsberg went public with the Pentagon Papers, and still remembers what a hero Ellsberg was in his house and the contribution that he made to American political culture. A few days before Kiriakou went to prison, Ellsberg told him ‘‘you are on the right side of history’’. As Kiriakou admitted, ‘‘Ellsberg was a real inspiration for me’’.</p><p> <iframe height="300" width="460" src="https://drive.google.com/file/d/1YXei-H0_cR-tj8PkmVxCBdEgfcXGB6VZ/preview"></iframe></p> <p class="normal">Emphasizing that the United States has been at war for seventeen consecutive years, Kiriakou states that Barack Obama and Hillary Clinton never saw ‘‘a war that they did not want to jump into with both feet’’. He is not alone in believing that drones create more terrorists than they kill. ‘‘When I was in Pakistan, I captured and interrogated many dozens of Al Qaeda fighters. Most of them told me that they never had a problem with the US until we bombed their village and killed their parents with a drone’’, Kiriakou reveals. According to him, in the last month of Obama's term ‘‘424 people were killed with the use of a drone’’. Moreover, the former CIA analyst affirms that the ‘‘US economy would collapse if we stopped fighting wars because it's a military-based economy’’.</p> <p class="normal">Besides, Kiriakou talks about Edward Snowden, five years after Snowden leaked the biggest cache of top-secret documents in history. To him, ‘‘it was a great public service, we would have no idea that the government was spying on us, without Snowden’’. As Snowden told the New York Times, Thomas Drake and John Kiriakou inspired him to go public with his revelations. ‘‘That made me so happy and proud, so I decided to write him a private letter that my attorney delivered to him in Moscow because I didn't want him to make the same mistakes that I made’’, Kiriakou affirmed.</p> <p class="normal">They spy on us, American citizens. The former CIA officer stressed that the NSA intercepts every phone call, text message, and email. ‘‘They keep all that data in a giant facility in Utah, where there is enough storage space for the next 500 years. This is not what the founders of the country had envisioned’’.</p><p class="normal">He also talks about his best friend, NSA whistleblower Thomas Drake, describing him as ‘‘a bona fide American hero’’. Thomas Drake told Kiriakou that US intelligence agencies can crack all encrypted messaging applications. ‘‘Rather than spend the time and the money breaking the encryption, they are able to mirror your phone in order to intercept it as you are typing, before your message is encrypted and sent’’, he adds.</p> <p class="normal">‘‘CIA tried to teach us that everything in life is a shade of grey, but some things are black and white’’, Kiriakou states and again insists that to him, the torture programme was illegal and immoral. ‘‘I kept my mouth shut for three years, waiting for somebody to come out. Nobody blew the whistle, so I did’’. If he had another chance, what decision would he make? 
‘‘I have no regrets, I would do it all again’’, Kiriakou replies without hesitation.</p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/wfd/mary-fitzgerald-thomas-drake/after-paris-be-careful-what-you-ask-for-interview-with-thomas-drake">After Paris, be careful what you ask for: an interview with Thomas Drake</a> </div> <div class="field-item even"> <a href="/digitaliberties/einar-thorsen/obama-commuted-chelsea-manning-sentence-but-legacy">Obama may have commuted Chelsea Manning’s sentence – but his legacy on whistleblowers is not one of clemency</a> </div> <div class="field-item odd"> <a href="/william-binney-anthony-barnett/%E2%80%9Cwe-had-to-wait-for-snowden-for-proof%E2%80%9D-exchange-with-william-binney">“We had to wait for Snowden for proof”, an exchange with NSA whistleblower William Binney</a> </div> </div> </div> </fieldset> <div class="field field-country"> <div class="field-label"> Country or region:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> United States </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties digitaLiberties Can Europe make it? North-Africa West-Asia United States Yorgos Boskos John Kiriakou Sun, 10 Jun 2018 08:10:09 +0000 John Kiriakou and Yorgos Boskos 118326 at https://www.opendemocracy.net To restore trust in technology, we must go further than GDPR https://www.opendemocracy.net/jens-renner/to-restore-trust-in-technology-we-must-go-further-than-gdpr <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>Privacy controls are a step in the right direction but more must be done to tackle misinformation.</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/565030/Tape_over_laptop_webcam.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/565030/Tape_over_laptop_webcam.jpg" alt="A laptop with its webcam taped over" title="" width="460" height="345" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style="" /></a> <span class='image_meta'><span class='image_title'>A laptop with its webcam taped over. Image: <a href="https://commons.wikimedia.org/wiki/File:Tape_over_laptop_webcam.jpg" target="_blank">Santeri Viinam&auml;ki</a>,&nbsp;CC BY-SA 4.0 CC BY-SA 4.0 </span></span></span>During the past five years, a trend has emerged that can be spotted in most cafés, libraries and corporate headquarters – the covering up of cameras on our personal devices. Once deemed a paranoid precaution, placing a sticker or tape over cameras on our laptops, tablets and even smartphones has now become a relatively commonplace measure. Over a third of Americans now cover up at least some of the cameras on the devices they own, according to a survey published by <a href="https://today.yougov.com/topics/technology/articles-reports/2018/03/12/majority-americans-feel-their-personal-information" target="_blank">YouGov last year</a>. 
The webcam sticker’s rise to ubiquity can be traced back to Edward Snowden’s NSA leaks in the summer of 2013, after which an unprecedented public discussion of digital surveillance and privacy took place. Among the headlines were stories of how <a href="https://opendemocracy.net/freeform-tags/edward-snowden" target="_blank">Edward Snowden</a> and Mark Zuckerberg put stickers on their webcams, which led to a decline in public trust in the little eye above our screens. It was the first time that reports were published on how the American intelligence service, with its GUMFISH plug-in, could monitor people by hijacking their webcams. Since then, the webcam sticker has become a symbol of a growing distrust of technology and our attempt to uphold a sense of privacy. It has come to represent a physical means of protection against an unknown evil in tools we use every day.</p><div>Last month, a landmark piece of privacy regulation came into effect in the European Union, which could have wide-reaching implications not just for how our data is used online but also for our privacy. Several years in the making, the General Data Protection Regulation (GDPR) arrives only months after the <a href="https://opendemocracy.net/digitaliberties/juan-ortiz-freuler/battle-for-decentralized-internet-navigating-troubled-waters-to-g" target="_blank">Cambridge Analytica</a> revelations. Both news stories have – each in their own way – thrown new light on how little control we have over our data and devices. An extensive <a href="http://www.pewinternet.org/2014/11/12/public-privacy-perceptions/" target="_blank">2014 Pew Research survey</a> revealed that there is a widespread sense of powerlessness over digital privacy: while 74% of Americans believe that being in control of their data is very important to them, only 9% feel that they have “a lot” of control over how much information is collected about them online. Is there any hope that we might regain a sense of trust and empowerment, or are we heading towards total alienation and apathy?</div><blockquote class="twitter-tweet"><p dir="ltr" lang="en">3 things about this photo of Zuck:<br /><br />Camera covered with tape<br />Mic jack covered with tape<br />Email client is Thunderbird <a href="https://t.co/vdQlF7RjQt">pic.twitter.com/vdQlF7RjQt</a></p>— Chris Olson (@topherolson) <a href="https://twitter.com/topherolson/status/745294977064828929?ref_src=twsrc%5Etfw">June 21, 2016</a></blockquote> <div>In the wake of the Cambridge Analytica scandal, there has been an uptick in digital activism aimed at reasserting control over the online services which collect our data. On the 19th of March, just two days after the first articles about the scandal appeared in the <em>Observer</em> and in the <em>New York Times</em>, #DeleteFacebook began trending on Twitter. The hashtag was catapulted into the spotlight when Tesla and SpaceX founder Elon Musk deleted the Facebook pages for both his businesses, which had around 2.6 million followers each, in support of the movement. Activists were split over #DeleteFacebook, with some criticising the movement for ignoring the many businesses or communities that rely on the social network and accusing it of being only for the privileged few.</div><div>&nbsp;</div><div>“Most people simply can’t throw [their computers] away and move into the forest to become a self-sufficient farmer,” argues Brandt Dainow, a researcher at the Computer Science Department at Maynooth University, Ireland. 
Dainow knows a thing or two about how our data gets used online. In the late 1990s, he set up a software company to develop tracking software, “doing exactly the sort of stuff that, you know, everybody is now worried about.” He goes silent for a few seconds, then adds: “We just thought it was a way to get services delivered to people in a way that fitted their needs better. We never really thought about how it could be misused.”</div><div class="mag-quote-right">The problem is it's being presented as a data breach by one company and it’s not – it's business as usual.</div><div><br />Perhaps wishing to atone for his earlier contribution to online tracking, Dainow is currently working on a PhD in data ethics. His research focuses on the concept of digital alienation, which he suggests is the fault of tech firms. “Facebook has said ‘we're an advertising company’, so the end-user is the advertiser. There is no point in Facebook spending time and money building features that don't earn it any money, or which might even reduce its profits. It's a profit-making business. And that’s the alienation. It means that the social network in which you live is not designed for you.” As long as we aren’t the end-users, that sense of alienation is unlikely to change. The huge interest in collecting personal data in order to target advertising will prevail, commodifying us, the obvious users, along the way. But the question remains: does the public actually care? “I don't think that this [Cambridge Analytica] controversy is making very much difference to people,” says Dainow. “See, the problem is it's being presented as a data breach by one company, as if it's an isolated incident. And it’s not – it's business as usual.”</div><div><br />The business in question is the data broker industry. Hidden behind your acceptance of a website’s use of cookies are literally hundreds of companies tracking you around the internet. Some of those companies are widely known, like Google and Facebook, but most are not. A study by Yale University’s Privacy Lab from November 2017 found hidden trackers in 75% of all Android apps, and an examination of more than 144 million websites by the tracking blocker company Ghostery found that 77% of all websites had trackers mapping users’ time spent on websites, their clicks, preferences, purchases and behavior. On a website as seemingly innocuous as Weather.com, Ghostery lists 18 different companies trying to track you.</div><div><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/565030/Skærmbillede 2018-06-06 kl. 18.32.40.png" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/565030/Skærmbillede 2018-06-06 kl. 18.32.40.png" alt="A list of companies tracking you on Weather.com" title="" width="460" height="560" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style="" /></a> <span class='image_meta'><span class='image_title'>A list of companies tracking you on Weather.com, according to Ghostery. Image: Jens Renner, CC BY-SA 4.0. </span></span></span></div>
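<p><em>How does a blocker such as Ghostery spot those 18 companies? A minimal sketch of the underlying idea: parse the page, collect the hosts that script, image and iframe tags load resources from, and flag any that match a blocklist. The blocklist, page snippet and matching rule below are illustrative assumptions, not Ghostery's actual list or code:</em></p>
<pre><code>from html.parser import HTMLParser
from urllib.parse import urlparse

# Illustrative blocklist; real blockers ship curated lists of
# thousands of known tracker domains.
TRACKER_DOMAINS = {"doubleclick.net", "scorecardresearch.com", "facebook.net"}

class TrackerFinder(HTMLParser):
    """Collects third-party tracker hosts referenced by a page."""
    def __init__(self, first_party):
        super().__init__()
        self.first_party = first_party
        self.found = set()

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "img", "iframe"):
            return
        host = urlparse(dict(attrs).get("src") or "").hostname or ""
        # Flag hosts that are not the visited site itself and that
        # fall under a known tracker domain.
        if host and not host.endswith(self.first_party):
            if any(host == t or host.endswith("." + t) for t in TRACKER_DOMAINS):
                self.found.add(host)

finder = TrackerFinder("weather.com")
finder.feed('<script src="https://static.doubleclick.net/ad.js"></script>')
print(finder.found)  # {'static.doubleclick.net'}
</code></pre>
<p><em>Browser extensions do this interception at the network layer rather than by parsing static HTML, and their lists are far larger and constantly updated, but the principle – enumerate third-party hosts and match them against known trackers – is the same.</em></p>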
<div>It’s not just websites that are collecting our data; the practice has also become integral to political organising. The Interactive Advertising Bureau, an organisation for online advertisers, published a report in 2012 praising the use of micro-targeted advertising during the Obama campaign. The conclusion: “Micro-targeting may have been instrumental for some campaigns in 2012. Micro-targeted advertising should – and almost certainly will – become part of a more data analytics-driven culture in successful political campaigns of the future – especially larger campaigns, such as the contest for the White House.” While micro-targeted political ads might not be new, we are only just beginning to understand the unscrupulous methods used to obtain the data needed for them. The data used by the Obama campaign was obtained from consenting supporters through an app. However, the data used by the Trump campaign was first obtained by Cambridge Analytica from users who had no knowledge of how it was going to be used. It remains unclear whether the practice of micro-targeting has a significant impact on elections.</div><div>&nbsp;</div><div>The free rein that tech companies once had over collecting and tracking our data may soon be coming to an end – in the EU at least. From May 25, companies within the EU, or companies handling data from within the EU, will have to follow a new set of regulations. Some of the hidden data collectors on the internet today, like AppNexus, Datalogix and DoubleClick – the trackers snooping on my weather-checking habits above – will have to establish a direct relationship with me, in order to ask me for permission to do so. This means no more opt-out checkboxes and stupefyingly long privacy policies. It also means the right to have everything a company has on you erased for good. Skeptics argue that the regulations will further strengthen the monopoly of advertising companies like Google and Facebook, which already have a direct relationship with their European users and thus a higher chance of getting explicit consent to continue the collection of personal information. But proponents of the legislation are equally outspoken, and have maintained that it hands users agency over their information and offers a model for the rest of the world.</div><div class="mag-quote-left">The internet shouldn’t be perfect. It can’t be. The physical world isn’t either.&nbsp;</div><div>Control over our data might not be enough, however. For one, it does not address a far larger problem – the deluge of misinformation online. Technological advances are making it increasingly hard for us to distinguish between what information is false and what isn’t. Last year, researchers at the University of Washington developed an algorithm that can transpose audio onto a 3D model of Barack Obama’s face, giving them the power to create hyper-realistic fake videos of the former US president. The potential risks of such seamlessly doctored videos are alarming. In a recent interview with <em><a href="https://www.buzzfeed.com/charliewarzel/the-terrifying-future-of-fake-news?utm_term=.wjjDABlArl#.wbbXqZeq4e" target="_blank">Buzzfeed</a></em>, Aviv Ovadya, chief technologist at the University of Michigan’s Center for Social Media Responsibility, outlined a scenario where a fake video of Kim Jong-un declaring nuclear war reaches the person in charge of pushing the button to retaliate. “It doesn’t have to be perfect — just good enough to make the enemy think something happened that it provokes a knee-jerk and reckless response of retaliation,” he told the website.</div><div><br />Despite his warnings of a possible “info-apocalypse”, when I speak to Ovadya he is relatively optimistic about the possibility of positive change. “The internet shouldn’t be perfect. It can’t be.
The physical world isn’t either. Facebook and Google developed from a state of nothing, and though they haven’t done that good of a job, they haven’t done that bad either. Ethically they’re not where they should be, but they’re not very far away either. I think they could get there.” Ovadya believes that both companies could eventually get to the point where they are serving the public good. They have acknowledged their role in the fake news crisis, for a start. For Ovadya, change must come from the companies themselves, as he is concerned that external activism cannot influence companies quickly enough as new threats emerge. “Social movements are often slow or coarse, but the problems of technology move very fast. I’m not sure that we can count on public movements to drive nuanced change.”</div><div><br />Although GDPR has its limitations, for now Europeans have gotten more choice and thus more power to influence the terms on which they use their technology. But our relationship with data is more complicated than ever before and changing faster than legislation can keep pace. Until it does, it might be better to keep our webcams covered.</div><div></div><div><em>Correction, June 12 2018: the article originally suggested that Aviv Ovadya was skeptical about the impact of activism. In fact, Ovadya said that he was concerned about whether activism can influence tech companies rather than its overall impact.</em></div> As Cambridge Analytica and SCL Elections shut down, SCL Group's defence work needs real scrutiny https://www.opendemocracy.net/uk/emma-l-briant/as-cambridge-analytica-and-scl-elections-shut-down-scl-groups-defence-work-needs-re <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>We can’t understand the significance of Cambridge Analytica without looking at the network it sits in, and how inadequate controls nurtured aspects of this network’s development.</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/549093/cambridge analytica.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/549093/cambridge analytica.jpg" alt="" title="" width="460" height="347" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style="" /></a> <span class='image_meta'></span></span><em>Image: Cambridge Analytica's offices in central London. Credit: Yui Mok/PA Images, all rights reserved.</em></p><p>In just a month, Cambridge Analytica has gone from relative obscurity to international notoriety. But for me, this story isn’t new. I first interviewed senior figures in Cambridge Analytica’s lesser-known parent company SCL for my 2014 book “<a href="http://www.manchesteruniversitypress.co.uk/9780719091056/">Propaganda and Counter-terrorism - Strategies for Global Change</a>”, and I’ve followed their work closely ever since.</p> <p>It’s been frustrating to watch some of the key players manage to escape crucial questions that should be asked of them. Because this isn’t just a scandal about an obscure, unethical company.
It’s a story about the development of a network of companies that enabled propaganda tools – based on techniques researched and designed for use as weapons in warzones – to be deployed widely on citizens in democratic elections. It’s a logical product of a poorly regulated, opaque and lucrative influence industry. There was little or nothing in place to stop them. </p> <p>Cambridge Analytica’s parent company, SCL, and its founder, Nigel Oakes, have done everything they can to distance themselves from Cambridge Analytica, but politics was important to SCL’s work far earlier than many thought. And SCL’s main clients - NATO and the defence departments of its member states - have managed to get away without being asked how much they knew about what one of their key contractors was up to. </p> <p>Recently the UK’s parliamentary Digital, Culture, Media and Sport Committee Inquiry into Fake News <a href="https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/fake-news-briant-evidence-17-19/">published some of the evidence I submitted, drawing on research interviews</a> for an <a href="http://emma-briant.co.uk/books/">upcoming book</a>, among other publications. Some of my quoted interviews with key figures suggest that SCL’s military arm and Cambridge Analytica’s engagements may have been much more closely related than Oakes or Cambridge Analytica’s former CEO Alexander Nix like to publicly admit. And if governments genuinely didn’t know how the firm was using the skills it developed in counter-terrorism in divisive elections around the world, then this was a huge failing.</p> <h2>SCL’s defence ‘division’</h2> <p>To explain this, I’ll start with a man called Steve Tatham. I first interviewed Tatham for my<a href="http://www.manchesteruniversitypress.co.uk/9780719091056/"> 2014 book</a>, about the work he was doing for the British military, then for SCL. Steve Tatham is a former<a href="https://www.rollingstone.com/politics/features/trump-facebook-data-scandal-cambridge-analyticas-psy-ops-warriors-w518189"> Commanding Officer of Britain's 15 (U.K.) Psyops Group</a><span> </span>and has played a lead role in SCL's defence work,<a href="https://www.flickr.com/photos/119558643@N05/16641794382/in/photostream/"> including through the company IOTA Global, which was part of the SCL Group, delivering training in counter-Russian propaganda in Eastern Europe</a> funded by the Government of Canada, as well as conducting research on target audience analysis which has influenced counter-insurgency doctrine.</p> <p>In February 2017, Carole Cadwalladr began <a href="https://www.theguardian.com/politics/2017/feb/26/robert-mercer-breitbart-war-on-media-steve-bannon-donald-trump-nigel-farage">reporting on Cambridge Analytica</a> in the Observer. On March 2 of that year, Tatham sent an email statement to a list of his contacts. Tatham declared that 'SCL Defence is a completely separate company to Cambridge Analytica, who were contracted to assist the Trump campaign during the election, albeit we are both part of the same group'.</p> <p>On 24th March 2018 <em>The Times</em> reported on SCL Group's <a href="https://www.thetimes.co.uk/article/dirty-tricks-firm-scl-group-trained-uk-officials-r0dvdnj8f">propaganda defence work</a>. In particular, it focussed on training carried out by Tatham for NATO's Center of Excellence in Strategic Communication in Latvia and the UK's Ministry of Defence.
Shortly after the Times report, Tatham's company Influence Options Ltd made another statement, this time more publicly, <a href="https://www.thecanary.co/uk/2018/04/08/bad-news-for-cambridge-analyticas-parent-company-as-a-senior-insider-jumps-ship/">withdrawing from all work with SCL Group</a> and emphasising that they have not worked on any political campaigns. </p> <p>SCL Group has sought to distance SCL defence contracting from political campaign work by stressing SCL Elections and Cambridge Analytica were independent companies. I have no reason to suspect Tatham of having engaged in political work. However, his new statement raises the question of how 'separate' the entities were if they were too close for Tatham to sustain his longstanding relationship with the SCL defence contractor amid Cambridge Analytica allegations. His statement acknowledges he worked for the “defence division” of SCL, language which conveys a different relationship from that spelled out in his email to contacts in March 2017, which declared “completely separate company”. Divisions imply related entities in the same company, not separate companies. So which is it? And if they really are all divisions of the same organisation, surely the unethical activities of one part of the SCL Group urgently demand that real scrutiny be given to the defence 'division' of SCL too – and to government oversight of contracts.</p> <p><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/549093/statement acknowleges.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/549093/statement acknowleges.jpg" alt="" title="" width="460" height="181" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style="" /></a> <span class='image_meta'></span></span></p><p>SCL Ltd became SCL ‘Group’ in August 2015. There seem to have been efforts to distance the entities at least superficially, but the picture is more complex than “completely separate companies” would imply. My own research supports other evidence presented during the UK parliamentary ‘<a href="https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/inquiries/parliament-2017/fake-news-17-19/">Fake News Inquiry</a>’ indicating important staffing overlaps, financial relationships and methods in common between these apparently separate companies.
Also last week, <a href="http://parlvu.parl.gc.ca/XRender/en/PowerBrowser/PowerBrowserV2/20180424/-1/29127?useragent=Mozilla/5.0%20(iPad;%20CPU%20OS%2010_3_3%20like%20Mac%20OS%20X)%20AppleWebKit/603.3.8%20(KHTML,%20like%20Gecko)%20Version/10.0%20Mobile/14G60%20Safari/602.1">in testimony to the Canadian Parliament, Aggregate IQ – who worked with SCL on the Nigeria campaign and for Ted Cruz, and who were contracted by Vote Leave in the UK’s EU Referendum – </a>said they worked with <em>SCL</em>, not Cambridge Analytica, on the Cruz campaign, despite Cambridge Analytica being the entity that worked on this election.</p> <p>Brittany Kaiser, CA’s former Business Development Manager, also <a href="https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/fake-news-kaiser-evidence-17-19/">told the Fake News Inquiry </a>on April 17th that “our company tended to have a business model where we would partner with another company and that company would represent us as SCL Germany, or SCL USA. That was the model.” Kaiser added that she believed SCL Canada and Aggregate IQ were the same. Evidence such as this suggests the existence of a clearly associated network. Furthermore, Brittany Kaiser <a href="https://www.commonwealthy.com/voter-analytics-transcript/">declared in 2016</a> that the underpinnings of Cambridge Analytica’s political methods are the same social scientific research and data science techniques as are used for defence: “This was most often actually used in defense. We work for the Department of Defense and intelligence agencies in counter-terrorism operations with this exact same similar methodology. And now we decided to start building up a database to work in politics,” Kaiser said.</p> <h2>SCL and CA - were they really separate companies?</h2> <p>Another key figure whom I interviewed before this story broke is Nigel Oakes, chief executive of the SCL Group. <a href="https://www.flickr.com/photos/119558643@N05/16569217855/in/photostream/">Here he is pictured</a> at NATO Stratcom in Riga, working with Steve Tatham. Nigel Oakes was listed as Director of IOTA Global, until the company dissolved in January 2017. Our most recent interview in November last year was very illuminating in revealing the relationship between the companies.</p> <p>When Oakes set up SCL Elections and Cambridge Analytica as the new political arm of SCL's business, the political ‘division’ worked less separately from SCL. There are reports of SCL working in <a href="https://qz.com/1240588/cambridge-analytica-how-scl-group-used-indonesia-and-thailand-to-hone-its-ability-to-influence-elections/">elections in Indonesia</a> in 1999. Oakes’ own expertise, which emerged in PR, developed further through counter-terrorism work and shaped the Behavioural Dynamics Institute (BDI), a research unit underpinning SCL methods, and this expertise was being deployed in elections. We need to know which ones.</p> <p>Oakes told me he had worked on politics “in the past. I set up the company [Cambridge Analytica] but <em>now</em>, I'm totally defence and I've gotta <em>be</em> totally defence”. He said, “the defence people can't be seen to be getting involved in politics, and the State Department, they get very <em>upset.</em>” Oakes stated they imposed “strong lines” between the companies.
It seems reasonable to infer that SCL have been restating their separation to ensure survival of business interests in defence and commercial contracting, motivated in part by nervousness and pressure received from the US and UK governments wanting to contract them for defence work. <a href="https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/fake-news-briant-evidence-17-19/">As Oakes said</a> – “they get very <em>upset”.</em></p> <p>Yet in <a href="https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/fake-news-briant-evidence-17-19/">my </a><span>interview</span> with Oakes he referred to what “we” are doing to include Cambridge Analytica, not just his defence division – “…when we explain in the two-minute lift pitch what happened with Trump…” Any lack of clarity here matters – a lot. Cambridge Analytica also stressed that they do <a href="https://ca-political.com/news/time-facts-not-conjecture-says-cambridge-analytica-chief">“no work outside of North America</a>, although the Cambridge Analytica brand is now used worldwide”. According to whistle-blower Chris Wylie, Cambridge Analytica’s work in Nigeria included an ad with a <a href="http://www.theguardian.com/uk-news/2018/apr/04/cambridge-analytica-used-violent-video-to-try-to-influence-nigerian-election">montage of violence – including real footage, from recent history, of people being dismembered and burned</a> – seeking to create fear of Muslims and intimidate voters. </p> <p>And then there’s Sam Patten. Patten was ‘senior director/campaign manager’, according to Kaiser, and oversaw the Nigeria campaign along with a second senior strategist. I interviewed him in July 2017, also about a previous job he did working for the <a href="http://www.iri.org/">International Republican Institute</a> in ‘reconstruction’-era Iraq. He told me he had also worked in the US, in Oregon, during one of the trial runs of Cambridge Analytica’s early deployment of psychographics, later deployed more fully in the Cruz campaign. He talked about preparations for this: “they were training a team, I was part of that team… they [...] trained me in England then they sent me to Canada for more training”; he then developed messaging for the US campaigns. The Canada-based company <a href="https://www.theguardian.com/us-news/2018/apr/06/facebook-suspends-aggregate-iq-cambridge-analytica-vote-leave-brexit">Aggregate IQ were reported in the Guardian as having links to SCL but have sought to distance themselves from that company</a>. Patten observed of the United States, “I’ve worked for Ukraine, Iraq, I’ve worked in deeply corrupt countries, and our system, isn’t very different” (See <a href="https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/fake-news-briant-evidence-17-19/">Explanatory Essay 1</a>). </p> <p><a class="mag-quote-center" href="https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/fake-news-briant-evidence-17-19/">&nbsp;"I’ve worked for Ukraine, Iraq, I’ve worked in deeply corrupt countries, and our system, isn’t very different’"</a></p> <h2>An open secret in Washington </h2> <p>SCL Group’s reputation seemed something of an open secret among some of my contacts in Washington DC information warfare and political campaign circles.
This is conveyed in Patten’s flippant comments about a job with SCL: “Anyway, the irony was… because it was SCL I assumed it was the bad guys, but it wasn’t!”.</p> <p>For many reasons, siloing off activities or divisions can be helpful when a company grows rapidly into new areas. Staff in the original company and in the Behavioural Dynamics Institute, SCL Group’s ‘research institute’ – Tatham among them – are not homogeneous, and there are some cultural distinctions between those with careers originating in defence and those without. Not all of these individuals wished to work with Cambridge Analytica; not all shared the political motivations represented in the lucrative new contracts. </p> <p>Siloing in companies engaged in nefarious or secretive activities of the kind <a href="https://www.youtube.com/watch?v=mpbeOCKZFfQ">Channel 4 </a>revealed at Cambridge Analytica can also help manage the potential for leaks and exposure. Regardless of how or why Oakes and his business partners may have ultimately organised the companies or 'divisions' to perpetuate their activities (somewhat) separately, the point is that there is a network of companies, with SCL Group central to it, which is responsible for a collection of worrying activities and pitching defence-derived methods to shady international actors. I would argue that, given the above evidence, particularly Oakes’ interview and Kaiser’s reporting and testimony, in order to understand and evaluate these activities we must at least consider the related yet somewhat-autonomous companies’ activities alongside each other, rather than in isolation, including:</p> <ul><li>- Assisting the campaigns of politicians using racist and violent video content designed to drive fear and intimidate voters in fragile states (<a href="https://www.theguardian.com/news/2018/mar/17/cambridge-academic-trawling-facebook-had-links-to-russian-university">Cambridge Analytica and Aggregate IQ</a>)</li><li>- Spreading Islamophobic and false narratives in the West, including in the 2016 US election, which were copied for the EU Referendum by Leave.EU (Cambridge Analytica<a href="https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/fake-news-briant-evidence-17-19/"> - see my Explanatory Essay 1</a>). These narratives drive fear of Muslims which is used to justify calls for more spending on ‘counter-terrorism’ (Briant, 2015).</li><li>- Profiting from Western governments’ interventions ostensibly to resolve conflicts (often religious and ethnic) for counter-terrorism and counter-extremism (<a href="http://www.manchesteruniversitypress.co.uk/9780719091056/">see my last book</a>)</li><li>- Also (not mentioned in my submission to parliament), an archived version of <a href="https://web.archive.org/web/20170228043708/https:/sclgroup.cc/elections/projects/uk/">their website</a> appears to indicate that SCL have been involved in three elections in the UK. Though <a href="https://www.byline.com/column/67/article/2069">Nix has said</a> “we don't involve ourselves in the UK as a rule of thumb”, he lists the UK’s Conservative Party in <a href="https://www.byline.com/column/67/article/2122">this letter</a> among parties they have helped.</li></ul> <p>These are not unrelated activities.
</p> <p>When we consider the work of the overall group, these activities might variously be seen to drive instability in precarious democracies, drive fear of Muslims in the West and internationally, then profit from both wars against Muslim countries and Muslims’ marginalisation in the West, while claiming to ‘counter’ extremism.</p> <h2>Controlling the propaganda industries </h2> <p>Damian Collins MP, as Chair of the Fake News Inquiry, should now consider the extent to which Nigel Oakes, as SCL Group CEO and founder of SCL, should share responsibility along with Cambridge Analytica’s former chief executive Alexander Nix. A number of senior Cambridge Analytica figures are now involved with <a href="https://www.ft.com/content/72d791c6-4ee3-11e8-9471-a083af05aea7">Emerdata, a new Mercer-backed data analytics company.</a></p> <p>Oakes and his colleagues have spent many years studying extremism and terrorism, including interviewing terrorists themselves. All of this social science and human intelligence work has been fed into BDI’s research core, which can be drawn on by all the companies. Steve Tatham has <a href="https://toinformistoinfluence.com/2015/11/28/iota-global-assists-nato-coe-in-training-moldova-government-in-strategic-communication/">claimed </a>that: </p> <p>“The BDI methodology uses the most advanced social science research to measure populations and determine, to a high degree of accuracy, how population groups may respond under certain conditions. The methodology is the only one of its type and has been verified and validated by the US Defence Advanced Research Project Agency (DARPA), the Sandia National Laboratories (USA) and the UK’s Defence Science and Technology Laboratories (DSTL).”</p> <p>Oakes said to me, of this social science research core – “without <em>this </em>[Alexander Nix] couldn’t do any of that!” (See <a href="https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/fake-news-briant-evidence-17-19/">Explanatory Essay 3</a>). The companies were well equipped to understand what might drive extremism from their shared research base, and to understand the impact of the 'othering' or violent and terrifying ads deployed in domestic and international campaigns. My evidence shows Oakes is not naïve about the kind of campaigns Cambridge Analytica and his SCL Group deployed in the US. </p> <p>This case also has deeply important implications for our government’s defence contracting. In shocking <a href="http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/fake-news/oral/81592.pdf">new testimony</a>, Brittany Kaiser, former Business Development Manager for Cambridge Analytica, revealed that:</p> <p>“I found documents from Nigel Oakes, the co-founder of the SCL Group, who was in charge of our defence division, stating that the target audience analysis methodology, TAA, used to be export controlled by the British Government. That would mean that the methodology was considered a weapon—weapons grade communications tactics—which means that we had to tell the British Government if that was going to be deployed in another country outside the United Kingdom. I understand that designation was removed in 2015.” </p> <p>Interestingly, August 2015 is when SCL stopped being SCL Ltd and started being SCL ‘Group’. Again, Kaiser refers to “our defence division” – not a separate company.
And regarding other aspects, the US government’s <a href="https://www.darpa.mil/">Defense Advanced Research Projects Agency</a> (DARPA) worked with BDI during the ‘War on Terror’, developing methods together (see Explanatory <a href="https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/fake-news-briant-evidence-17-19/">Essay 3</a>). If the methodologies BDI developed informed tactics deployed in democratic elections, this is very serious, whatever specifically the tools were used for and whether or not they were ‘effective’. It is <em>vital</em> that our governments, including research entities like DARPA, build more control into contracts over the tools and weapons they help to create. They must not escape responsibility when private organisations extend these beyond the original defence work. This must also apply when they are unofficially working together, but not contracted. </p> <p>Furthermore, it seems highly improbable that our intelligence agencies would not have been monitoring destabilising activities in Kenya, Nigeria, Indonesia and other countries with a precarious state of peace and with vulnerable democratic systems. It is their job to anticipate developing conflicts and instability in countries such as these. They also often maintain awareness of any potential security weaknesses, liabilities and conflicts of interest in the background and businesses of individuals working in national security. We should therefore ask how much they, and the State Department and the Pentagon in the US, and the FCO and MoD in the UK, and indeed NATO, might have known about other companies in this ‘Group’. It is vital that anyone with additional evidence that illuminates these questions further comes forward as a priority.</p> <p>My evidence shows that SCL Group had experienced some pressure from Western governments to make the ‘political’ companies more separate from the government contractor – concern that implies at least some knowledge that there might be something to worry about. If so, to what extent did the policy of pushing them for separation, rather than dropping them as a defence contractor, allow SCL to continue their unethical practices? It would be extremely serious if our governments turned a blind eye to unethical work with the potential to destabilise vulnerable nations and potentially trigger future conflicts in which our military might be deployed. At the very least there was poor evaluation of risk and weak oversight, particularly in determining whether the actions of the SCL Group might undermine British and American interests abroad. </p> <p>Importantly, my evidence shows that Leave.EU copied and were able to deploy an effective campaign based on Cambridge Analytica’s methods following the pitch that Kaiser gave them. This raises questions of whether other entities who received a similar pitch could also have replicated the methodology – this is of particular importance in relation to Lukoil for example, <a href="https://www.theguardian.com/news/2018/mar/17/cambridge-academic-trawling-facebook-had-links-to-russian-university">a Russian state-owned company that Cambridge Analytica pitched their methods to </a>around the time SCL were delivering training in methods to Eastern European countries to ‘counter Russian threat’.
</p> <p>Actions in response to this evidence must include a review of the current processes for removal of the ‘export control’ restrictions, along with the process whereby companies bid for a UK Government ‘Framework’ for privileged access to contracts over four years. A lot has changed in the last four years for SCL. Cambridge Analytica has been shut down. Now there must be a proper inquiry into the process of procurement and oversight of government contracts, as the implications of all this are very serious. Most importantly, the actions of a ‘group’ of related but apparently autonomous companies <em>must</em> be treated as relevant, rather than considering the contracted company in isolation. The group must be continuously monitored. We cannot allow this to happen again.</p> Our data doubles: how biometric surveillance ushers in new orders of control https://www.opendemocracy.net/digitaliberties/phoebe-braithwaite-christina-rogers/our-data-doubles-how-biometric-surveillance-ushe <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>The use of biometric data brings the border within the body: algorithms' apparent objectivity and efficiency obscure the brutality of the tasks they accomplish, deciding who is fit to stay or go, who is to live or die.&nbsp;</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/558532/PA-32011157.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/558532/PA-32011157.jpg" alt="" title="" width="460" height="307" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style="" /></a> <span class='image_meta'><span class='image_title'>A refugee boy puts a flower on the fence as he waits with others on the Greek side of the border to enter Macedonia, near Gevgelija, Macedonia. Tomislav Georgiev/PA Images. All rights reserved.</span></span></span></p> <p><strong><em>Phoebe Braithwaite (PB): If you were to let your imagination run away with you, where could we be in, say, 20 years, the way things with biometrics are going?
What would this mean for the most vulnerable members of society, and what for the wealthiest?</em></strong></p> <p dir="ltr">Christina Rogers (CR): Like so many others, I am stuck with the feeling that biometrics is a matter of ‘inevitability’ rather than choice, although I know that this is just how the introduction of biometrics has been framed by industries and governments. Biometrics are increasingly becoming a ‘normal’ thing to encounter, be it at the airport or when unlocking your phone with your fingerprint. I think we are heading further and further towards a normalisation of biometrics in many different spheres, providing access to objects (e.g. laptops), institutions (such as the university or library), states (at borders) and their respective social systems (for example in accessing welfare). </p><p dir="ltr">Current developments point to multiple biometric systems, meaning that systems no longer need rely on a single biometric feature, but rather work by capturing several traits, from facial scanning to iris and fingerprint recognition. This is expected to enhance the accuracy of authentication systems, because multiple sources of information are used and cross-referenced.</p><p dir="ltr">Trusted traveler schemes and border controls already show quite clearly what this means for the wealthiest and most vulnerable members of society. Many privileged travelers voluntarily give away their data and enroll in biometric systems pre-departure in order to use special access corridors at the airport and make their transit smoother. This means that controls can be heightened by positioning border controls prior to the actual border at airports via ‘green-listing’ people, while, at the same time, giving these ‘bona fide’ travelers the feeling that, for them, the border is absent, as they do not have to experience time-consuming checks by border police or flight personnel.</p><p class="mag-quote-left" dir="ltr">Many privileged travelers voluntarily give away their data and enroll in biometric systems in order to gain special access, giving these ‘bona fide’ travelers the feeling that, for them, the border is absent</p><p dir="ltr">While, in these cases, people volunteer themselves for biometric databases, refugees and migrants increasingly face obligatory capture into biometric systems. Applying for asylum or having access to welfare systems, accessing work or getting residence permits mean having to submit to this form of capture. Also, databases containing information on refugees and migrants are increasingly being linked together and used for different purposes, turning the databases regulating access to specific spaces and goods into intelligence files. </p><p dir="ltr">Biometric systems make it much more difficult for certain people to move to and within the EU. Most illegalised migrants in Europe didn’t cross the border by land, but rather travelled to the EU legally on a short-term visa and stayed after it expired. Thus, while visa restrictions and the Dublin regime produce the category of illegalisation as such, the Visa Information System, for example, is a database that helps to detect ‘visa-overstayers’ and deport them to their countries of origin. In other words, the better biometric and other data are stored and circulated throughout Europe, the more easily migrants come to be stuck at borders or concentrated in cities, camps and informal networks. While pieces of the ‘biometric body’ can travel as data, the embodied subject may not.
</p><p dir="ltr"><strong><em>PB: Are biometric passports the main thing to worry about here or should we be concerned about biometrics in every sphere? </em></strong></p><p dir="ltr"><strong>CR:</strong> No, biometric passports are not the main thing to worry about. Making objects machine-readable is one thing, but linking these objects to the ‘machine-readable body’ is another thing entirely. A good example is EasyPASS at the airport: with this system you can scan your e-pass at the border (comprised of a biometric photo and usually also a fingerprint scan) and look into a camera, which will provide a facial scan and perform a one-to-one match. If your facial scan template matches the biometric picture in your passport and you meet the requirements to access a country, you are free to proceed. </p><p class="mag-quote-right" dir="ltr">Biometric systems make it much more difficult for certain people to move to and within the EU.</p><p dir="ltr">But biometrics are also often used for one-to-many (1-n) matches, where a person, for example, is checked against a whole body of data, allowing access to different goods or spaces, such as to asylum in a country (a minimal sketch of both matching modes appears at the end of this answer).</p><p dir="ltr">One problem with biometrics is that the databases go along with a whole system of categorisations and classifications that are attached to the body via biometrics, allowing for mechanisms of ‘social sorting,’ differentiating between the good and the bad or the authorised and unauthorised person. Here, a person’s iris can provide a kind of password for access or denial without a person’s control over that process.</p><p dir="ltr">Second, although technology is often regarded as ‘free of human flaws or errors’ – and, thus, free of discrimination –&nbsp;research in this field has shown that biometric databases do exhibit forms of structural racism, classism, ageism and so on. For instance, people with certain kinds of bodily features may be enrolled less readily; at other times, only certain kinds of people may be enrolled – the Eurodac database, for example, is comprised mostly of the data of people of colour. At the same time, databases are susceptible to human errors, because transnational databases are used by many people, opening up space for false inputs and false links between bodies and data.</p><p dir="ltr">Third, the general problem with biometrics is that the body is instrumentalised in processes of dataveillance and control, often without giving the person sufficient information on where biometric information is stored and circulated and with what consequences, because the databases are not open to democratic insight or judgment.</p><p dir="ltr">People are embedded in bureaucratic processes they know little about, and the systems themselves – while closed to scrutiny – are regarded as truthful and objective. People are therefore liable to become victims of automated decisions, made by algorithms and administrations, not least because the people using the systems are often themselves unaware of how the systems they administer really operate. This has been a matter of discussion with regard to people working in foreign registration offices.</p>
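<p><em>In outline, both matching modes described above reduce to comparing stored feature templates under a distance threshold. The following is a minimal sketch, assuming templates are fixed-length feature vectors; the names, vectors and threshold are illustrative, not those of EasyPASS, Eurodac or any real system:</em></p>
<pre><code>import math

# Toy biometric "templates": real systems extract high-dimensional
# feature vectors from a face, iris or fingerprint scan. All values
# here are illustrative.
def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

THRESHOLD = 0.5  # illustrative acceptance threshold

def verify(probe, enrolled):
    """One-to-one (1:1) match: e.g. a kiosk comparing a live facial
    scan against the template stored in an e-passport chip."""
    return distance(probe, enrolled) < THRESHOLD

def identify(probe, gallery):
    """One-to-many (1:n) match: e.g. checking a fingerprint against a
    whole database; returns the closest enrolled identity, if any."""
    best_id, best_dist = None, THRESHOLD
    for person_id, template in gallery.items():
        d = distance(probe, template)
        if d < best_dist:
            best_id, best_dist = person_id, d
    return best_id

gallery = {"A": [0.1, 0.9, 0.3], "B": [0.8, 0.2, 0.5]}
live_scan = [0.12, 0.88, 0.31]
print(verify(live_scan, gallery["A"]))  # True: accepted at the gate
print(identify(live_scan, gallery))     # 'A': flagged in the database
</code></pre>
<p><em>The threshold is where much of the politics hides: set it loose and the system falsely matches people; set it tight and it falsely rejects them – and, as noted above, those errors do not fall evenly across populations.</em></p>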
<p dir="ltr"><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/PA-30613010.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/PA-30613010.jpg" alt="" title="" width="460" height="277" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" /></a> <span class='image_meta'><span class='image_title'>The managing director of Dermalog, Guenther Mull, explains a fingerprint-scanner to German Chancellor Angela Merkel, Japanese Prime Minister Shinzo Abe and Lower Saxony's premier Stephan Weil in their biometrics kiosk, which is used amongst other things to register refugees, during their tour of the CeBIT trade fair in Hanover, Germany, March 2017. Ole Spata/Press Association. All rights reserved.</span></span></span>We should be concerned about biometrics in every sphere, although biometrics have different consequences for different people. As biometrics are used in ever more spheres of society, there must be a better legal understanding of how this data can be used or accessed. Biometrics are increasingly used as intelligence files, instead of only for specific measures (which is usually problematised under the heading “function creep”). As fragments of the bodies of people are stored and circulated as templates across time and space, there should be a general interest in how this circulation and storage takes place, by whom and to what ends.</p><p dir="ltr"><strong><em>PB: What is the longer history of this kind of categorisation and identification, carried within the body? </em></strong></p><p dir="ltr"><strong>CR:</strong> One can draw lines across history to many forerunners of biometrics, including the custom of potters in ancient China of marking their products with their fingerprints in wax. But I think the most important precedents are developments that took place within the course of the 19th century, when, due to the impact of natural science and Darwin, a general interest in varieties of “life” trickled down into many fields, including medicine, sociology, criminology and theories of racial otherness. </p><p dir="ltr">Two major strands dominate research in the history of biometrics: the development of fingerprinting in British India initially under the chief administrator William Herschel, and the invention of anthropometry by Alphonse Bertillon in France at the end of the 19th century. Fingerprinting was developed in the colonies, as British administrators were faced with the problem (as they themselves put it) of not being able to distinguish one native from the other. Keen to identify and control the colonised, British officials developed a system with which to collect and archive inked fingerprints of Indian subjects, who (due to racist ideology) were generally regarded as prone to criminal behavior. </p><p dir="ltr">Bertillon, on the other hand, developed police anthropometry in France, initially to detect recidivists. His system was very elaborate and included many measurements of the face, arms, fingers, feet, ears, and so on, that were catalogued in a meticulous archiving system, which included photographs of criminals.
Anthropometry, or Bertillonage, not only created an archive but also a classification system of ‘deviant’ Europeans with categories such as “prostitutes”, the “mad”, members of the “working classes” and so on. Unlike fingerprinting, anthropometry failed to set an international standard, simply because it was so complicated.</p><p dir="ltr">The history of biometrics shows that this technology is closely related to the construction and detection of the external or internal “other”. It also shows how the colonies functioned as a “laboratory” for techniques of control and classification that were only then used to surveil whole populations in the west.</p><p dir="ltr"><strong><em>PB: Where are the opportunities for civil disobedience when it comes to something likely to penetrate border policing as deeply as biometric passports? Where are the weak points where pressure can be applied? </em></strong></p><p dir="ltr"><strong>CR:</strong> Well, one can begin on a personal level and simply be a bit <a href="https://en.wikipedia.org/wiki/Bartleby,_the_Scrivener">Bartleby</a> when it comes to presenting oneself at the gateway to biometric systems: easyPASS and many other border systems at airports are still optional, but many do not feel inclined to avoid them, although they have the right to do so. The same is the case for locking one’s device with biometrics. As long as the majority of people allow the introduction of biometrics into their lives to be such a seamless process; as long as they ask no questions, and show no signs of hesitance or resistance, governments and industries will feel no need to provide explanations, and biometrics will be introduced unrestrained.</p><p class="mag-quote-left" dir="ltr">The colonies functioned as a “laboratory” for techniques of control and classification that were only then used to surveil whole populations in the west.</p><p dir="ltr">Attention should also be paid to the legal frameworks supporting biometrics. Much critique has been formulated in terms of privacy issues following the observation that legal frameworks concerned with privacy have increasingly been watered down to allow for biometrics to function in the way they do. People working in law have much leverage to pressure biometric systems. It is crucial to highlight the rights people still have or should make use of in terms of these systems. Beyond simple privacy matters, though, people should devote more attention to other legal categories, such as matters of ‘bodily integrity’ or questions concerning people’s choices over their own body parts.</p><p dir="ltr">Finally, biometrics have been hailed as fairly failsafe in public discourse, but that’s simply not the case: there are many ways to trick these systems. You can create forged irises and fingerprints by replicating biometric traits and fool the system into believing that you are someone else. You can interfere with the scan via a so-called ‘digital spoof’ and make the system believe it has scanned an authenticator which it in fact did not, and so on. </p><p dir="ltr">Many do not have the technical literacy to do it, and it is of course illegal to try to trick a biometric system installed by the state. The very people who could actually benefit from tricking a system are usually also the most vulnerable in society, such as legal or illegalised migrants. Performing such a crime, then, has existential consequences: it may lead to detention, deportation or restricted rights of residence.
People leave biometric traces everywhere, and as it is possible to replicate them, there could be a lot of irritation when more (especially privileged) people begin masquerading, with all this biometric potential flying around…</p><p dir="ltr"><strong><em>PB: What kind of struggles do migrants face in the context of biometrics, and how do they circumvent these obstacles at present?</em></strong></p><p dir="ltr">CR: Migrants face the struggles of having to take considerable care of their ‘data doubles’, as well as of when and where they present themselves to a biometric system. There are many places where they can be scanned into different databases. UNHCR, for example, uses iris scans to control who is eligible for care packages and who can access camps in some countries. Furthermore, the surveillance assemblage of the EU has three major databases regulating the movement of migrants: the Schengen Information System, Eurodac and the Visa Information System.</p><p dir="ltr">The Dublin regime states that people wishing to lodge an asylum claim need to do so in the country they first arrive at. But which country a refugee arrives at can have grave effects on whether they are recognised as refugees. Landing in the wrong country can lead to deportation. It can also determine whether they are allowed to become part of an economic system, or have access to the welfare system, which in turn determines their chances of not only integrating but surviving. Eurodac is the heart of the Dublin regime and uses a biometric matching system to enforce the <a href="http://openmigration.org/en/analyses/what-is-the-dublin-regulation/">Dublin regulations</a>. </p><p dir="ltr">Biometric systems place the border within the bodies of migrants. The European border is not a geographical line at the outer edges of a territory. The biometric border means that the body is inscribed with the power of the border. The biometric border decides, with the scan of a fingertip, who stays and who leaves.</p><p class="mag-quote-right" dir="ltr">Biometric systems place the border within the bodies of migrants. </p><p dir="ltr">Migrants therefore often try to avoid the first stage of biometric capture: enrolment. This means that information on where scans into the Eurodac system are performed is shared via social media in order to let people pass in corridors to and through Europe without being enrolled. People also refuse to have their fingerprints taken. Many voices of the refugee movement in Europe have tried to raise awareness about these issues, and demanded that their biometric data be erased so they can move more freely. Refugees and migrants have made considerable demands to access the data gathered on them and to open this systematic data collection up to democratic debate.
Freedom of movement and questions of data management have been and should be thought of as one struggle in the field of migration.</p> Doing anti-surveillance activism differently https://www.opendemocracy.net/digitaliberties/jane-duncan/doing-anti-surveillance-activism-differently <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>Recent campaigns waged in two Southern African Development Community (SADC) countries provide some interesting lessons about challenging excessive state security power.&nbsp;</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/Screen Shot 2018-04-30 at 19.20.44.png" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/Screen Shot 2018-04-30 at 19.20.44.png" alt="lead " title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Screenshot: Right2Know marches to the South African Parliament, October 2011. YouTube.</span></span></span>South Africa’s new president, Cyril Ramaphosa, has a great deal of work to do to fix the country’s state spy agencies. The most broken agencies are the Crime Intelligence Division of the South African Police Service (Saps) and the State Security Agency (SSA). At best, these agencies&nbsp;<a href="https://www.dailymaverick.co.za/article/2018-02-22-op-ed-what-ramaphosa-needs-to-do-to-fix-state-spying-part-3-intrusive-surveillance-and-counterintelligence/">failed to prevent</a>&nbsp;state capture by corrupt elements linked to former president Jacob Zuma, and at worst, they&nbsp;<a href="https://www.dailymaverick.co.za/article/2017-08-17-op-ed-how-state-spying-enables-state-capture/">enabled</a>&nbsp;state capture.</p> <p>But what if Ramaphosa doesn’t take drastic actions to fix these agencies? Then we’ll have to rely on activists to push for the necessary changes. </p> <p>But which types of activism work, and which don’t, in relation to state surveillance?
In answering this question, it’s useful to draw some lessons from anti-surveillance activism in countries as diverse as the UK, South Africa and Mauritius.&nbsp;</p> <p>Edward Snowden’s leaks about state spy agency excesses reminded us that no country should be allowed to enjoy over-broad surveillance powers. Governments the world over have abused such powers to move far beyond their stated purposes of fighting crime and terrorism, to spy on trade unionists, activists and other politically inconvenient people.&nbsp;Yet, state spy agencies are so shadowy and powerful that they may appear unassailable.&nbsp;</p> <p>We should be particularly concerned if the major global surveillance powers give themselves more powers than they should, because their spying activities are likely to extend far beyond their borders.&nbsp;&nbsp;</p> <h2><strong>UK’s Investigatory Powers Bill – a democratic failure</strong></h2> <p>In view of these dangers, it should concern everyone that in the dying days of 2016, the UK Parliament&nbsp;<a href="http://www.legislation.gov.uk/id?title=Investigatory+Powers+Act+2016">passed</a>&nbsp;the Investigatory Powers Bill into law. This it did despite significant opposition from digital rights and privacy groups. Why were campaigns against the Bill largely unsuccessful?&nbsp;</p> <p>According to&nbsp;<a href="https://www.opendemocracy.net/digitaliberties/arne-hintz-lina-dencik/expanding-state-power-in-times-of-surveillance-realism-how-uk">research</a>&nbsp;conducted by academics at Cardiff University, campaigns against the Bill relied too heavily on specialist lobbying and advocacy, and not enough on broader public awareness-raising and mobilisation.&nbsp;</p> <p>Anti-surveillance advocates failed to engage social justice movements in the campaign, and consequently those movements felt alienated from it. In spite of the fact that many activists were at risk of surveillance, organised responses to the Bill were left to expert communities.&nbsp;</p> <p>With the exception of&nbsp;<a href="https://www.theguardian.com/international">The Guardian</a>, the mainstream press was largely pro-surveillance, and its coverage was dominated by the voices of politicians. The public became resigned to security discourses as terrorist threats were real and present. As a result, there was no significant mass opposition.&nbsp;</p> <p>Yet, a mere six years earlier, UK privacy campaigners had&nbsp;<a href="https://www.theguardian.com/politics/2010/may/27/theresa-may-scrapping-id-cards">stopped</a>&nbsp;the government’s plans to introduce a ‘smart’ ID card system. The media (including the right wing press) questioned official claims about the contributions of ID cards to fighting terrorism. 
The public were cynical about the system and feared misuse of their personal information.&nbsp;</p> <p>Anti-surveillance campaigns that are driven by specialists, and that eschew, or do not pay sufficient attention to, building effective mass opposition, will be doomed to fail.&nbsp;</p> <h2><strong>Public awareness</strong></h2> <p>Anti-surveillance activists need to take movement-building seriously, and a precondition of such work is public awareness-raising.&nbsp;After all, governments with vested interests in the continued viability of the surveillance industry (such as the UK’s) are unlikely to be persuaded to adopt different positions purely on the basis of good arguments.&nbsp;</p> <p>There needs to be social power behind these arguments, too, and social power implies collective action.&nbsp;</p> <p>The forces of reaction are growing stronger by the day in the very countries that lie at the heart of the surveillance industry. If government over-securitisation is going to be challenged effectively, then anti-surveillance and pro-privacy campaigners clearly need to ‘do’ their work differently.&nbsp;</p> <p>This needs to start with mapping those social forces and their organisations that are making progressive socio-economic and democratic claims, and placing them at the centre of anti-surveillance work.</p> <p>But what collective actions are needed to rein in unaccountable surveillance, and which social forces are likely to be most effective?&nbsp;</p> <p>In conditions of neoliberal precarity, where the industrial working class has declined in power, it is harder, but not impossible, to identify the most likely motors of potentially emancipatory social change, including around surveillance.&nbsp;</p> <p>Neoliberalism has sharpened inequality, leading to an increase in the number of marginalised people: these include the unemployed, those in insecure work and others in the ‘precariat’, youths (especially urban youths), black people, Muslims, and lesbian, gay and transgender people.&nbsp;</p> <p>These are the very ‘problem populations’ that are the likely targets of surveillance, which can be used to make them more visible to the state. As a result, they clearly have an interest in resisting unaccountable surveillance. The anti-austerity movements that developed in response to the 2008 global recession have an immediate interest in anti-surveillance work, too.&nbsp;</p> <h2><strong>Existing inequality struggles&nbsp;</strong></h2> <p>But in order to make anti-surveillance work relevant to these movements, it is necessary to find ways of ‘mainstreaming’ this work in the everyday campaigns that bring ordinary people into organised social and political work.&nbsp;</p> <p>Of necessity, working class communities are often highly organised. So, with some creative campaigning, it should not be difficult to relate surveillance and its dangers to mobilisations in defence of public services, for jobs and free education for young people and against climate change.&nbsp;</p> <p>The important part of mass-based anti-surveillance campaigning is to relate the work to existing struggles on the ground. 
These campaigns need to concentrate on the roles of surveillance in the creation and reproduction of inequality, as it is this conflict that is driving the massive expansion of the global security apparatuses, industries and discourses.&nbsp;</p> <h2><strong>Privacy as an enabler of collective rights</strong></h2> <p>If resistance to this expansion is going to be effective, then it needs to provide a political voice to the otherwise voiceless, which means articulating an understanding of privacy that makes most sense to these social groups.&nbsp;</p> <p>This means that the campaigns will need to focus less on privacy as an individual right, and more on its role as an enabler of collective rights. If privacy is denied to these actors, collective discussion and organisation are prevented. The problem with understanding privacy as ‘the right to be left alone’ is that when it is pitched against collective rights, like national security, privacy will inevitably have to give way.&nbsp;</p> <p>Recent campaigns waged in two Southern African Development Community (SADC) countries provide some interesting lessons about challenging excessive state security power.&nbsp;</p> <p>Single-issue campaigns such as the one mounted against the Investigatory Powers Bill allow campaigners to focus consistently on a technically complex issue. But this strength can also be a weakness, in that wily governments are more than able to marginalise these campaigns if they do not enjoy significant social power.&nbsp;</p> <p>One way of drawing on the strengths of single-issue campaigns, while limiting their weaknesses, is to adopt a ‘movement-of-movements’ approach, where coalitions are formed between mass movements and non-governmental organisations (NGOs) on specific issues.&nbsp;</p> <p>For these campaign coalitions to be successful, though, they would need to accept the realities of working-class leadership, and consciously steer away from NGO dominance. Rather, NGOs should play a supporting role, providing technical expertise to the campaign without dominating it.&nbsp;</p> <p>Campaigns involving coalitions of movements and NGOs working on issues of mutual interest are fraught with difficulties, especially if they are cross-class in nature; but if handled with maturity, they can achieve significant results.&nbsp;&nbsp;</p> <h2><strong>South Africa – a useful history </strong></h2> <p>It hasn’t been difficult to build a popular basis for anti-surveillance work in South Africa. The country faces no major national security threats. As a result, governments cannot use terrorism as a stick to ensure public acquiescence to overbroad security and surveillance powers.&nbsp;</p> <p>Many older activists experienced surveillance abuses under apartheid, and know how to mount effective campaigns. 
Surveillance abuses, and broader abuses of the concept of national security to justify massive repression, are still part of their lived experiences.&nbsp;</p><p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/Screen Shot 2018-04-30 at 19.21.49.png" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/Screen Shot 2018-04-30 at 19.21.49.png" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Screenshot: Right2Know marches to the South African Parliament, October 2011. YouTube.</span></span></span>These factors made the onset of ‘surveillance realism’ less likely in the country, and created the basis for intergenerational learning about state surveillance and its dangers.&nbsp; </p><p>South African activists put these lessons to use in a campaign against the Protection of State Information&nbsp;<a href="https://www.gov.za/sites/www.gov.za/files/B6F-2010_15Oct2013.pdf">Bill</a>, which the State Security Ministry wished to use to throw a shroud of secrecy over the country’s security apparatus. Parliament passed an amended version of the Bill in 2013, after a huge public campaign against it.&nbsp;</p> <p>Tellingly, the Bill languished unsigned on the desk of scandal-ridden former president Zuma. Not even he was willing to risk the public backlash of signing it into law, which means that Ramaphosa sits with the headache.&nbsp;</p> <h2><strong>Ramaphosa’s headache</strong></h2> <p>Now, the South African government goes out of its way to consult on potentially controversial Bills. While discussing one such Bill with activists, one senior politician even said ‘please don’t give us another Secrecy Bill campaign’.&nbsp;</p> <p>The tiny island nation of Mauritius, off the coast of south east Africa, also offers some interesting lessons. Mauritius has a highly organised working class, largely owing to militant trade unionism on the sugar plantations.&nbsp;</p> <p>In 2013, the Mauritian government&nbsp;<a href="http://mnis.govmu.org/English/Pages/default.aspx">introduced</a>&nbsp;a ‘smart’ ID card system, similar to the one the UK government envisaged before it abandoned its plans. It argued that the card would help the government stamp out identity fraud and theft.&nbsp;</p> <p>While on the surface this initiative sounded laudable, Mauritians rose up and&nbsp;<a href="http://www.lalitmauritius.org/en/newsarticle/1669/elections-ix-freedom-infringed-danger-of-the-new-id-cards/">opposed</a>&nbsp;the ID card through several campaigns, claiming that it threatened privacy and even democracy itself. The government’s plans were particularly draconian, though: they required residents to carry their identity cards at all times, on pain of a fine or even imprisonment if they didn’t.&nbsp;</p> <p>What gave the Mauritian campaigns such traction was the country’s history of colonialism, slavery and indentured labour. Indentured labourers were required to carry identity cards at all times, which created widespread resentment against mandatory identification systems. 
This lived experience of population registration being abused for social control purposes has been passed down through the generations.&nbsp;</p> <h2><strong>‘It’s part of you’</strong></h2> <p>The campaigns formulated several demands, including that the government should destroy the biometric database and stop the ID card from being mandatory. A popular campaign slogan, ‘It’s part of you’, conscientised citizens about their right to exercise control over their biometrics, as this data was tied intimately to their personhood.&nbsp;</p> <p>Campaigners relied on national radio, posters and leaflets to spread the message. They also used village councils to conduct public education on the dangers of the system, although many of these councils ended up complying with the government.&nbsp;</p> <p>In the campaign, activists made links between workers’ rights and the state’s surveillance efforts, including through the ID card system. In the process, they turned what could otherwise have been an issue dominated by technical specialists into a mass issue that focused on the repressive framework underpinning the system. In other words, they politicised the issue.&nbsp;</p> <p>Activists also engaged in direct action and passive resistance, organising ‘go-slows’ at the ID conversion centres and blocking queues by refusing to enrol their fingerprints. These efforts fostered a popular consciousness about the dangers of biometric technologies.</p> <p>Anti-surveillance actions spread beyond the ID card as people developed confidence in their abilities to struggle. Workers began to refuse to provide fingerprints for registration purposes at their workplaces, and started criticising cameras on buses as violations of their privacy.&nbsp;</p> <p>In spite of the government’s gains in coercing many citizens to enrol, the campaigns could not be ignored, especially by those in the Parliamentary opposition. Senior members of the opposition were brought over to the side of the campaigners, and supported their objectives.&nbsp;</p> <p>Campaigners used the courts, too; but significantly, their recourse to the law was only one of several tactics. This was because activists recognised that they were unlikely to win their demands in the courts if they had not won them on the streets first, as court actions on their own were unlikely to change the balance of social forces.&nbsp;</p> <p>Prominent Mauritians, including the ex-Vice Prime Minister and ex-Minister of Justice, brought court cases against the system on constitutional grounds (such as the right to privacy).&nbsp;</p> <p>Eventually, the government destroyed the centralised biometric database, and converted the biometric system from a one-to-many system (identifying a person by searching the records of an entire enrolled population) to a&nbsp;<a href="http://www.elandsys.com/~sm/mnic-id-card.html">one-to-one system</a>, where a person’s identity is confirmed by comparing their biometric data with their own previously enrolled record; the sketch below spells out the difference.&nbsp;</p> <p>However, the government continued with the mandatory enrolment of citizens and the requirement to present proof of identity to the police on demand, which meant that in spite of this limited victory, the campaign continued.&nbsp;</p>
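<p>To make that distinction concrete, here is a minimal sketch in Python of the two matching modes. The similarity measure, threshold and identifiers are invented placeholders rather than any real biometric matcher, but the architectural point holds: one-to-many identification has to search a centralised database of the whole population, while one-to-one verification only ever touches the single record for a claimed identity.</p> <pre>
# Toy biometric templates: each enrolled person is a set of extracted
# features (real systems use fingerprint minutiae, not small integers).
def similarity(a, b):
    # Placeholder matcher: fraction of shared features between templates.
    return len(a & b) / max(len(a | b), 1)

THRESHOLD = 0.8  # invented acceptance threshold

def verify(probe, enrolled_record):
    """One-to-one: confirm a claimed identity against one stored record."""
    return similarity(probe, enrolled_record) >= THRESHOLD

def identify(probe, database):
    """One-to-many: search the entire enrolled population for matches."""
    return [pid for pid, record in database.items()
            if similarity(probe, record) >= THRESHOLD]

# Hypothetical usage:
db = {"A123": {1, 2, 3, 4, 5}, "B456": {2, 3, 4, 6, 7}}
probe = {1, 2, 3, 4, 5}
print(identify(probe, db))        # searches everyone -> ['A123']
print(verify(probe, db["A123"]))  # checks one record -> True
</pre>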
<p>What do we learn from these campaigns? A political understanding of the problem of surveillance – one that moves beyond a rights-based approach and recognises the root of the problem, which&nbsp;<a href="http://www.truth-out.org/news/item/21656-totalitarian-paranoia-in-the-post-orwellian-surveillance-state">according to</a>&nbsp;Henry Giroux is the growth in the exercise of arbitrary state power as neoliberalism intensifies – is more likely to be both effective and sustainable.&nbsp;</p> <h2><strong>Coda for NGOs</strong></h2> <p>What is key, though, is an approach to anti-surveillance work that builds the capacity of mass movements to take on campaigns themselves, rather than outsourcing the struggle to specialist NGOs.&nbsp;</p> <p>In fact, NGO employees need to make conscious efforts to work their way out of their jobs. Once these campaign strategies are incorporated into anti-surveillance work, activists may start to enjoy some truly significant victories over these most secretive and intractable areas of state and commercial power.&nbsp;&nbsp;&nbsp;</p><p><em>An <a href="https://www.dailymaverick.co.za/article/2018-03-13-op-ed-what-if-ramaphosa-doesnt-fix-state-spying-part-7-activism/#.WuiWC8gh2V4">earlier version of this piece</a> appeared in the </em>Daily Maverick <em>on March 13, 2018</em>.</p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/henry-giroux-joan-pedro-cara-ana/henry-giroux-public-intellectual-on-menace-of-trump-and-new-authori">Henry Giroux, public intellectual, on the menace of Trump and the new authoritarianism</a> </div> <div class="field-item even"> <a href="/ourkingdom/guy-aitchison/digital-rights-and-freedoms-part-1">Digital rights and freedoms: Part 1</a> </div> <div class="field-item odd"> <a href="/ourkingdom/guy-aitchison/digital-rights-and-freedoms-part-2">Digital rights and freedoms: Part 2</a> </div> <div class="field-item even"> <a href="/digitaliberties/jim-killock-rosemary-bechler-guy-aitchison/everybody-has-to-be-incredibly-careful-about-maintaining-">In new gods do we trust?</a> </div> <div class="field-item odd"> <a href="/cathy-oneil-leo-hollis/weapons-of-maths-destruction">Weapons of maths destruction </a> </div> <div class="field-item even"> <a href="/uk/brexitinc/adam-ramsay/cambridge-analytica-is-what-happens-when-you-privatise-military-propaganda">Cambridge Analytica is what happens when you privatise military propaganda</a> </div> </div> </div> </fieldset> <div class="field field-country"> <div class="field-label"> Country or region:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> South Africa </div> <div class="field-item even"> Mauritius </div> <div class="field-item odd"> UK </div> </div> </div> <div class="field field-topics"> <div class="field-label">Topics:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> Civil society </div> <div class="field-item even"> Conflict </div> <div class="field-item odd"> Culture </div> <div class="field-item even"> Democracy and government </div> <div class="field-item odd"> International politics </div> <div class="field-item even"> Internet </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties
Jane Duncan Mon, 30 Apr 2018 16:59:46 +0000 117578 at https://www.opendemocracy.net Microsoft’s Tech Accord – what it tells us about the cyber state of play https://www.opendemocracy.net/digitaliberties/lea-kaspar/microsoft-s-tech-accord-what-it-tells-us-about-cyber-state-of-play <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p class="normal">In the current climate, the impact of the Cybersecurity Tech Accord – which, without explicitly saying it, gestures towards a form of self-regulation for the tech industry – needs close monitoring.</p> </div> </div> </div> <p class="normal"><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/Screen Shot 2018-04-27 at 17.01.04.png" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/Screen Shot 2018-04-27 at 17.01.04.png" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Screenshot: The Guardian, April 19, 2018.</span></span></span>Last week, Microsoft and 33 other leading tech companies unveiled their <a href="https://cybertechaccord.org/accord/">Cybersecurity Tech Accord</a> – an agreement on a broad set of principles committing the signatories to “protecting users and customers everywhere”.</p> <p class="normal">The introduction to the Accord makes its intention clear: it is a corrective to a troubled cyberspace, characterised by a growing proliferation of malicious actors “from criminal to geopolitical” and the deterioration in trust, stability and security that this has brought about. While human rights are not explicitly mentioned, this broad diagnosis of the challenge is one that many human rights defenders will probably share. Exercising privacy and free expression online, after all, depends on a free, open and secure cyberspace. “Protecting our online environment”, as the Accord correctly notes, “is in everyone’s interest”.</p> <h2 class="normal"><strong>Much to welcome</strong></h2> <p class="normal">There is indeed much to welcome in the text from a human rights perspective, not least the commitment in Principle 1 that the parties to the Accord will strive to protect their users and customers from cyberattacks, whether they are by individuals or governments, and no matter their location. This principle also commits the parties to “design, develop, and deliver products and services that prioritize security, privacy, integrity and reliability, and in turn reduce the likelihood, frequency, exploitability, and severity of vulnerabilities”. </p> <p class="normal">The commitment in Principles 3 and 4 – to fostering partnerships with other groups on cybersecurity, and assisting in cyber capacity building in the global South – is also a useful reinforcement of the multistakeholder approach in the context of cybersecurity. Given the growing complexity and urgency of cybersecurity challenges, it should be enacted as soon as possible.</p> <h2 class="normal"><strong>Innocence and other problems</strong></h2> <p class="normal">Other aspects are more problematic. Most notably, Principle 2 introduces a commitment to “oppose cyberattacks on innocent civilians and enterprises”. 
Who will decide whether civilians or enterprises are “innocent” – or, for that matter, “guilty” – and according to what criteria? The inclusion of this undefined and potentially subjective and arbitrary requirement of innocence stands in stark contrast to the universal nature of international human rights and the need for any restrictions to be limited, necessary and proportionate. Without this qualifier – reported <a href="https://www.ft.com/content/269349ba-425c-11e8-803a-295c97e6fd0b">as a last-minute addition</a> – the principle would have been a powerful and unequivocal defence of user rights.</p> <p class="normal">Lastly, the Accord leaves open the question of how these commitments will be implemented. For example, what will happen if a government comes to a signatory of the Accord, seeking access to private communications or data, citing secret intelligence of an urgent threat to national security? How will courses of action be decided on in practice – and how will they be communicated? There is a single mention, at the end, of ‘public reporting’ on progress against goals which are as yet undefined.</p> <h2 class="normal"><strong>Digital Geneva Convention?<br /></strong></h2> <p class="normal">In some ways, the most interesting aspect of this (for now) slender manifesto is what it says about the current state of play in cyberspace, and where we are heading. Why are Microsoft and other tech companies doing this – and why now?</p> <p class="normal">The Accord has to be read in the context of a fractured and fracturing geopolitical system. Last year, efforts to establish consensus on international norms for responsible behaviour in cyberspace at the UN Group of Governmental Experts <a href="https://thediplomat.com/2017/07/un-gge-on-cybersecurity-have-china-and-russia-just-made-cyberspace-less-safe/">stalled dramatically</a>, while the 2017 Global Conference on CyberSpace <a href="https://www.gp-digital.org/gccs2017-a-cyberspace-free-open-and-secure-but-mostly-secure/">proved a showcase</a> for the growing polarisation between several divergent visions of cyberspace. </p> <p class="normal">Though the Accord principally concerns itself with the behaviour of companies, it is a component of Microsoft’s broader <a href="https://blogs.microsoft.com/on-the-issues/2017/02/14/need-digital-geneva-convention/">proposal for a Digital Geneva Convention</a>, which also calls for a new international treaty “to protect civilians, infrastructure and private companies from state-sponsored cyberattacks”. Regardless of whether Microsoft is the right player to call for or broker such an arrangement, the fact that companies are stepping up to take this role is hardly surprising, given the current state of debate.</p> <h2 class="normal"><strong>Rumblings and augurs</strong></h2> <p class="normal">Another important item of context is the growing regulatory pressure on tech companies. When the EU’s General Data Protection Regulation (GDPR) comes into force in May, companies will face substantial new obligations to protect the data of their users. Although its remit is not cybersecurity-specific, the GDPR reflects a changing regulatory tide. 
US tech giants are already having to adapt to the legislation in their European outposts (or, in at least one case, <a href="https://www.theguardian.com/technology/2018/apr/19/facebook-moves-15bn-users-out-of-reach-of-new-european-privacy-law">hurriedly move their operations out of its reach</a>), and even in the US – where a light regulatory environment has long prevailed – there are rumblings and augurs, exemplified by the image of a chastened Mark Zuckerberg testifying before Congress. </p> <p class="normal">The Cybersecurity Tech Accord – which, without explicitly saying it, gestures towards a form of self-regulation for the tech industry – might therefore be seen as an attempt to demonstrate that companies can behave responsibly without additional legal obligations.</p> <p class="normal">Before we can judge the Accord’s likely impact in addressing the issues it identifies, we will need more to go on. But in the current geopolitical climate, it may at least provide an impetus to move us beyond this cyber impasse.</p><div class="field field-country"> <div class="field-label"> Country or region:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> United States </div> <div class="field-item even"> EU </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties Lea Kaspar Fri, 27 Apr 2018 16:10:42 +0000 117547 at https://www.opendemocracy.net In this age of populism, it’s not ‘Cyber’ that’s being Balkanised – it’s people https://www.opendemocracy.net/digitaliberties/edin-omanovic/in-this-age-of-populism-it-s-not-cyber-that-s-being-balkanised-it-s-pe <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>The Balkans, already facing charges of bringing the world the First World War and the term ‘ethnic cleansing’, are now being charged with a new offence: ‘Cyber-Balkanisation’.</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/PA-1176196.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/PA-1176196.jpg" alt="lead " title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>This image, featuring Yugoslavian President Slobodan Milosevic, internal security chief Radomir Markovic, and military leaders Colonel General Dragolub Ojdanic and Lieutenant Colonel General Pavkovic, was shown at a Ministry of Defence press briefing in London, March 29, 1999. PA/Press Association. All rights reserved.</span></span></span>The term itself is dubious. But it’s currently used by learned cyber-experts all over the world, who are usually getting at something fairly simple: the internet, supposed to be a unitary global platform bringing people together, is being split along very non-cyber and real national borders. </p> <p>The evidence is real enough: economic opportunism and aggressive spy agencies around the world are demanding that big tech store the internet’s preferred currency – data – in their national jurisdictions, and modify it to suit their domestic regulations. 
It’s the internet, but with national characteristics: China and Russia have both recently passed strong data localisation laws. </p> <p>However, the real Cyber-Balkanisation isn’t happening to ‘Cyber’ – ‘Cyber’ isn’t an actual thing – it’s happening to people, through the internet. </p> <p>People-Balkanisation-Facilitated-Through-The-Internet might not be as snappy, but it’s very real. And it is much more dangerous. The much snappier terms used to explain our current world are a part of it: fake news, populism, ethno-nationalism, and echo chambers. </p> <p>What happened to the Balkans in the 90s is now playing out internationally. An underlying driver in Yugoslavia – the effect of the transition away from socialism on people’s lives – is comparable to the effect being inflicted by the new digital economy. </p> <h2><strong>Yugoslavia comes apart at the seams</strong></h2> <p>The economy in Yugoslavia collapsed in the 1980s – people who once had money and economic certainty found themselves living through a series of crises and complete instability, facing severe levels of unemployment, foreign debt, and inflation. The feeling of uncertainty and regression is familiar to people in Britain’s industrial cities or the US rust belt, people who are the principal losers not only of the 2008 global collapse but of the new internet-greased economy.</p> <p>In Yugoslavia, politicians were quick to spot the potential: a series of macho dictators emerged promising salvation through a return to historical national greatness. One of them, Radovan Karadzic, a fraudster with ridiculous hair, together with his accomplice Milosevic, promised a Greater Serbia. They wanted to make Serbia great again, but mostly they wanted power. </p> <p>And they knew how to get it. They would turn Yugoslavia’s national motto, Brotherhood and Unity, into one of the most bitter ironies of the modern age. The division they stoked was mutually reinforcing: to their counterparts, Croatians became fascist Ustaše, Muslims mujahideen, and Serbs genocidal Chetniks. Once leaders established the threat, they ruthlessly propagated it through the state-controlled media. Milosevic weaponised Serbian TV, beaming nightly reports of Serbs under ethnic attack into people’s living rooms. </p> <p>Croatia’s leadership followed the script. It was fake news beamed directly into people’s echo chambers. </p> <p>The obvious solution these dictators proposed was ethno-nationalism: the idea that the state should be extended to ensure that all of your fellow ethnics lived in the same country, for their protection and everyone’s glory. </p> <p>Religion, military greatness, and historical injustices mixed to engineer a national mythology based on the threat of the other. And in order to counter that threat, you had to support the strongman who was willing to stand up to it. The path that leads people to rape, pillage, and organised murder was set.</p> <h2><strong>Fear of the Other</strong></h2> <p>It is a tried and tested source of power and today, the internet is the principal means of delivering it. People’s own echo chambers, and the data on which the internet runs, are used to deliver these mythologies with the most accurate targeting system ever invented. </p> <p>One group is told that hordes of scrounging, raping, terrorising foreigners are to blame for all the faults in their lives. Another is told that crusaders armed with drones are responsible for the slaughter of their fellow believers. 
</p> <p>Other groups are targeted with the same myths within their national contexts: in Kenya last year, a Texas-based rival to Cambridge Analytica used data-targeted ads to claim that, once elected, presidential candidate Raila Odinga would remove ‘whole tribes’; in Hungary, voters are told George Soros and immigrants are a threat to their ethnic purity and prosperity and that lists of Jews should be made; in the US, it’s Mexicans and a so-called Muslim database; across the world, articles, memes and videos are targeted at and shared with people as political weapons. We share the same world but not the same internet. </p> <p>The proposed solution is again ethno-nationalist and religious. End all immigration, build walls, send them home, join the caliphate, vote for the candidate that will defend your tribe. Their message is mutually reinforcing, giving everyone a sense of belonging, and the likes of Trump, Tommy Robinson, and ISIS exactly what they want – power.&nbsp;The people weaponising data nowadays are different from those in the Balkans, but they’re very familiar. </p> <h2><strong>No borders?</strong></h2> <p>The internet was indeed designed to transcend old national borders. This means that you no longer need to rely on a state TV station to deliver fake news; it can be targeted directly at those you want it to reach. While in Yugoslavia entire villages and countries may have been living in these echo chambers, it is now possible for people living in the same house to inhabit different ones. This is People-Balkanisation-Facilitated-Through-The-Internet.</p> <p>So what can be done? Well, in the 90s, NATO ended up bombing Serbian TV, but the internet was designed specifically to withstand a nuclear attack. And it doesn’t make sense to censor the internet: you don’t blame ‘television’ just because you disagree with what someone says on it. </p> <p>Calls for state regulation are always dangerous: imagine the new breed of dictator with control over the greatest tool for information ever invented by humankind. Calls for a government-free utopia only leave people at the mercy of corporate power. That power and its influence are, as always, the principal problem – and on the internet those who have the data have the power. </p> <h2><strong>Privacy, anti-trust and data protection laws</strong></h2> <p>Checking that power means having checks on access to and use of data. This means, for example, making sure that state organs don’t have access to mass surveillance capabilities, that data is not concentrated in the private sector into a few platforms, and that there exists legislation protecting how it is used. </p> <p>It means ensuring that spy agencies don’t have persistent access to everyone’s communications, that firms like Cambridge Analytica aren’t allowed to weaponise data in elections, and that people have control over how their data is brokered by firms they’ve never even heard of. 
</p> <p>While privacy, anti-trust, and data protection laws might not sound like a revolutionary plan for peace in our time, they’re currently the best defence we’ve got.</p><div class="field field-topics"> <div class="field-label">Topics:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> Civil society </div> <div class="field-item even"> Conflict </div> <div class="field-item odd"> Culture </div> <div class="field-item even"> Democracy and government </div> <div class="field-item odd"> Economics </div> <div class="field-item even"> Ideas </div> <div class="field-item odd"> International politics </div> <div class="field-item even"> Internet </div> <div class="field-item odd"> Net neutrality </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties Edin Omanovic Thu, 26 Apr 2018 17:46:12 +0000 117525 at https://www.opendemocracy.net The Cam-Book gate scandal will not restore our privacy, will it? https://www.opendemocracy.net/digitaliberties/idreas-khandy/cambridge-analyticafacebook-scandal-will-not-restore-our-privacy-will- <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>For us to care about the practices of corporations, reclaim our privacy and contest mass surveillance, we should not need the shock therapy of Trumpian politics.</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/shutterstock_198736082_0.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/shutterstock_198736082_0.jpg" alt="lead " title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Shutterstock/Ollyy. All rights reserved.</span></span></span>The Cam-Book gate scandal is creating ripples across the world as both Cambridge Analytica and Facebook scramble to control the damage. The data extracted from Facebook without the informed consent of its users&nbsp;was&nbsp;allegedly used to&nbsp;<a href="https://hackernoon.com/cambridge-analytica-what-the-media-wont-tell-you-772d7ec80e4">influence</a>&nbsp;the outcome of the&nbsp;<a href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election">US elections</a>, which saw the rise of Trump and his acolytes to power. Suggestions are&nbsp;being made&nbsp;that&nbsp;<a href="https://www.theguardian.com/uk-news/2018/mar/24/aggregateiq-data-firm-link-raises-leave-group-questions">Brexit</a>&nbsp;was orchestrated&nbsp;using similar tactics. Cambridge Analytica has maintained that it obtained the data&nbsp;<a href="https://www.cnet.com/news/facebook-cambridge-analytica-data-mining-and-trump-what-you-need-to-know/">legally</a>.</p> <p>There is <a href="https://www.reuters.com/article/us-facebook-cambridge-analytica-india/india-queries-cambridge-analytica-over-alleged-facebook-data-breach-idUSKBN1H00AO">noise coming</a>&nbsp;out of India as well&nbsp;about&nbsp;data breaches. 
It is mere politicking rather than a serious debate, since India has no robust privacy laws, and the state is more than happy to give corporations such as Facebook enough leeway as long as they toe its line. India has&nbsp;<a href="https://www.thequint.com/tech-and-auto/tech-news/facebook-gets-2nd-most-user-data-request-from-indian-government-as-per-latest-data">requested user data from Facebook</a>&nbsp;more often than any other country except the&nbsp;United&nbsp;States. In the recent past, Facebook has also&nbsp;<a href="https://www.theguardian.com/technology/2016/jul/19/facebook-under-fire-censoring-kashmir-posts-accounts">blocked users</a>&nbsp;from Indian Occupied Kashmir for expressing anti-India sentiments on its platform, and so has&nbsp;<a href="https://thewire.in/173958/centres-request-twitter-blocks-accounts-tweets-kashmir-content/">Twitter</a>. Were it not for the whistle-blower Christopher Wylie, how many Facebook users would have been aware&nbsp;of&nbsp;the pervasive data-gathering tactics of the social media behemoth?&nbsp;&nbsp;</p> <p>It is disappointing to see that while it has&nbsp;been well documented&nbsp;how Facebook has been violating the privacy of its&nbsp;users, users&nbsp;seem not to care. The good old ‘I have nothing to hide’ argument&nbsp;is peddled&nbsp;left, right, and centre. People appear to have wholeheartedly embraced a disturbing ‘<a href="http://www.hup.harvard.edu/catalog.php?isbn=9780674504578">exhibitionism</a>’, argues Bernard Harcourt in his&nbsp;must-read&nbsp;book ‘<em>Exposed: Desire and Disobedience in the Digital Age</em>’.&nbsp;And this is not the end of the road: Facebook’s hunger for data has only grown, regardless of how much its users provide willingly. </p> <p>Facebook has aggressively pursued a mergers and acquisitions strategy from its earliest days, and&nbsp;has&nbsp;so&nbsp;far succeeded&nbsp;in&nbsp;acquiring as&nbsp;many as 67 businesses, including WhatsApp, Instagram, Jibbigo, tbh, and ConnectU. Many of these acquisitions were purely data-grabbing exercises, as was pointed out in the case of the&nbsp;<a href="https://motherboard.vice.com/en_us/article/pgkmw7/facebook-whatsapp-ftc-complaint-epic">WhatsApp acquisition</a>. It has&nbsp;spent <a href="https://techcrunch.com/2013/02/28/facebook-acquires-atlas/">millions of dollars</a>&nbsp;to reinforce&nbsp;its&nbsp;capacity to target its users with a barrage of ads, the same feature which was allegedly exploited by the Trump campaign. </p> <p>Let’s forget politics for a minute, and talk about how Facebook boosts the numbers of likes and engagement for pages that pay for such a boost. A YouTube channel by the name of Veritasium pointed out way back in 2014 that&nbsp;<a href="https://www.youtube.com/watch?v=oVfHeWTKjag">Facebook&nbsp;utilises&nbsp;click farms in third-world countries</a>&nbsp;to increase the number of likes for pages, a practice that Facebook says is a violation of its policies. </p> <p>Facebook, in fact, has never really cared about&nbsp;<a href="https://www.theguardian.com/technology/2016/nov/02/admiral-facebook-data-insurers-internet-of-things">users’ privacy</a>: privacy and Facebook’s business model, to put it bluntly, are&nbsp;<a href="https://www.motherjones.com/politics/2013/10/facebook-personal-data-online-privacy-social-norm/">antithetical to one another</a>. It is ironic that the same company has put itself&nbsp;in charge&nbsp;of the fight against ‘fake news’. 
</p> <p>So what Cambridge Analytica did was to apply these practices, which are central to&nbsp;<a href="https://www.theverge.com/2018/3/25/17161726/facebook-cambridge-analytica-data-online-marketers">Facebook’s business model and known to its clients</a>, to politics. The question that arises is – was this a first-of-its-kind occurrence? Not at all.</p> <p>The US elections&nbsp;were&nbsp;not the first time Facebook had&nbsp;played a central part in shaping the outcome of an election. How Team Obama made use of Facebook in its campaign for a second term is well known and has become part of&nbsp;<a href="http://eprints.bournemouth.ac.uk/22598/1/Gerodimos_and_Justinussen_%282014%29_final_proofs.pdf">academic literature</a>&nbsp;as well.&nbsp;Although Obama campaign managers claim that they used the data scrupulously,&nbsp;<a href="https://www.theguardian.com/world/2012/feb/17/obama-digital-data-machine-facebook-election">a Guardian story from 2012 says</a>, “<em>The re-election team, Obama for America, will be inviting its supporters to log on to the campaign website via Facebook, thus allowing the campaign to access their personal data and add it to the central data store”.&nbsp;</em></p> <p>Furthermore, The Intercept reported on March 14, 2018, how Facebook ‘<a href="https://theintercept.com/2018/03/14/facebook-election-meddling/">quietly hid’</a>&nbsp;all the blog posts where the company was apparently bragging about its ability to influence elections. So why this outpouring of outrage and disbelief that something like this could happen? Why is the #deletefacebook hashtag being endorsed by the likes of&nbsp;<a href="http://www.thedrum.com/news/2018/03/24/elon-musk-supports-deletefacebook-campaign">Musk</a> now? The answer, apparently, is the rise of Trump to power. Framing the issue of privacy violation as a subset of Trump’s rise to power is deeply problematic. Are we really being asked to care about our privacy only when distasteful politics like Trump’s comes to the fore?&nbsp;</p> <p>For us to care about the practices of corporations, reclaim our privacy and contest mass surveillance, we should not need the shock therapy of Trumpian politics. Privacy and consent are the two essential pillars that support the idea of liberty itself. Take either of them away and you get coercion or self-censorship, potentially on a massive scale. Privacy is far too important to be thought of as a by-product of other processes; it is worth defending and demanding in its own right.</p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/uk/brexitinc/adam-ramsay/cambridge-analytica-is-what-happens-when-you-privatise-military-propaganda">Cambridge Analytica is what happens when you privatise military propaganda</a> </div> <div class="field-item even"> <a href="/anita-gurumurthy-amrita-vasudevan/snowden-to-cambridge-analytica-making-case-for-social-value-of-pri">Snowden to Cambridge Analytica – making the case for the social value of privacy</a> </div> <div class="field-item odd"> <a href="/digitaliberties/people-before-party-having-actual-conversations-in-digital-era">From Obama to Cambridge Analytica: how did we get here? 
(Podcast)</a> </div> <div class="field-item even"> <a href="/digitaliberties/juan-ortiz-freuler/battle-for-decentralized-internet-navigating-troubled-waters-to-g">The Cambridge Analytica scandal is a drop of water trickling down the visible top of an iceberg. Focus on decentralizing power</a> </div> <div class="field-item odd"> <a href="/marcus-gilroy-ware/cambridge-analytica-outrage-is-real-story">Cambridge Analytica: the outrage is the real story</a> </div> <div class="field-item even"> <a href="/digitaliberties/ivan-manokha/cambridge-analytica-surveillance-is-dna-of-platform-economy">‘Cambridge Analytica’: surveillance is the DNA of the Platform Economy</a> </div> <div class="field-item odd"> <a href="/uk/jennifer-cobbe/problem-isn-t-just-cambridge-analytica-or-even-facebook-it-s-surveillance-capitali">The problem isn’t just Cambridge Analytica or Facebook – it’s “surveillance capitalism”</a> </div> </div> </div> </fieldset> <div class="field field-country"> <div class="field-label"> Country or region:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> United States </div> <div class="field-item even"> India </div> <div class="field-item odd"> EU </div> <div class="field-item even"> UK </div> </div> </div> <div class="field field-topics"> <div class="field-label">Topics:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> Democracy and government </div> <div class="field-item even"> International politics </div> <div class="field-item odd"> Internet </div> <div class="field-item even"> Net neutrality </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties Idreas Khandy Sun, 15 Apr 2018 16:04:34 +0000 117289 at https://www.opendemocracy.net Dreadful symmetry: kill boxes, racism and US sovereign power in the digital age https://www.opendemocracy.net/neal-curtis/dreadful-symmetry-kill-boxes-racism-and-us-sovereign-power-in-digital-age <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>Nearly all of the killings and excuses for killings carry this mark of the “pre-insurgent”. All the time we hear, “we thought he was reaching for a gun”.</p> </div> </div> </div> <p><a href="https://opendemocracy.net/hri"><img src="//cdn.opendemocracy.net/files/finalbannerhri2_0.jpg" alt="HRI" width="460px" /></a></p> <p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/PA-35752554.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/PA-35752554.jpg" alt="lead lead " title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460"/></a> <span class='image_meta'><span class='image_title'>March 28, 2018, New York: protests over the fatal police shooting of Stephon Clark, an unarmed black man in Sacramento, California. Erik McGregor/Press Association. 
All rights reserved.</span></span></span>I started writing this just as the funeral for <a href="https://www.washingtonpost.com/news/post-nation/wp/2018/03/29/funeral-begins-for-stephon-clark-amid-outrage-over-fatal-police-shooting/?utm_term=.31ed99391722">Stephon Clark</a> was taking place in Sacramento. Clark was the latest in a very long line of unarmed African Americans summarily killed by police in the US. </p> <p>In this case, a black man carrying nothing but a cell phone was shot 20 times in his own garden after police were called out to a reported incident in the area, one to which Clark had no connection. The banality of standing in your own garden contrasted with the violence of 20 shots – so extreme it tore his body apart, preventing the family from ritually washing it – seems to make this case stand out. But it doesn’t. </p> <p>The <a href="https://blacklivesmatter.com">Black Lives Matter</a> movement acts as testimony to the countless deaths within the black community and the continuation of the personal and state violence that has been visited upon black people since the onset of colonialism and the mass exploitation, torture and murder that was slavery.</p> <h2><b>The language of ‘bare life’ and the ‘war on terror’</b></h2> <p>The funeral of Stephon Clark follows the court decision not to charge the officers that killed <a href="https://edition.cnn.com/2018/03/27/us/alton-sterling-investigation/index.html">Alton Sterling</a> in Baton Rouge in 2016. This fits the pattern in which the killing of black men in particular appears to have no legal consequence. From the perspective of the families these are homicides or murders. From the perspective of political activists these are akin to state executions or assassinations. But the language is important here. Murder assumes some sort of legal redress, while execution suggests some intended legal or institutional support. </p> <p>Although I am convinced the US legal system and its police force remain deeply racist, and that these killings might be deemed sanctioned executions from that perspective, these events are much closer to the production of what Giorgio Agamben in his book <i>Homo Sacer</i> called <a href="http://criticallegalthinking.com/2015/07/02/sovereign-exception-notes-on-the-thought-of-giorgio-agamben/">“bare life”</a>. </p> <p>This concept refers to what happens in a state of emergency, when the sovereign, seen in peacetime as the guarantor of the law, suspends it in response to an external or internal threat. Ordinarily we call this “emergency powers”, defined by the suspension or withdrawal of the usual legal protections. </p> <p>At such times, this suspension of the law allows people to be incarcerated, harmed or killed without any legal redress. Instead of being protected, people are exposed as “bare life” and subjected to violence without any mediation. For Agamben, the ultimate space of this exceptional politics is the concentration camp. From the perspective of the law, “bare life” literally doesn’t matter, which is why the name of the latest black civil rights movement is so pertinent.</p> <p>Since the announcement of the “War on Terror” in September 2001, this exceptional politics or state of emergency has become our normal condition. 
Not only did the US support the use of torture and use the full force of digital warfare to launch pre-emptive strikes, it also undermined domestic civil rights with far-reaching surveillance legislation under the euphemistically named <a href="https://www.aclu.org/issues/national-security/privacy-and-surveillance/surveillance-under-patriot-act">PATRIOT Act</a> that gave the state unparalleled access to personal data collected through search engines, social media, and mobile phone records. </p> <p>After the catastrophic invasion of Iraq in 2003, in which the entire Iraqi population was treated as “bare life”, and where specific sites of sovereign power like <a href="https://video.search.yahoo.com/yhs/search;_ylt=A0geK99Ua71a0UcAAWoPxQt.?p=Middle+East+eye+man+under+the+hood+abu+Ghraib&amp;fr=yhs-Lkry-SF01&amp;fr2=piv-web&amp;hspart=Lkry&amp;hsimp=yhs-SF01&amp;type=ANYS_A09AW_ext_bsf#id=1&amp;vid=598a0cb404b2f4eb812505b7c8ee3195&amp;action=view">Abu Ghraib</a> became notorious, US foreign policy took a dramatic shift under Obama. Although the earlier invasion had already been marked by the use of <a href="http://www.nbcnews.com/id/3855079/ns/world_news-mideast_n_africa/t/digital-warfare-systemhuntsiraq-rebels/#.WsLr22Z7FLA">digital technologies</a>, Obama decided that the will of America would be delivered through the drone <a href="https://understandingempire.wordpress.com/2-0-a-brief-history-of-u-s-drones/">apparatus</a>: a collection of pilotless armoured planes, human operators, data banks, screens, algorithms, computers and the Internet. Given that the Internet was a military invention, the drone apparatus is the ultimate fantasy in the military desire for decentred, flexible, multi-theater, “casualty-free” warfare.</p> <h2><b>‘Five Eyes’ and the ‘kill box’ come home </b></h2> <p>This apparatus – made international through the <a href="https://theintercept.com/2018/03/01/nsa-global-surveillance-sigint-seniors/">“Five Eyes” programme</a> – works by collecting data from the web, email, and digital social networks, as well as locative media and visual surveillance technologies, to track down known suspects or detect people deemed to be potential terrorists, or what is termed “pre-insurgent”. </p> <p>Of course, because “pre-insurgency” is epidermally biased, it is not hard to imagine how easy it is to become a target if you are brown or black and live in North Africa or the Middle East. Targets found by the apparatus itself are fed back into the “kill chain”, a hierarchy of human command that signs off on their killing. In the language of the apparatus, the system produces spaces known as <a href="https://www.cnet.com/news/life-in-the-kill-box-eye-in-the-sky-targets-the-ethics-of-drone-strikes/">“kill boxes”</a> in which any legal right the target has under International Law as a civilian or under the Geneva Convention as a soldier/warfighter is removed, allowing them to be killed with impunity. 
Instead of large, static spaces like Guantanamo Bay, in which the politics of the exception has been brought to bear on “foreign” bodies, the “kill box” is a micro, momentary, and fleeting space that produces “bare life” and through which contemporary sovereign power operates.</p> <p>Surveillance <a href="https://www.youtube.com/watch?v=FeL8DAnjk40">footage</a> of Stephon Clark’s garden, taken from the helicopter called out to assist in the reported incident in which he was an innocent bystander, brought to mind the operations of the drone apparatus, and how perfectly – with a dreadful, terrifying symmetry – the military euphemism of the “kill box” fits the domestic execution of US sovereign power. </p> <p>Like the state of emergency that underpins its foreign policy, a state of emergency also operates domestically. This is a self-induced crisis brought about by the country’s gun cult, the persistence of white supremacy, and its system of gross economic inequality. </p> <p>Nevertheless, in a country that prides itself on the rule of law, this politics of ‘exception’ continually opens up these micro spaces in which black life is reduced to bare life and no one need be held to account. These micro, temporary kill boxes that open up on US streets with frightening regularity are, of course, the latest in a long history of violence against black people, and appear within a social space already racially biased. </p> <p>Just like the enemy abroad, black people at home are seen as “pre-insurgent”, as if the fear of slave revolts still hangs heavy over US culture. Nearly all of the killings and the excuses for the killings carry this mark of the “pre-insurgent”. All the time we hear, “we thought he had a gun” or “we thought he was reaching for a gun”. From the position of white supremacy a person of colour is always about to be a <a href="https://www.vox.com/identities/2017/3/17/14945576/black-white-bodies-size-threat-study">threat</a>.</p><p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/Curtis KIll video screenshot.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/Curtis KIll video screenshot.jpg" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460"/></a> <span class='image_meta'><span class='image_title'>Screen shot.</span></span></span></p> <h2><b>Black Lives Matter</b></h2> <p>The Black Lives Matter movement went viral after the acquittal of Trayvon Martin’s killer in 2013. The hashtag #BlackLivesMatter has since become an international sign for justice, and has demonstrated how important social media and the web are for organizing political resistance, in an age in which the logic of the exception and the state of emergency are becoming the new normal. 
</p> <p>It is incumbent upon anyone with a concern for our fast-dwindling democratic protections to support this movement.</p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-read-on"> <div class="field-label"> 'Read On' Sidebox:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <p><a href="https://opendemocracy.net/hri"><img src="//cdn.opendemocracy.net/files/smallhribanner.jpg" alt="" /></a></p> <p>More from the <a href="https://opendemocracy.net/hri">Human Rights and the Internet</a> partnership.</p> </div> </div> </div> <div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/daniele-archibugi/targeted-killings-through-drones-are-war-crimes">Targeted killings through drones are war crimes</a> </div> <div class="field-item even"> <a href="/uk/brexitinc/adam-ramsay/cambridge-analytica-is-what-happens-when-you-privatise-military-propaganda">Cambridge Analytica is what happens when you privatise military propaganda</a> </div> <div class="field-item odd"> <a href="/cathy-oneil-leo-hollis/weapons-of-maths-destruction">Weapons of maths destruction </a> </div> <div class="field-item even"> <a href="/opensecurity/stephen-graham/foucault%E2%80%99s-boomerang-new-military-urbanism">Foucault’s boomerang: the new military urbanism</a> </div> <div class="field-item odd"> <a href="/opensecurity/carly-nyst/un-privacy-report-five-eyes-remains">The UN privacy report: Five Eyes remains</a> </div> <div class="field-item even"> <a href="/bassam-gergi/need-for-national-action-on-us-gun-reform">The bleeding need for national action on US gun reform</a> </div> </div> </div> </fieldset> <div class="field field-country"> <div class="field-label"> Country or region:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> United States </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties hri United States Neal Curtis Wed, 04 Apr 2018 08:42:55 +0000 Neal Curtis 117006 at https://www.opendemocracy.net Manifesto on algorithmic humanitarianism https://www.opendemocracy.net/dan-mcquillan/manifesto-on-algorithmic-humanitarianism <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p class="Compact">The nature of machine learning operations means they will actually deepen some humanitarian problematics and introduce new ones of their own. This banality of machine learning is also its power.</p> </div> </div> </div> <p><a href="https://opendemocracy.net/hri"><img src="//cdn.opendemocracy.net/files/finalbannerhri2_0.jpg" alt="HRI" width="460px" /></a></p> <p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/PA-24825892_3.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/PA-24825892_3.jpg" alt="lead " title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460"/></a> <span class='image_meta'><span class='image_title'>Small grocery shop in The Jungle refugee camp in Calais, 2015. Debets/Press Association.
All rights reserved.</span></span></span></p> <h2> intro </h2><ol><li>Artificial Intelligence (AI) is undergoing a period of massive expansion.</li><li>This is not because computers have achieved human-like consciousness but because of advances in machine learning, where computers learn from training data how to classify new data.</li><li>At the cutting edge are the neural networks that have learned to recognise human faces or play Go.</li><li>Recognising patterns in data can be used as a predictive tool, and AI is being applied to echocardiograms to predict heart disease</li><li>to workplace data to predict if employees are going to leave</li><li>and to social media feeds to detect signs of incipient depression or suicidal tendencies.</li><li>One activity that seems distant from AI is humanitarianism;</li><li>the organisation of on-the-ground aid to fellow human beings in crisis due to war, famine or other disaster.</li><li>But humanitarian organisations, too, will adopt AI because it seems able to answer questions at the heart of humanitarianism</li><li>such as 'who should we save?' and 'how can we be effective at scale?'</li><li>It resonates strongly with existing modes of humanitarian thinking and doing</li><li>in particular the principles of neutrality and universality.</li><li>The way machine learning consumes big data and produces predictions</li><li>suggests it can both grasp the enormity of the humanitarian challenge and provide a data-driven response.</li><li>But the nature of machine learning operations means they will actually deepen some humanitarian problematics</li><li>and introduce new ones of their own.</li><li>Thinking about how to avoid this raises wider questions about emancipatory technics</li><li>and what else needs to be in place to produce machine learning for the people.</li></ol><h2>maths</h2><ol><li>There is no intelligence in Artificial Intelligence</li><li>nor does it really learn, even though its technical name is machine learning</li><li>it is simply mathematical minimisation</li><li>Like at school, fitting a straight line to a set of points</li><li>you pick the line that minimises the differences overall</li><li>Machine learning does the same for complex patterns</li><li>it fits input features to known outcomes by minimising a cost function (a minimal sketch of this minimisation follows below)</li><li>the fit is a model that can be applied to new data to predict the outcome</li><li>The most influential class of machine learning algorithms is neural networks</li><li>which is what startups call 'deep learning'</li><li>They use backpropagation: a minimisation algorithm that produces weights in different layers of neurons</li><li>anything that can be reduced to numbers and tagged with an outcome can be used to train a model</li><li>the equations don't know or care if the numbers represent Amazon sales or earthquake victims</li><li>This banality of machine learning is also its power</li><li>it's a generalised numerical compression of questions that matter</li><li>there is no comprehension within the computation</li><li>the patterns are correlation not causation</li><li>the only intelligence comes in the same sense as military intelligence; that is, targeting</li><li>But models produced by machine learning can be hard to reverse into human reasoning</li><li>why did it pick this person as a bad parole risk? what does that pattern of weights in the 3rd layer represent? we can't necessarily say.</li></ol>
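<p><i>A minimal sketch of that minimisation, with invented numbers: gradient descent nudging a slope and an intercept until a squared-error cost stops shrinking. The data points and the learning rate are made up for illustration; nothing here is specific to any real system.</i></p> <pre><code># A straight line fitted by minimising a cost function - "learning" in
# miniature. All data and the learning rate are invented for illustration.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]   # roughly y = 2x + 1

w, b = 0.0, 0.0   # slope and intercept, starting anywhere
lr = 0.01         # learning rate, chosen by hand

for step in range(5000):
    # cost = mean squared error; its gradient says which way to nudge w and b
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * dw
    b -= lr * db

print(f"fitted line: y = {w:.2f}x + {b:.2f}")   # approx. y = 1.96x + 1.10
</code></pre> <p><i>The same loop, run over millions of weights arranged in layers, is backpropagation; the equations neither know nor care what the numbers stand for.</i></p>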
<h2>reasoning</h2><ol><li>Machine learning doesn't just make decisions without giving reasons, it modifies our very idea of reason</li><li>that is, it changes what is knowable and what is understood as real</li><li>It operationalises the two-world metaphysics of Neoplatonism</li><li>that behind the world of the sensible is the world of the form or the idea.</li><li>A belief in a hidden layer of reality which is ontologically superior,</li><li>expressed mathematically and apprehended by going against direct experience.</li><li>Machine learning is not just a method but a machinic philosophy</li><li>What might this mean for the future field of humanitarian AI?</li><li>It makes machine learning prone to what Miranda Fricker calls epistemic injustice</li><li>She meant the social prejudice that undermines a speaker's word</li><li>but in this case it's the calculations of data science that can end up counting more than testimony</li><li>The production of opaque predictions with calculative authority</li><li>will deepen the self-referential nature of the humanitarian field</li><li>while providing a gloss of grounded and testable interventions</li><li>Testing against unused data will produce hard numbers for accuracy and error</li><li>while making the reasoning behind them inaccessible to debate or questioning</li><li>Using neural networks will align with the output-driven focus of the logframe</li><li>while deepening the disconnect between outputs and wider values</li><li>Hannah Arendt said many years ago that cycles of social reproduction have the character of automatism.</li><li>The general threat of AI, in humanitarianism and elsewhere, is not the substitution of humans by machines but the computational extension of existing social automatism</li></ol><h2>production</h2><ol><li>Of course the humanitarian field is not naive about the perils of datafication</li><li>We all know machine learning could propagate discrimination because it learns from social data</li><li>Humanitarian institutions will be more careful than most to ensure all possible safeguards against biased training data</li><li>but the deeper effect of machine learning is to produce new subjects and to act on them</li><li>Machine learning is performative, in the sense that reiterative statements produce the phenomena they regulate</li><li>Humanitarian AI will optimise the impact of limited resources applied to nearly limitless need</li><li>by constructing populations that fit the needs of humanitarian organisations</li><li>This is machine learning as biopower</li><li>its predictive power will hold out the promise of saving lives</li><li>producing a shift to preemption</li><li>but this is effect without cause</li><li>The foreclosure of futures on the basis of correlation rather than causation</li><li>it constructs risk in the same way that Twitter determines trending topics</li><li>the result will be algorithmic states of exception</li><li>According to Agamben, the signature of a state of exception is ‘force-of’</li><li>actions that have the force of law even when not of the law</li><li>Logistic regression and neural networks generate mathematical boundaries (see the sketch after this list)</li><li>but cybernetic exclusions will have effective force by allocating and withholding resources</li><li>a process that can't be humanised by having a humanitarian-in-the-loop</li><li>because it is already a technics, a co-constituting of the human and the technical</li></ol>
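<p><i>A minimal sketch of that boundary-drawing, again with invented data: a logistic regression that 'allocates' a resource, reports a hard accuracy number on held-out data, and reduces its reasoning to a line in feature space. It is illustrative only – the dataset, the hidden rule standing in for 'need', and the applicant are all made up.</i></p> <pre><code># Invented illustration of a classifier that allocates or withholds a
# resource. Every number is synthetic; the point is the shape of the process.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))                  # two arbitrary 'features'
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # a hidden rule stands in for 'need'

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 'Testing against unused data will produce hard numbers for accuracy...'
print("held-out accuracy:", model.score(X_test, y_test))

# '...while making the reasoning inaccessible': the model is just a boundary,
# the set of points where w . x + b = 0.
print("weights:", model.coef_[0], "intercept:", model.intercept_[0])

# The cybernetic exclusion itself: one side of the line gets the resource.
applicant = np.array([[0.2, -0.8]])
print("allocate resource:", bool(model.predict(applicant)[0]))
</code></pre> <p><i>The printed score is the kind of number that circulates in a logframe; the inequality that actually allocates or withholds is not a reason anyone can answer back to.</i></p>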
<h2>decolonial</h2><ol><li>The capture, model and preempt cycle of machine learning will amplify the colonial aspects of humanitarianism</li><li>unless we can develop a decolonial approach to its assertions of objectivity, neutrality and universality</li><li>We can look to standpoint theory, a feminist and post-colonial approach to science</li><li>which suggests that positions of social and political disadvantage can become sites of analytical advantage</li><li>this is where our thinking about machine learning and AI should start from</li><li>but I don't mean by soliciting feedback from humanitarian beneficiaries</li><li>Participation and feedback are already forms of socialising subjects</li><li>and with algorithmic humanitarianism every client interaction will be subsumed into training data</li><li>They used to say 'if the product is free, you are the product'</li><li>but now, if the product is free, you are the training data</li><li>training for humanitarian AI and for the wider cybernetic governance of resilient populations</li><li>Machine learning can break out of this spiral through situated knowledge</li><li>as proposed by Donna Haraway as a counterweight to the scientific ‘view from nowhere’,</li><li>a situated approach that is not optional in its commitment to a particular context</li><li>How does machine learning look from the standpoint of Haiti's post-earthquake rubble or from an IDP camp?</li><li>No refugee in a freezing factory near the Serbian border with Croatia is going to be signing up for Andrew Ng's MOOC on machine learning any time soon</li><li>How can democratic technics be grounded in the humanitarian context?</li></ol><h2>people's councils</h2><ol><li>It may seem obvious that if machine learning can optimise Ocado deliveries then it can help with humanitarian aid</li><li>but the politics of machine learning are processes operating at the level of the pre-social</li><li>One way to counter this is through popular assemblies and people's councils</li><li>bottom-up, confederated structures that implement direct democracy</li><li>replacing the absence of a subject in the algorithms with face-to-face presence</li><li>contesting the opacity of parallel computation with open argument</li><li>and the environmentality of algorithms with direct action</li><li>The role of people's councils is not to debate for its own sake</li><li>but to create alternative structures, in the spirit of Gustav Landauer's structural renewal</li><li>An emancipatory technics is one that co-constitutes active agents and their infrastructures</li><li>As Landauer said, people must 'grow into a framework, a sense of belonging, a body with countless organs and sections'</li><li>as evidenced in Calais, where people collectively organised warehouse space, van deliveries and cauldrons to cook for hundreds, while regularly tasting tear gas</li><li>I suggest that solidarity is an ontological category prior to subject formation</li><li>collective activity is the line of flight from a technological capture that extends market relations to our intentions</li><li>It is a politics of becoming – a means without end to counter AI's effect without cause</li></ol><h2>close</h2><ol><li>In conclusion</li><li>as things stand, machine learning and so-called AI will not be any kind of salvation for humanitarianism</li><li>but will deepen the neocolonial and neoliberal dynamics of humanitarian institutions</li><li>But no apparatus is a closed teleological system; the impact of machine learning is contingent and can be changed</li><li>it's not a question of people versus machines but of a
humanitarian technics of mutual aid</li><li>In my opinion this requires a rupture with current instantiations of machine learning</li><li>a break with the established order of things of the kind that Badiou refers to as an Event</li><li>the unpredictable point of excess that makes a new truth discernible</li><li>and constitutes the subjects that can pursue that new truth procedure</li><li>The prerequisites will be to have a standpoint, to be situated, and to be committed</li><li>it will be as different to the operations of Google as the Balkan aid convoys of the 1990s were to the work of the ICRC</li><li>On the other hand, if an alternative technics is not mobilised,</li><li>the next generation of humanitarian scandals will be driven by AI.</li></ol> <p>&nbsp;</p><p><i>Presented at the symposium on 'Reimagining Digital Humanitarianism', Goldsmiths, University of London, Feb 16th 2018</i>. More details of the symposium can be <a href="https://www.gold.ac.uk/calendar/?id=11362">found here.</a></p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-read-on"> <div class="field-label"> 'Read On' Sidebox:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <p><a href="https://opendemocracy.net/hri"><img src="//cdn.opendemocracy.net/files/smallhribanner.jpg" alt="" /></a></p> <p>More from the <a href="https://opendemocracy.net/hri">Human Rights and the Internet</a> partnership.</p> </div> </div> </div> <div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/anita-gurumurthy-amrita-vasudevan/snowden-to-cambridge-analytica-making-case-for-social-value-of-pri">Snowden to Cambridge Analytica – making the case for the social value of privacy</a> </div> <div class="field-item even"> <a href="/neal-curtis/dreadful-symmetry-kill-boxes-racism-and-us-sovereign-power-in-digital-age">Dreadful symmetry: kill boxes, racism and US sovereign power in the digital age</a> </div> <div class="field-item odd"> <a href="/cathy-oneil-leo-hollis/weapons-of-maths-destruction">Weapons of maths destruction </a> </div> </div> </div> </fieldset> <div class="field field-topics"> <div class="field-label">Topics:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> Civil society </div> <div class="field-item even"> Conflict </div> <div class="field-item odd"> Ideas </div> <div class="field-item even"> International politics </div> <div class="field-item odd"> Internet </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties hri Civil society Conflict Ideas International politics Internet Dan McQuillan Wed, 04 Apr 2018 07:08:38 +0000 Dan McQuillan 117024 at https://www.opendemocracy.net Snowden to Cambridge Analytica – making the case for the social value of privacy https://www.opendemocracy.net/anita-gurumurthy-amrita-vasudevan/snowden-to-cambridge-analytica-making-case-for-social-value-of-pri <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p class="Standard">Constitutionally inculcated rights and morality are slowly being undone “by the use of automated processes to assess risk and allocate opportunity”.</p> </div> </div> </div> <p><a href="https://opendemocracy.net/hri"><img src="//cdn.opendemocracy.net/files/finalbannerhri2_0.jpg" alt="HRI" width="460px" /></a></p> 
<p class="Standard"><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/Zuck-Zuck.png" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/Zuck-Zuck.png" alt="lead " title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460"/></a> <span class='image_meta'><span class='image_title'>An image of Mark Zuckerberg in Shadow Ink app. 2016. Wikicommons/ Annika Laas. Some rights reserved.</span></span></span>After days of prodding by the media, Mark Zuckerberg has offered a <a href="https://www.theguardian.com/technology/2018/mar/21/mark-zuckerberg-response-facebook-cambridge-analytica">mea culpa</a>, apologizing for the <a href="https://www.techdirt.com/blog/?tag=damage+control">“breach of trust between Facebook and the people who share their data with us and expect us to protect it”.</a> </p> <p class="Standard">The news that Facebook shared user data with a number of organizations including Cambridge Analytica seems to reflect the paradox of surveillance society – that <a href="https://www.opendemocracy.net/digitaliberties/elinor-carmi/whose-data-is-it-anyway">our personal data is never safe</a>, but share it we must.</p> <p class="Standard">So once again, in a Snowdenish moment, we are hit by the revelation that Cambridge Analytica conducted behavioural modeling and psychographic profiling (creating personality profiles by gauging motives, interests, attitudes, beliefs, values etc.) based on data it collected, to successfully target – allegedly – Americans prior to the recent presidential election. </p> <p class="Standard">Meanwhile, in countries like India, political parties have <a href="http://www.business-standard.com/article/politics/bjp-and-congress-accuse-each-other-for-hiring-cambridge-analytics-118032101154_1.html">accused each other</a> of hiring Cambridge Analytica for their own election campaigns.</p> <h2 class="Standard"><strong>The soothsayer</strong></h2> <p class="Standard"><a href="https://www.facebook.com/full_data_use_policy">Facebook </a><a href="https://www.facebook.com/full_data_use_policy">collects</a> all kinds of social data about its users, like their relationship status, place of work, colleagues, last time they visited their parents, songs they like listening to, as well as other kinds of information such as device data, websites visited from the platform etc. </p> <p class="Standard">This may be information that is shared by the user or what their friends may share about them on the platform. That aside, let us not forget that Facebook has bought WhatsApp and Instagram and can tap into data from those platforms, apart from the data it buys from data brokers.</p> <p class="Standard">Data that is collected is used to draw up the profile of users – a detailed picture of the persona that emerges by piecing together known activity and aptitude and generating predictions about possible proclivities and predispositions. <span class="mag-quote-center">That big data can be put to use in ways that reinforce ‘social and cultural segregation and exclusion’ is fairly well accepted now.</span></p> <p class="Standard">The mechanics of big data thus recreate the sum total of user traits and attributes – without necessarily verifying them per user. 
<p class="Standard">You may merely be ‘liking’ an article on <a href="https://news.nationalgeographic.com/2018/03/northern-white-rhino-male-sudan-death-extinction-spd/">the last male white Rhino</a>, but Facebook <a href="http://www.pnas.org/content/110/15/5802.full">will predict</a> with a fair amount of accuracy your political affiliation and sexual orientation, using algorithmic modelling to nudge you to buy the thing you are most likely to buy. Hyper-segmentation based on social media profiling can also be used to create a consumer base for political messaging, as has been suggested in the case of Cambridge Analytica. </p> <h2 class="Standard"><strong>The target</strong></h2> <p class="Standard"><a href="https://www.opendemocracy.net/burcu-kilic-renata-avila/digital-giants-trading-away-our-right-to-privacy">Many digital corporations, including Uber, Twitter and Microsoft, sell their data to third parties who build apps and provide services on top of it</a>. With machine learning, the targeting of individuals assumes new dimensions; it becomes possible to do <a href="https://www.researchgate.net/publication/270107608_Advertising_Microtargeting_and_Social_Media">nanotargeting</a>, that is, to zoom in precisely on one individual. </p> <p class="Standard">A fintech startup in India rejected an applicant after uncovering that she had actually filed for a loan on behalf of her live-in partner, who was unemployed. The boyfriend’s loan request had been rejected earlier. The <a href="https://economictimes.indiatimes.com/small-biz/startups/need-small-loans-these-fintech-startups-are-tracking-your-moves/articleshow/59553055.cms">start-up’s machine learning algorithm</a> had used GPS and social media data – both of which the duo had given permissions for while downloading the app – to make the connection that they were in a relationship.</p> <p class="Standard">That big data can be put to use in ways that reinforce ‘<a href="https://edps.europa.eu/sites/edp/files/publication/15-11-19_big_data_en.pdf">social and cultural segregation and exclusion</a>’ is fairly well accepted now. This slippery slope, from micro-targeting to the algorithmic allocation of opportunities and privileges, poses serious concerns. A <a href="https://www.propublica.org/article/facebook-doesnt-tell-users-everything-it-really-knows-about-them">ProPublica investigation from 2016</a> collected more than 52,000 unique attributes that Facebook used to classify users into micro-targetable groups. It then bought a Facebook advertisement for housing that demonstrated how it was possible to exclude African-Americans, Hispanics and Asian-Americans.
As <a href="http://raley.english.ucsb.edu/wp-content/Engl800/Pasquale-blackbox.pdf">Frank Pasquale notes, constitutionally inculcated rights and morality is slowly being undone “by the use of automated processes to assess risk and allocate opportunity”.</a></p> <h2 class="Standard"><strong>The public sphere</strong></h2> <p class="Standard">This brings us to the question of what the social role of digital intelligence means for the future of democracy. Elections play a vital role in a robust democracy such as India. We seek to safeguard their free and fair nature through regulations that impose <a href="http://indianexpress.com/article/explained/exit-polls-and-why-they-are-restricted-by-the-panel-dainik-jagran-editor-arrest-4527055/">restriction</a><a href="http://indianexpress.com/article/explained/exit-polls-and-why-they-are-restricted-by-the-panel-dainik-jagran-editor-arrest-4527055/">s</a><a href="http://indianexpress.com/article/explained/exit-polls-and-why-they-are-restricted-by-the-panel-dainik-jagran-editor-arrest-4527055/"> on exit polling</a> or call out parties for unduly influencing voters through the distribution of freebies. </p> <p class="Standard">Wouldn’t then, a nudging of voters through intimate knowledge of their behaviour be a threat to this socio-political hygiene we seek to maintain? Can we allow the replacement of the will of the people by a market democracy in which the masses can be gamed? In the wake of the scandal, <a href="http://www.firstpost.com/politics/cambridge-analytica-row-ec-to-coordinate-with-enforcement-agencies-to-prevent-unlawful-electoral-activities-4403117.html">the Election Commission </a><a href="http://www.firstpost.com/politics/cambridge-analytica-row-ec-to-coordinate-with-enforcement-agencies-to-prevent-unlawful-electoral-activities-4403117.html">in India</a> has said that it will come up with recommendations on how to prevent such unlawful activities. <span class="mag-quote-center">Can we allow the replacement of the will of the people by a market democracy in which the masses can be gamed?</span></p> <p class="Standard">However, beyond electoral fairness, there are severe repercussions for the sanctity of the public sphere in the rapidly unfolding role of algorithms. When people know that online behaviour is monitored, they carefully moderate how they interact online, a phenomenon referred to as <a href="https://www.socialcooling.com/">social cooling.</a></p> <h2 class="Standard"><strong>The collusion</strong></h2> <p class="Standard">The Cambridge Analytica episode bears a close resemblance to the <a href="https://www.opendemocracy.net/digitaliberties/carly-nyst-rosemary-bechler/after-snowden-can-technology-save-our-digital-liberties">Snowden disclosure of the unholy nexus of the state, private corporations, and uninhibited surveillance</a>. India has already succeeded in building a ‘cradle to grave’ panspectron by seeding citizens’ unique biometric identifier across data bases. The Aadhaar unique ID project has allowed for an <a href="https://books.google.co.in/books?id=J9f1AwAAQBAJ&amp;printsec=frontcover&amp;dq=posthumanism&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwi-1-S6tILaAhWJQY8KHfWmA5wQ6AEILjAB#v=onepage&amp;q=posthumanism&amp;f=false">informatization of life</a> whereby “...the human body is reduced to a set of numbers that can be stored, retrieved and reconstituted across terminals, screens and interfaces”. 
</p> <p class="Standard">With this biometric, the body can never disassociate with its data, and may be recalled, whenever convenient, sans ‘the individual’. To add psychographic data akin to a ‘behavioural’ biometric to this mix is to endow a ‘God’s eye view’ of society to the state, one that the state is bound to abuse to determine the human condition. </p> <p class="Standard">For instance, China’s profoundly disturbing, <a href="https://www.wired.com/story/age-of-social-credit/">Sesame Credit</a> uses citizen data including data from everyday transactions, biometric data, etc., to dole out instant karma.</p> <p class="Standard">From the other end, corporations who already collect behavioural data are <a href="https://scroll.in/article/823274/how-private-companies-are-using-aadhaar-to-deliver-better-services-but-theres-a-catch">keen on accessing Aadhaar data</a>, for this will allow them to trade data around a unique data point to attain, much like the state, a 360 degree view of their customers.&nbsp; </p> <p class="Standard">Facebook has faced criticism in the past for experimenting with users’ emotions, using unethical <a href="https://www.theguardian.com/technology/2014/jun/29/facebook-users-emotions-news-feeds">manipulati</a><a href="https://www.theguardian.com/technology/2014/jun/29/facebook-users-emotions-news-feeds">on of</a><a href="https://www.theguardian.com/technology/2014/jun/29/facebook-users-emotions-news-feeds"> </a><a href="https://www.theguardian.com/technology/2014/jun/29/facebook-users-emotions-news-feeds">information</a> to influence the moods of users. The plausibility of nano-surveillance raises fundamental philosophical questions about society and human agency, calling attention to the urgent task of reining in the data capitalists. <span class="mag-quote-center">The plausibility of nano-surveillance raises fundamental philosophical questions about society and human agency, calling attention to the urgent task of reining in the data capitalists.&nbsp; </span></p> <h2 class="Standard"><strong>The norm</strong></h2> <p class="Standard">While digital corporations claim to audit how third parties may use the data they have shared, <a href="https://gadgets.ndtv.com/social-networking/features/facebook-cambridge-analytica-scandal-fallout-on-other-tech-companies-1827408">monitoring is lax</a>. India, for instance, does have rules on <a href="http://www.wipo.int/edocs/lexdocs/laws/en/in/in098en.pdf">data shari</a><a href="http://www.wipo.int/edocs/lexdocs/laws/en/in/in098en.pdf">ng</a>; however, these pertain to a predefined list of data types. Traditionally, data protection legislations have focused on ‘personally identifiable information’ (PII), but with technological advances - <em>a la</em> big data analysis and Artificial Intelligence – <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1909366">what is or is not PII</a> is contested. The law therefore needs to be re-imagined to suit contemporary techniques of data analysis.</p> <p class="Standard">Besides the fact of unencumbered data sharing, that Facebook was able to collect such vast amounts and varied kinds of data is itself unsettling. To prevent the frightening prospect of future society being reduced to an aggregate of manipulated data points, it may well be necessary to determine that certain kinds of data will not be collected and certain types of data processing will not be done. 
</p> <p class="Standard">Restrictions on collection and use can be sector specific, based on well debated <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2567042">social norms</a> and constitutionally driven. &nbsp;For example, Delhi High Court recently upheld the judgment on the grounds of the right to health, that excluding genetic disorders from insurance policies is illegal.</p> <p class="Standard">It is time we moved from individual centered notions of privacy where the ‘user’ is constantly asked to barter the right for entitlements, credits, and convenience<span style="text-decoration: line-through;">. </span>Theories about how individuals must be empowered to monetise the plentiful data they generate and get easier credit, better healthcare, better skills and welfare benefits are a recipe for a disempowered society left to the whims of neo-liberal market democracy. The <a href="http://www.firstpost.com/tech/news-analysis/big-brother-getting-bigger-the-privacy-issues-surrounding-aadhaar-are-worrying-3700365.html">social value of privacy</a> needs to be spotlighted, for it urges us to look not only at the individual’s rights over data, but the social benefits that we derive from their recognition.</p> <p class="Standard">In so far as societies are the products of behavioural modelling, Zuckerberg’s apology does not really count.</p> <p class="Standard"><em>An <a href="http://www.firstpost.com/politics/cambridge-analytica-row-ec-to-coordinate-with-enforcement-agencies-to-prevent-unlawful-electoral-activities-4403117.html">earlier version of this article</a> first appeared in Firstpost on March 23, 2018. </em></p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-read-on"> <div class="field-label"> 'Read On' Sidebox:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <p><a href="https://opendemocracy.net/hri"><img src="//cdn.opendemocracy.net/files/smallhribanner.jpg" alt="" /></a></p> <p>More from the <a href="https://opendemocracy.net/hri">Human Rights and the Internet</a> partnership.</p> </div> </div> </div> <div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/neal-curtis/dreadful-symmetry-kill-boxes-racism-and-us-sovereign-power-in-digital-age">Dreadful symmetry: kill boxes, racism and US sovereign power in the digital age</a> </div> <div class="field-item even"> <a href="/dan-mcquillan/manifesto-on-algorithmic-humanitarianism">Manifesto on algorithmic humanitarianism</a> </div> </div> </div> </fieldset> <div class="field field-country"> <div class="field-label"> Country or region:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> India </div> <div class="field-item even"> United States </div> </div> </div> <div class="field field-topics"> <div class="field-label">Topics:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> Civil society </div> <div class="field-item even"> Culture </div> <div class="field-item odd"> Democracy and government </div> <div class="field-item even"> International politics </div> <div class="field-item odd"> Internet </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties hri United States India Civil society Culture Democracy and government International politics Internet Amrita Vasudevan Anita 
Gurumurthy Wed, 04 Apr 2018 06:18:30 +0000 Anita Gurumurthy and Amrita Vasudevan 117022 at https://www.opendemocracy.net Cambridge Analytica: the outrage is the real story https://www.opendemocracy.net/marcus-gilroy-ware/cambridge-analytica-outrage-is-real-story <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>The bitter pill many refuse to swallow shows the difference between the world we think we’re in, and the one we really inhabit.</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/640px-Edward_Christopher_„Ed“_Sheeran_at_Southside_Festival_2014_in_Neuhausen_ob_Eck.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/640px-Edward_Christopher_„Ed“_Sheeran_at_Southside_Festival_2014_in_Neuhausen_ob_Eck.jpg" alt="lead " title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Ed Sheeran on stage at the Southside Festival in Germany, June 2014. Wikicommons/ Markus Hillgärtner. Some rights reserved.</span></span></span>Every so often, moments come along when what seemed like a tinfoil-hat conspiracy theory is confirmed as true, and people are forced to say goodbye to the world they thought they lived in and adjust to the one they really lived in all along. </p> <p>Widespread outrage in response to recent disclosure of new details about Cambridge Analytica has all the appearance of such a moment. These types of events have become more common recently: whether it is a new wave of terrorist attacks on European soil, preventable tragedies such as the Grenfell Tower disaster, major electoral outcomes such as the British vote to leave the European Union and Donald Trump’s victory, or technology whistleblowing events such as this one, the pace at which we are continually having to re-adjust to the world we actually live in seems alarmingly high, and yet all were predictable. </p> <p>No doubt, it is a dismaying picture that confronts us: British company SCL Group, operating under the brand name Cambridge Analytica with the supervision of Steve Bannon, obtained data collected from Facebook by Cambridge University academic Aleksandr Kogan, and used systems built by data scientist and whistleblower-to-be Chris Wylie to train its microtargeting algorithms to nudge scores of already-angry voters towards electing Donald Trump and leaving the European Union – a set of experiments largely bankrolled by US hedge-fund billionaire Robert Mercer, 90% owner of Cambridge Analytica.</p> <p>Public reaction to this complex picture has been reminiscent of the last time there was widespread outrage about social media in political life – the revelations made by Edward Snowden in June 2013.
And it’s almost uncanny timing that just as that social media-related whistleblowing scandal was made public and the world was meeting Snowden for the first time, Chris Wylie was beginning his employment with SCL Group and the next outrage was being quietly set in motion.</p> <p>What many people came face to face with in that moment was that social media were not the innocent frivolity we thought they were, as if Facebook’s more-than-$100bn initial public offering on the NASDAQ hadn’t already told us this.</p> <p>It turned out that some of the very same powerful entities that have long structured the world – in this case the state security agencies that underpin some of the world’s governments – were intimately connected to our innocuous social media tomfoolery. When you told Facebook you liked Ed Sheeran, or checked in to let your friends know that you were at the London Zoo penguin enclosure, GCHQ and the NSA would know as well, if they were interested. In the minds of populations across the world, social media had changed irrevocably. <span class="mag-quote-center">We are always resistant in these moments of readjustment, and this time will be no exception.</span></p> <p>Or had it? We carried on using social media <em>en masse</em> – in fact our use of them increased. Writing in the London Review of Books in August 2017, shortly after Facebook crossed its 2 billion user threshold, <a href="https://www.lrb.co.uk/v39/n16/john-lanchester/you-are-the-product">John Lanchester observed</a> that it was not only the number of users that was increasing, but the degree of engagement: “In the far distant days of October 2012, when Facebook hit one billion users, 55 per cent of them were using it every day. At two billion, 66 per cent are.” We are always resistant in these moments of readjustment, and this time will be no exception.</p> <h2><strong>The limits of consent</strong></h2> <p>A common, if contrary, response from some quarters when Snowden’s leak was in the news was: why is it that we are happy for Facebook to know our whereabouts, but freak out when governments have the same information? One answer might be consent: it’s ok for Facebook to have our data precisely because we choose to give it to them, unlike with the security services. And that’s a question and answer that we might apply directly to the present case of Cambridge Analytica and Facebook data.</p> <p>But there’s a serious flaw in this logic. Our relationship with Facebook was never a straightforward one in which we simply entrust them with our data, and they keep it safe like some kindly uncle taking care of a bag of sweeties. If Facebook was ever that way, it was a long time ago when only a few Harvard students knew what it was, but even then <a href="https://www.esquire.com/uk/latest-news/a19490586/mark-zuckerberg-called-people-who-handed-over-their-data-dumb-f/">Mark Zuckerberg was referring</a> to fellow-students as “dumb fucks” for being so naïve. It is helpful to remember this naïveté now, because it is a persistent feature of social media’s power.</p> <p>The Cambridge Analytica story isn’t only about social media. It’s got bribery, honey traps, corruption and many other concerning elements that long precede the anthropomorphised bunny rabbit gifs and food porn of social media.
Once again, besides the shock revelation that seemingly democratic votes are vulnerable to well-organised propaganda efforts, the surprise seems to be that these more recognisable elements of white-collar deviancy have turned out to be in cahoots with the digital technologies we have come to trust and have intimately integrated into our own lives. </p> <p>But what did we really think would happen when the worst aspects of Silicon Valley, a cynical Etonian establishment, reactionary Anglo-American nationalism and hedge-fund capital found each other? As Mark Fisher once said, “Many of what we call ‘conspiracies’ are the ruling class showing class solidarity.” <span class="mag-quote-center">“Many of what we call ‘conspiracies’ are the ruling class showing class solidarity.”</span></p> <p>Perhaps a more important question to ask is: why do we carry on being shocked when social media’s centrality in our attention and emotional lives doesn’t go well for us? </p> <p>For some reason, technology companies and their products are treated differently to other corporations and their products. When we deal with the Coca-Cola Company, Philip Morris or McDonald’s, we have an idea of whom we’re dealing with. At least when you’re buying a Coca-Cola you know it can melt your teeth (never mind all the other things sugar does to the body), and when you buy cigarettes you know they are likely to give you lung cancer. Nobody thinks Big Macs are good for them. </p> <p>The last few decades show a clear story of how we prised these facts out of the grasp of the corporations that wanted them concealed or de-emphasised, and adjusted our expectations accordingly. But somehow large numbers of people have continued to think that when they use Facebook their situation as a consumer is materially better. As Aral Balkan, Shoshana Zuboff and others have highlighted, it isn’t better – it’s actually worse.</p> <p>Facebook’s entire product is manipulation: the exploitation of its users’ emotional reactions by those with something to push. That is how it makes nearly $200k profit per quarter <em>per employee</em>, more than any other technology company. </p> <p>When Facebook’s customers were the Coca-Colas, McDonald’s and Unilevers of the world, nobody appeared to mind as long as they could carry on mindlessly scrolling through a stream of emotionally stimulating media as a means of distracting themselves from the hopeless emptiness of Anglo-American late capitalism. </p> <p>But when the inevitable happened and the same system was used to influence political change by people who had already been doing so by other means for 25 years, suddenly #DeleteFacebook is such a mainstream idea that BBC Radio 4 is asking people – ironically via its Facebook page – whether they intend to carry it out.</p> <p>The Cambridge Analytica story is no more a shock than Facebook <a href="https://www.propublica.org/article/facebook-advertising-discrimination-housing-race-sex-national-origin">allowing its advertisers</a> to sell things – including housing – only to white people. It’s no more a shock than the fact that, according to the UN, Facebook had a “determining role” in <a href="https://www.reuters.com/article/us-myanmar-rohingya-facebook/u-n-investigators-cite-facebook-role-in-myanmar-crisis-idUSKCN1GO2PN">spreading hatred</a> against the heavily persecuted Rohingya people of Rakhine state in Myanmar either.
These things are only a shock if you’re unaware of what technology, capital, and a complete lack of ethics produce when, inevitably, they combine.</p> <h2><strong>When the majority isn’t right</strong></h2> <p>There is one person, however, who should rightly be congratulated in this moment, for whom the only shock is probably that the world finally listened: Observer reporter Carole Cadwalladr. Again and again, her tenacious reporting has been the bitter pill many refused to swallow, showing the difference between the world we think we’re in and the one we really inhabit. It is in this space that genuine democracy is most vulnerable, and yet ironically, the lesson from her reporting is that a majority of wishful thinkers are not always right.</p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/mary-fitzgerald-peter-york-carole-cadwalladr-james-patrick/dark-money-deep-data-voicing-dissent">Dark Money Deep Data</a> </div> <div class="field-item even"> <a href="/digitaliberties/people-before-party-having-actual-conversations-in-digital-era">From Obama to Cambridge Analytica: how did we get here? (Podcast)</a> </div> <div class="field-item odd"> <a href="/uk/brexitinc/adam-ramsay/cambridge-analytica-is-what-happens-when-you-privatise-military-propaganda">Cambridge Analytica is what happens when you privatise military propaganda</a> </div> <div class="field-item even"> <a href="/can-europe-make-it/antonis-vradis/don-t-call-it-echo-chamber-it-s-spatial-contract">Don’t call it an echo chamber – it’s a spatial contract</a> </div> </div> </div> </fieldset> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties Can Europe make it? Marcus J. Gilroy-Ware Thu, 29 Mar 2018 06:57:37 +0000 Marcus J. Gilroy-Ware 116935 at https://www.opendemocracy.net Cambridge Analytica is what happens when you privatise military propaganda https://www.opendemocracy.net/uk/brexitinc/adam-ramsay/cambridge-analytica-is-what-happens-when-you-privatise-military-propaganda <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>You can't understand the Cambridge Analytica scandal until you understand what its parent company does.</p> </div> </div> </div> <p dir="ltr"><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/553846/640px-UStanks_baghdad_2003.JPEG" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/553846/640px-UStanks_baghdad_2003.JPEG" alt="" title="" width="460" height="300" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style=""/></a> <span class='image_meta'><span class='image_title'>US tanks arriving in Baghdad in 2003, by Technical Sergeant John L. Houghton, Jr., United States Air Force, public domain.</span></span></span></p><p dir="ltr">"The Gulf War Did Not Take Place".
This audacious claim was made by the French philosopher Jean Baudrillard in March 1991, only two months after NATO forces had rained explosives on Iraq, shedding the blood of more than a hundred thousand people.</p><p dir="ltr">To understand Cambridge Analytica and its parent firm, Strategic Communication Laboratories, we need to get our heads round what Baudrillard meant, and what has happened since: how military propaganda has changed with technology, how war has been privatised, and how imperialism is coming home.</p><p dir="ltr">Baudrillard's argument centred on the fact that NATO's action in the Gulf was the first time audiences in Western countries had been able to watch a war live, on rolling TV news – CNN had become the first 24-hour news channel in 1980. Because camera crews were embedded with American troops, by whom they were effectively censored, the coverage bore little resemblance to the reality of the bombardment of Iraq and Kuwait. The events known to Western audiences as "The Gulf War"&nbsp;–&nbsp;symbolised by camera footage from 'precision' missiles and footage of military hardware&nbsp;–&nbsp;are more accurately understood as a movie directed from the Pentagon. They were so removed from the gore-splattered reality that it's an abuse of language to call them the same thing. Hence, the "Gulf War" did not take place.</p><p dir="ltr"> <iframe frameborder="0" height="315" width="560" allow="autoplay; encrypted-media" src="https://www.youtube.com/embed/RhpgCaPoBaE"></iframe> <i>(You can see a classic example of this footage courtesy of the Smithsonian Channel)</i></p><p dir="ltr">Not long after Baudrillard’s iconic essay was published, Strategic Communications Laboratories was founded. "SCL Group provides data, analytics and strategy to governments and military organisations worldwide" reads the first line of its website. "For over 25 years, we have conducted behavioural change programmes in over 60 countries &amp; have been formally recognised for our work in defence and social change.”</p><p dir="ltr">Of course, military propaganda was nothing new. And nor is the extent to which it has evolved alongside changes in media technology and economics. The film Citizen Kane tells a fictionalised version of the first tabloid (or, as Americans call it, 'yellow journalism') war: how the circulation battle between William Randolph Hearst's New York Journal and Joseph Pulitzer's New York World arguably drove the US into the 1898 Spanish-American War. It was during this affair that Hearst reportedly told his correspondent, "You furnish the pictures and I'll furnish the war", as parodied in Evelyn Waugh's <i>Scoop</i>.
But after the propaganda disaster of the <a href="https://en.wikipedia.org/wiki/Tet_Offensive">Tet Offensive</a> in Vietnam softened domestic support for the war, the military planners began to devise new ways to control media reporting.</p><p dir="ltr"><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/553846/640px-Cholon_after_Tet_Offensive_operations_1968.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/553846/640px-Cholon_after_Tet_Offensive_operations_1968.jpg" alt="" title="" width="460" height="302" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style=""/></a> <span class='image_meta'><span class='image_title'>Civilians sort through the ruins of their homes in Cholon, the heavily damaged Chinese section of Saigon. By Meyerson, Joel D, Wikimedia</span></span></span></p><p dir="ltr">As a result, when Britain went to war with Argentina over the Falklands in 1982, its military planners pioneered a new technique for media control: embedding journalists with troops. And, as former BBC war reporter Caroline Wyatt <a href="http://www.bbc.co.uk/blogs/collegeofjournalism/entries/36887f1a-a3a8-3005-a642-1ce7dcccf60b">blogged</a>, "The lessons from embedding journalists with the Royal Navy during the Falklands war were taken up enthusiastically by military planners in both Washington and London for the First Gulf War in 1991."</p><p dir="ltr">The UK defence secretary during the Falklands War, when the use of embedded journalists was pioneered, was John Nott (who backed Brexit). As my colleague Caroline Molloy pointed out to me, his son-in-law is Tory MP Hugo Swire, former minister in both the Northern Ireland Office and the Foreign Office. Swire's <a href="http://www.thepeerage.com/p49322.htm" title="http://www.thepeerage.com/p49322.htm">cousin</a>&nbsp;–&nbsp;with whom he would have overlapped at Eton&nbsp;–&nbsp;is Nigel Oakes, founder of Strategic Communications Laboratories. It's not a conspiracy, just that the ruling class are all related.</p><p dir="ltr">But back to our history: by the time of the 2003 Iraq War, communications technology had moved on again. As the BBC's Caroline Wyatt explains in the same blog, "satellite communications are now much more sophisticated, meaning we almost always have our own means of communicating with London. That offers a crucial measure of independence, even if reports still have to be cleared for 'op sec' [operational security]. The almost total control by the military of the means of reporting in the Falklands would be unthinkable in most warzones today."</p><p dir="ltr">In February 2004, another major disruption in communications technology began: Facebook was founded. And with it came a whole new propaganda nightmare.</p><p dir="ltr">At the same time as this history was unfolding, though, something else vital was happening: neoliberalism.</p><p dir="ltr">Looked at one way, neoliberalism is the successor to geographical imperialism as the "most extreme form of capitalism". It used to be that someone with a small fortune to invest could secure the biggest return by paying someone else to sail overseas, subjugate or kill people (usually people of colour) and steal them and/or their stuff. But they couldn't keep expanding forever&nbsp;–&nbsp;the world is only so big.
And so eventually, wealthy Western investors started to shift much of their focus from opening new markets in 'far-off lands' to marketising new parts of life at home. Neoliberalism is also therefore this process of marketisation: of shifting decisions from one person one vote, to one pound (or dollar or yen or euro) one vote. Or, as Will Davies puts it: "<a href="http://blogs.lse.ac.uk/europpblog/2017/04/30/essay-populism-and-the-limits-of-neoliberalism-by-william-davies/">the disenchantment of politics by economics</a>”.</p><p dir="ltr">The first Iraq War&nbsp;–&nbsp;the one that “did not take place”&nbsp;–&nbsp;coincided with a key stage in this process: the rapid marketisation (read 'asset stripping') of the collapsing Soviet Union, and so the successful encirclement of the globe by Western capital. The second Iraq War was notable for the acceleration of another key stage: the encroachment of market forces into the deepest corner of the state. During the invasions of Iraq and Afghanistan, according to the campaign group War on Want, private military companies "burst onto the scene".</p><h2 dir="ltr">The privatisation of war</h2><p dir="ltr">In a 2016 report,&nbsp;<a href="https://waronwant.org/Mercenaries-Unleashed">War on Want</a> describes how the UK became the world centre for this mercenary industry. You might know G4S as the company which checks your gas meter, but they are primarily the world's largest mercenary firm, involved in providing 'security' in war zones across the planet (don’t miss my colleagues Clare Sambrook and Rebecca Omonira-Oyekanmi’s <a href="https://opendemocracy.net/shinealight/g4s-securing-whose-world">excellent investigations</a> of their work in the UK).</p><p dir="ltr">In Hereford alone, near the SAS headquarters, there are 14 mercenary firms, according to War on Want's report. At the height of the Iraq war, around 80 private companies were involved in the occupation. In 2003, when UK and US forces unleashed "shock and awe" both on the Iraqi people and on their own populations down cable TV wires, the Foreign Office spent £12.6m on British private security firms, according to official figures highlighted <a href="https://www.theguardian.com/business/2016/feb/03/britain-g4s-at-centre-of-global-mercenary-industry-says-charity">by the Guardian</a>. By 2012, that figure had risen to £48.9m. In 2015, G4S alone secured a £100m contract to provide security for the British embassy in Afghanistan.</p><p dir="ltr"><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/553846/Screen Shot 2018-03-28 at 16.34.46.png" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/553846/Screen Shot 2018-03-28 at 16.34.46.png" alt="" title="" width="460" height="165" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style=""/></a> <span class='image_meta'></span></span></p><p dir="ltr">And just as the fighting was privatised, so too was the propaganda. In 2016,<a href="https://www.thebureauinvestigates.com/stories/2016-10-02/fake-news-and-false-flags-how-the-pentagon-paid-a-british-pr-firm-500m-for-top-secret-iraq-propaganda"> the Bureau of Investigative Journalism</a> revealed that the Pentagon had paid around half a billion dollars to the British PR firm Bell Pottinger to deliver propaganda during the Iraq war.
Bell Pottinger, famous for shaping Thatcher’s image, included among its clients Asma al-Assad, wife of the Syrian president. Part of their work was making fake Al Qaeda propaganda films. (The firm was <a href="https://www.theguardian.com/media/2017/sep/12/bell-pottinger-goes-into-administration">forced to close last year</a> because they made the mistake of deploying their tactics against white people.)</p><p dir="ltr">Journalist Liam O’Hare<a href="http://bellacaledonia.org.uk/2018/03/20/scl-a-very-british-coup/"> has revealed</a> that Mark Turnbull, the SCL and Cambridge Analytica director who was filmed alongside Alexander Nix in the Channel 4 sting, was employed by Bell Pottinger in Iraq in this period.</p><p class="mag-quote-left" dir="ltr">The psychological operations wing of our privatised military: a mercenary propaganda agency.</p><p dir="ltr">Like Bell Pottinger, SCL saw the opportunity of the increasing privatisation of war. In his 2006 book “<a href="https://books.google.co.uk/books?id=K1oiAQAAMAAJ&amp;q=%22strategic+communications+laboratories%22&amp;dq=%22strategic+communications+laboratories%22&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwjThLW06o7aAhUkJ8AKHdSyBiAQ6AEINDAC">Britain’s Power Elites: The Rebirth of the Ruling Class</a>”, Hywel Williams wrote: “It therefore seems only natural that a political communications consultancy, Strategic Communications Laboratories, should have now launched itself as the first private company to provide 'psyops' to the military.”</p><p dir="ltr">While much of what SCL has done for the military is secret, we do know (thanks, again, to<a href="http://bellacaledonia.org.uk/2018/03/20/scl-a-very-british-coup/"> O’Hare</a>) that it’s had contracts from the UK and US departments of defence amounting to (at the very least) hundreds of thousands of dollars. And a document from the National Defence Academy of Latvia that I<a href="http://www.naa.mil.lv/~/media/NAA/AZPC/Publikacijas/DSPC%20PP%201%20-%20NATO%20StratCom.ashx"> managed to dig out</a>, entitled “NATO strategic communication: more to be done?” tells us that SCL was operating in Afghanistan in 2010, and gives some clues about what it was up to:</p><p dir="ltr">“more detailed qualitative data gathering operation was being conducted in Maiwand Province by a British company, Strategic Communication Laboratories (SCL) is almost unique in the international contractor community in that it has a dedicated, and funded, behavioural research arm located in the prestigious home of British Science and research, The Royal Institute, London.”</p><p dir="ltr">In simple terms, the SCL Group – Cambridge Analytica’s parent firm – is the psychological operations wing of our privatised military: a mercenary propaganda agency.</p><p dir="ltr">The skills SCL developed in the context of warzones shouldn’t be overplayed, but nor should they be underplayed. As far as we can tell, just as the Pentagon used simple tools like choosing where to embed journalists during the Gulf War to spin its version of events, so SCL mastered the tools of modern communication: Facebook, online videos, data gathering and microtargeting. Such tools aren’t magic (and Anthony Barnett <a href="https://opendemocracy.net/anthony-barnett/how-should-we-think-about-roles-cambridge-analytica-facebook-russia-and-shady-billio">writes well</a> about the risks of implying that they are).
They don’t on their own explain either Brexit or Trump (I wrote <a href="https://opendemocracy.net/uk/brexitinc/adam-ramsay/remainers-dont-use-our-investigations-as-excuse">a plea</a> last year that Remainers in the UK don’t use our investigations as an excuse for failing to engage with the real reasons for the Leave vote). I wouldn’t even use the word “rigging” to describe the impact of these propaganda firms. But they are important.</p><p dir="ltr">As the<a href="https://www.channel4.com/news/data-democracy-and-dirty-tricks-cambridge-analytica-uncovered-investigation-expose"> Channel 4 undercover investigation</a> revealed, this work has often been carried out alongside more traditional smear tactics, and&nbsp;–&nbsp;as Chris Wylie explained&nbsp;–&nbsp;in partnership with another nexus in this world: Israel’s cluster of private intelligence firms, part of a burgeoning military-industrial complex in the country which Israeli activist and writer<a href="https://www.jstor.org/stable/j.ctt183pct7"> Jeff Halper argues</a> is a key part of the country’s “parallel diplomacy” drive.</p><p dir="ltr">(Of course, this isn't unique to the UK and Israel. Until Cambridge Analytica achieved global infamy last week, the most prominent mercenary intelligence firm in the world was Peter Thiel's company<a href="https://en.wikipedia.org/wiki/Palantir_Technologies"> Palantir</a> (named after the seeing-stones in The Lord of the Rings). Thiel, a founder of PayPal (with Elon Musk) and an early investor in and board member of Facebook, wrote a notorious<a href="https://www.cato-unbound.org/2009/04/13/peter-thiel/education-libertarian"> essay in 2009</a> arguing that female enfranchisement had made democracy untenable and that someone should therefore invent the technology to destroy it. Palantir’s most prominent clients are the United States Intelligence Community and the US Department of Defence. Cambridge Analytica whistleblower Chris Wylie claimed this week that his firm had worked with Palantir. It’s also noteworthy that one of Palantir's shareholders is Field Marshal Lord Guthrie, former head of the British Army and an adviser to<a href="https://opendemocracy.net/uk/brexitinc/adam-ramsay/who-are-veterans-for-britain"> Veterans for Britain, one of the groups which funnelled money to AggregateIQ</a> ahead of the European referendum. Guthrie also works for Arcanum, one of the leading private intelligence agencies, which, in common with Cambridge Analytica's partners<a href="https://www.blackcube.com/board/"> Black Cube</a>,<a href="https://www.timesofisrael.com/meir-dagan-corporate-spy/"> listed</a> Meir Dagan, the former head of Mossad, as one of its advisers until he died in 2016. Again, it's not a conspiracy, it's just that these guys all know each other. But I digress.)</p><p dir="ltr">Back to SCL: why are NATO's mercenary propagandists getting involved in the US presidential election and&nbsp;–&nbsp;if the growing body of evidence about the link between Cambridge Analytica and AggregateIQ is to be believed&nbsp;–&nbsp;Brexit?</p><p dir="ltr">The obvious answer is surely at least partly true. They could make money doing so, and so they did. If you privatise war, don't be surprised if military firms start using the tools of war on 'their own' side. When Eisenhower warned of the Military Industrial Complex, he was thinking about physical weapons.
But, just as unregulated semi-automatics invented for soldiers end up going off in American schools, it shouldn't be any kind of surprise that the weapons of information war are going off in Anglo-American votes.</p><p dir="ltr">But in a more general sense, this whole history is exactly what Brexit was about for many of the powerful people who pushed for it. As we’ve been investigating the secret donation which paid for the DUP Brexit campaign, we keep coming across this web of connections. Priti Patel worked for Bell Pottinger in Bahrain. Richard Cook, the front man for the secret donation to the DUP, set up a business in 2013 with the former head of Saudi intelligence and a Danish man involved in running guns to Hindu radicals, who told us he was a spy. David Banks, who ran Veterans for Britain, worked in PR in the Middle East for four years – and Veterans for Britain more generally is full of these contacts.</p><p dir="ltr">I could go on. My suspicion is that this isn’t because there’s some kind of conspiracy revolving around a group of ex-spooks. It’s about the fact that power comes from networks of people, and the wing of the British ruling class which was in and around the military is moving rapidly into the world of privatised war. And those people have a strong ideological and material interest in radical right politics.</p><h2 dir="ltr">"The most corrupt country on Earth"</h2><p dir="ltr">Another way to see it is like this: Britain has lost most of its geographical empire. And most of our modern politics is about the ways in which different groups struggle to come to terms with that fact. For a large portion of the ruling establishment, this involves attempting to reprise the glory days by placing the country at the centre of two of the nexuses which define the modern era. </p><p dir="ltr">The UK and its Overseas Territories have already become by far the most significant network of tax havens and secrecy areas in the world, making us the global centre for money laundering and therefore, as Roberto Saviano, the leading expert on the mafia, argues, the <a href="https://www.theguardian.com/books/2016/may/29/roberto-saviano-london-is-heart-of-global-financial-corruption">most corrupt country on earth</a>. And just as countries with major oil industries have major oil lobbies, the UK has a major money laundry lobby.</p><p dir="ltr">Pesky EU regulations have long frustrated the dreams of these people, who wish our island nation to move even further offshore and become even more of a tax haven. And so for some Brexiteers&nbsp;–&nbsp;this money laundry lobby&nbsp;–&nbsp;there was always a strong incentive to back a Leave vote: European Research Group statements going back 25 years show as much.</p><p dir="ltr">But what the Cambridge Analytica affair reminds us of is that this is not just about the money laundry lobby (nor the <a href="https://www.opendemocracy.net/uk/adam-ramsay/big-agriculture-s-brexiteers-are-pulling-wool-over-our-eyes">agrochemical lobby</a>). Another group with a strong interest in pushing such deregulation, dimming transparency, hyping Islamophobia in America and turning people against each other is our flourishing mercenary complex – one of the few other industries in which Britain leads the world.
And so it's no surprise that its propaganda wing has turned the skills it's learned in war towards its desired political outcomes.</p><p dir="ltr">In his essay, Baudrillard argued that his observations about the changes in military propaganda told us something about the then-new post-Cold War era. Writing only two years after Tim Berners-Lee invented the World Wide Web, Baudrillard produced a sentence which, for me, teaches us more about the Cambridge Analytica story than much of the punditry that we've seen since: "just as wealth is no longer measured by the ostentation of wealth but by the secret circulation of capital, so war is not measured by being unleashed but by its speculative unfolding in an abstract, electronic and informational space."</p><p dir="ltr">Cambridge Analytica is what happens when you privatise your military propaganda operation. It walked into the space created when social media killed journalism. It is yet another example of tools developed to subjugate people elsewhere in the world being used on the domestic populations of the Western countries in which they were built. It marks the point at which neoliberal capitalism reaches its zenith, and ascends to<a href="https://www.opendemocracy.net/uk/jennifer-cobbe/problem-isn-t-just-cambridge-analytica-or-even-facebook-it-s-surveillance-capitali"> surveillance capitalism</a>. And the best possible response is to create a democratic media which can’t be bought by propagandists.</p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/anthony-barnett/how-should-we-think-about-roles-cambridge-analytica-facebook-russia-and-shady-billio">How should we think about Cambridge Analytica, Facebook, Russia and shady billionaires</a> </div> <div class="field-item even"> <a href="/uk/jennifer-cobbe/problem-isn-t-just-cambridge-analytica-or-even-facebook-it-s-surveillance-capitali">The problem isn’t just Cambridge Analytica or Facebook – it’s “surveillance capitalism”</a> </div> <div class="field-item odd"> <a href="/David-Burnside-Putin-Russia-DUP-Brexit-Donaldson-Vincent-Tchenguiz">Is there a link between Cambridge Analytica and the DUP’s secret Brexit donors?</a> </div> <div class="field-item even"> <a href="/uk/edward-wilson/from-falklands-to-brexit-cut-price-jingoism">From the Falklands to Brexit: cut-price Jingoism</a> </div> </div> </div> </fieldset> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> uk digitaLiberties Can Europe make it? uk DUP Dark Money Brexit Inc. Adam Ramsay Wed, 28 Mar 2018 16:44:30 +0000 Adam Ramsay 116936 at https://www.opendemocracy.net ‘Cambridge Analytica’: surveillance is the DNA of the Platform Economy https://www.opendemocracy.net/digitaliberties/ivan-manokha/cambridge-analytica-surveillance-is-dna-of-platform-economy <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>The current social mobilization against Facebook resembles the actions of activists who, in opposition to neoliberal globalization, smash a McDonald’s window during a demonstration.
</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/PA-35616000.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/PA-35616000.jpg" alt="lead " title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Chief Executive of Cambridge Analytica (CA) Alexander Nix leaves the offices in central London, as the data watchdog applies for a warrant to search computers and servers used by CA amid concerns at Westminster about the firm's activities. Dominic Lipinski/Press Association. All rights reserved. </span></span></span>On March 17, <a href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election">The Observer of London</a> and <a href="https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html">The New York Times</a> announced that Cambridge Analytica, the London-based political and corporate consulting group, had harvested private data from the Facebook profiles of more than 50 million users without their consent. The data was collected through a Facebook-based quiz app called thisisyourdigitallife, created by Aleksandr Kogan, a University of Cambridge psychologist who had requested and gained access to information from 270,000 Facebook members after they had agreed to use the app to undergo a personality test, for which they were paid through Kogan’s company, Global Science Research. </p> <p>But as Christopher Wylie, a twenty-eight-year-old Canadian coder and data scientist and a former employee of Cambridge Analytica, stated in a <a href="https://www.youtube.com/watch?time_continue=63&amp;v=FXdYSQ6nu-M">video interview</a>, the app could also collect all kinds of personal data from users, such as the content that they consulted, the information that they liked, and even the messages that they posted. </p> <p>In addition, the app provided access to information on the profiles of the friends of each of those users who agreed to take the test, which enabled the collection of data from more than 50 million users. </p> <p>All this data was then shared by Kogan with Cambridge Analytica, which was working with Donald Trump’s election team and which allegedly used this data to target US voters with personalised political messages during the presidential campaign. As Wylie told The Observer, “we built models to exploit what we knew about them and target their inner demons.”</p> <h2><strong>‘Unacceptable violation’</strong></h2> <p>Following these revelations, the Internet has been engulfed in outrage and government officials have been quick to react. On March 19, the President of the European Parliament, Antonio Tajani, stated in a <a href="https://twitter.com/EP_President/status/975683240777453569">Twitter message</a> that misuse of Facebook user data “is an unacceptable violation of our citizens’ privacy rights” and promised an EU investigation. On March 22, Wylie communicated in a <a href="https://twitter.com/chrisinsilico/status/976594279425630209?s=12">tweet</a> that he accepted an invitation to testify before the US House Intelligence Committee, the US House Judiciary Committee and the UK Parliament's Digital Committee.
On the same day, Israel’s Justice Ministry <a href="https://www.timesofisrael.com/israel-to-probe-facebook-over-cambridge-analytica-data-breach/">informed Facebook</a> that it was opening an investigation into the company's possible violations of Israelis’ personal information.</p> <p>While such widespread condemnation of Facebook and Cambridge Analytica is totally justified, what remains largely absent from the discussion are broader questions about the role of data collection, processing and monetization that have become central in the current phase of capitalism, which may be described as ‘platform capitalism’, as suggested by the Canadian writer and academic Nick Srnicek in his recent <a href="https://www.amazon.co.uk/Platform-Capitalism-Theory-Redux-Srnicek/dp/1509504877">book</a>. </p> <p>Over the last decade the growth of platforms has been spectacular: today, the top four enterprises in <a href="https://www.forbes.com/powerful-brands/list/">Forbes’s list</a> of most valuable brands are platforms, as are eleven of the top twenty. Most recent IPOs and acquisitions have involved platforms, as have most of the major successful startups. The list includes Apple, Google, Microsoft, Facebook, Twitter, Amazon, eBay, Instagram, YouTube, Twitch, Snapchat, WhatsApp, Waze, Uber, Lyft, Handy, Airbnb, Pinterest, Square, Social Finance, Kickstarter, etc. Although most platforms are US-based, they are a truly global phenomenon, and in fact now play an even more important role in developing countries, which did not have developed commercial infrastructures at the time of the rise of the Internet and seized the opportunity it presented to structure their industries around it. Thus, in China, for example, many of the most valuable enterprises are platforms such as Tencent (owner of the WeChat and QQ messaging platforms) and Baidu (China’s search engine); Alibaba controls 80 percent of China’s e-commerce market through its Taobao and Tmall platforms, with its Alipay platform being the largest payments platform in China.</p> <p>The importance of platforms is also attested to by the range of sectors in which they are now dominant and the number of users (often numbering in the millions and, in some cases, billions) regularly connecting to their various cloud-based services. Thus, to name the key industries, platforms are now central in Internet search (Google, Yahoo, Bing); social networking (Facebook, LinkedIn, Instagram, Snapchat); Internet auctions and retail (eBay, Taobao, Amazon, Alibaba); online financial and human resource functions (Workday, Upwork, Elance, TaskRabbit); urban transportation (Uber, Lyft, Zipcar, BlaBlaCar); tourism (Kayak, Trivago, Airbnb); mobile payment (Square Order, PayPal, Apple Pay, Google Wallet); and software development (Apple’s App Store, Google Play Store, Windows App store). Platform-based solutions are also currently being adopted in more traditional sectors, such as industrial production (GE, Siemens), agriculture (John Deere, Monsanto) and even clean energy (Sungevity, SolarCity, EnerNOC).</p> <h2><strong>User profiling – good-bye to privacy</strong></h2> <p>These platforms differ significantly in terms of the services that they offer: some, like eBay or Taobao, simply allow the exchange of products between buyers and sellers; others, like Uber or TaskRabbit, allow independent service providers to find customers; yet others, like Apple or Google, allow developers to create and market apps.
</p> <p>However, what is common to all these platforms is the central role played by data: not just continuous data collection, but its ever more refined analysis to create detailed user profiles and rankings, in order to better match customers and suppliers or to increase efficiency. </p> <p>All this is done in order to use data to create value in some way or another (to monetize it by selling it to advertisers or other firms, to increase sales, or to increase productivity). Data has become ‘the new oil’ of the global economy, a new commodity to be bought and sold at a massive scale, and with this development, as the former Harvard Business School professor Shoshana Zuboff <a href="http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html">has argued</a>, global capitalism has become ‘surveillance capitalism’. </p> <p>What this means is that the platform economy is a model of value creation which is completely dependent on continuous privacy invasions – and, alarmingly, we are gradually becoming used to this. </p> <p>Most of the time platform providers keep track of our purchases, travels, interests, likes, etc. and use this data for targeted advertising to which we have become accustomed. We are equally not that surprised when we find out that, for example, <a href="https://www.nytimes.com/2017/07/25/technology/roomba-irobot-data-privacy.html?smprod=nytcore-ipad&amp;smid=nytcore-ipad-share">robotic vacuum cleaners collect data</a> about types of furniture that we have and share it with the likes of Amazon so that they can send us advertisements for pieces of furniture that we do not yet possess. </p> <p>There is little public outcry when we discover that Google’s ads are racially biased as, for instance, the Harvard professor Latanya Sweeney <a href="https://www.bostonglobe.com/business/2013/02/06/harvard-professor-spots-web-search-bias/PtOgSh1ivTZMfyEGj00X4I/story.html">found by accident</a> while performing a search. We are equally hardly astonished that companies such as Lenddo <a href="https://www.youtube.com/watch?v=0bEJO4Twgu4">buy access to people’s social media</a> and browsing history in exchange for a credit score. And, at least in the US, people are becoming accustomed to the justice system's use of algorithms, developed by private contractors, to make decisions on sentencing, which often results in unfair and <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">racially biased decisions</a>. </p> <p>The outrage provoked by the Cambridge Analytica scandal targets only the tip of the iceberg. The problem is infinitely larger, as there are countless equally significant instances of privacy invasions and data collection performed by corporations, but they have become normalized and do not lead to much public outcry. </p> <h2><strong>DNA</strong></h2> <p>Today surveillance is the DNA of the platform economy; its model is simply based on the possibility of continuous privacy invasions using whatever means possible.
In most cases users agree, by accepting the terms and conditions of service providers, that their data may be collected, analyzed and even shared with third parties (although it is hardly possible to see this as express consent given the size and complexity of these agreements – for instance, it took 8 hours and 59 minutes for an actor hired by the consumer group Choice to <a href="https://www.youtube.com/watch?v=sxygkyskucA">read</a> Amazon Kindle’s terms and conditions). In other instances, as in the case of Kogan’s app, the extent of the data collected exceeds what was stated in the agreement. </p> <p>But it is important to understand that preventing such scandals in the future will take more than forcing Facebook to monitor the use of users’ data more closely. The current social mobilization against Facebook resembles the actions of activists who, in opposition to neoliberal globalization, smash a McDonald’s window during a demonstration. </p> <p>What we need is a total redefinition of the right to privacy (which was codified as a universal human right in 1948, long before the Internet), to guarantee that it is respected both offline and online. </p> <p>What we need is a body of international law that will provide regulations and oversight for the collection and use of data. </p> <p>What is required is an explicit and concise formulation of terms and conditions which, in a few sentences, will specify how users’ data will be used. </p> <p>It is important to seize the opportunity presented by the Cambridge Analytica scandal to push for these more fundamental changes.</p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/uk/jennifer-cobbe/problem-isn-t-just-cambridge-analytica-or-even-facebook-it-s-surveillance-capitali">The problem isn’t just Cambridge Analytica or Facebook – it’s “surveillance capitalism”</a> </div> <div class="field-item even"> <a href="/uk/jeremy-gilbert/epochal-election-welcome-to-era-of-platform-politics"> An epochal election: welcome to the era of platform politics</a> </div> <div class="field-item odd"> <a href="/digitaliberties/juan-ortiz-freuler/battle-for-decentralized-internet-navigating-troubled-waters-to-g">The Cambridge Analytica scandal is a drop of water trickling down the visible top of an iceberg.
Focus on decentralizing power</a> </div> </div> </div> </fieldset> <div class="field field-country"> <div class="field-label"> Country or region:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> UK </div> <div class="field-item even"> United States </div> </div> </div> <div class="field field-topics"> <div class="field-label">Topics:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> Civil society </div> <div class="field-item even"> Conflict </div> <div class="field-item odd"> Democracy and government </div> <div class="field-item even"> Economics </div> <div class="field-item odd"> International politics </div> <div class="field-item even"> Internet </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties digitaLiberties United States UK Civil society Conflict Democracy and government Economics International politics Internet Ivan Manokha Fri, 23 Mar 2018 15:42:27 +0000 Ivan Manokha 116843 at https://www.opendemocracy.net From Obama to Cambridge Analytica: how did we get here? (Podcast) https://www.opendemocracy.net/digitaliberties/people-before-party-having-actual-conversations-in-digital-era <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>Where did the controversial 'influence campaigns' come from? Two Obama volunteers look back at the revolution they started in 2008 – and how a grassroots effort in Virginia – people before party – could be key to vanquishing Trump.</p> </div> </div> </div> <p><iframe frameborder="0" height="120" width="100%" src="https://www.mixcloud.com/widget/iframe/?hide_cover=1&amp;feed=%2FopenDemocracy%2Fpeople-before-party-having-actual-conversations-in-the-digital-era-opendemocracy%2F"></iframe></p> <p>Intro music:&nbsp;<a href="http://freemusicarchive.org/music/Kuso/Sursu_and_Tabriz/03-KUSO-Dinoavion">Dinoavion -&nbsp;Kuso </a>-&nbsp;<a href="http://creativecommons.org/licenses/by-nc/3.0">CC A-NC 3.0</a></p><p><i>Kellen Squire was a disillusioned blue-collar Republican and a new dad. Chris Blask had just sold a cyber-security company and was similarly impressed with what Obama had to say. The Hillary Clinton/Obama wars were where Kellen and Christopher first came into contact with each other... Podcast – 53 minutes.</i></p><p>“I ended up spending 15 hours a day, seven days a week, focusing on this Obama Rapid Response (ORR) thing. On the Obama website you could just join, create blogs and get involved with the ORR, whose core group had 200 members, but there were hundreds, maybe thousands, of state, regional, local and municipal rapid response teams who were all connected and could all communicate with each other.”</p> <p>How much innovation was there, tech-wise? "In short, little, but important. The best comparison was between the Hillary Clinton campaign website and the Obama campaign website at that time. They were both using existing technologies that could have been used by anybody, but the Obama campaign had adopted them in a very open fashion. Volunteers could do little things, large things, big things, whatever they wanted." </p><p>"The Clinton campaign was much more traditional. You would request access to something and someone would maybe approve to let you do something. But from the technology perspective, it was just a point along the continuing evolutionary line that we are on.
By that point, some savvy folks early in the Obama campaign would have found existing platforms they could tweak, where they could have large memberships doing the kinds of things we are used to in social media now that weren’t generally done at the time.”</p> <p>“Part of what drove me, because I was so taken by the message that Senator Obama had, was that I was trying to understand where the breakdown was and why Democrats were so acrimonious with each other. We’d go into it with Clinton supporters because we were such unapologetic Obama supporters, and we were trying to bridge the divide and find out what was separating us …"</p> <p>"When Chris was talking about influence campaigns – well that can be influence by both sides. I think 2008 was probably the first time we saw the start of what in 2016 is a science – using trolls and bots and sock-puppet accounts to try and create a false consensus to drive the conversation any way you want. I think Chris and I were fighting against that as well…"</p> <p>"There was a lady who was ostensibly a Hillary Clinton supporter but was always framing her arguments and trying to organise it in such a way as to cause as much strife as possible. Like, 'By God if you don’t support Hillary Clinton, burn the whole thing down!' A lot of us were fighting very diplomatically, and at some point a lot of us were thinking, “How much of this is real and organic, and how much of it is being astroturfed in!?” It led to the PUMA movement&nbsp;– Party Unity My A***! …."</p><p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/PA-32026879.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/PA-32026879.jpg" alt="lead " title="" width="460" height="339" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style=""/></a> <span class='image_meta'><span class='image_title'>Democratic Senator Barack Obama makes a campaign stop at the Marriott while campaigning for the Iowa Caucus in Coralville, Iowa on January 2, 2008. Laura Cavanaugh/Press Association. All rights reserved.</span></span></span></p><p>“Already then you could see the subtle trolls. Concerned trolls, like “Oh I’m really signed up, but I’m just concerned that” … some nuance, right? But there was a huge range. A troll could be some friend of yours who is a bit of a jerk, who is always posting something to get people riled up… Or it could be someone who has a nefarious intent and is listening to the conversation you are having to inject other issues, and you wouldn’t commonly think they were trolling… Influence campaigns – can I yell “fire!” in this theatre? – as a way of acting that achieves an adversarial goal, these go way back in history. In those days we were seeing this evolve for the first time in a large-scale political environment ….”.</p><div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties Can Europe make it? Mary Fitzgerald Christopher Blask Kellen Squire Fri, 23 Mar 2018 09:54:28 +0000 Kellen Squire, Christopher Blask and Mary Fitzgerald 116819 at https://www.opendemocracy.net The Cambridge Analytica scandal is a drop of water trickling down the visible top of an iceberg.
Focus on decentralizing power https://www.opendemocracy.net/digitaliberties/juan-ortiz-freuler/battle-for-decentralized-internet-navigating-troubled-waters-to-g <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>We need open and robust debates. We cannot afford anything less than this. Too much is at stake. Part Three.</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/Screen Shot 2018-03-15 at 10.14.09.png" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/Screen Shot 2018-03-15 at 10.14.09.png" alt="lead " title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Screenshot of MaidSafe - The New Decentralized Internet website.</span></span></span><em>Cambridge Analytica is on the cover of every newspaper. The company managed to get hold of millions of highly sensitive data points from Facebook users. Most reporters focus on the meaning of consent in the digital age and Facebook's inability to enforce it – and, in doing so, miss the big picture. The scale of the operation was only possible because Facebook has too much data about too many people. Cambridge Analytica is a cautionary tale about the risks of centralizing data and control over the flows of information. The internet and the web were designed to decentralize data and power. Cambridge Analytica's use of Facebook is an example of what a system with a single point of failure leads to. <br /><br />This piece strives to show the bigger picture: How big players – exerting power over internet access, device, platform, and data markets – have become a liability. Cambridge Analytica is but a drop of water trickling down the visible top of an iceberg. The plot will thicken. We need to talk... now. </em><strong>Juan Ortiz Freuler. </strong><em><br /></em></p><p>Many claim the internet is broken. As I’ve argued in these articles – <a href="https://opendemocracy.net/digitaliberties/juan-ortiz-freuler/who-s-to-blame-internet-on-defendants-bench">here</a> and <a href="https://opendemocracy.net/digitaliberties/juan-ortiz-freuler/present-and-future-of-centralized-internet">here</a> – these claims are often examples of misdirected anger. The social contract is broken. Inequality is rising, and the tensions associated with injustice are spilling into online space. Since the internet facilitates the collection of structured data and statistical analysis, it allows us to measure and reveal the overarching social tensions as never before. Media and unsavvy researchers often take a narrow focus that places the blame on the messenger, instead of talking about the broader problems that underpin the symptoms of the sick society their investigations reveal. <span class="mag-quote-center">Many claim the internet is broken… The social contract is broken.</span></p> <p>The internet, with its capacity to facilitate communication, aggregate opinion, and coordinate people by the thousands in real time, is arguably the most powerful tool at our disposal to solve the social issues at hand.
The internet has made it easier for women to coordinate around the #MeToo movement, as it has enabled the growth of Black Lives Matter, to mention two recent examples. Rape, misogyny and racially targeted police violence are not new issues, but the internet provided a platform for these covered-up conversations to take place.</p> <p>From the development of written language to the printing press, and from the telegraph to the web, accessing and sharing knowledge has fuelled humankind’s progress and development. &nbsp;Much of what was considered revolutionary only decades ago is mistakenly taken for granted today.</p> <p>The problem with misdirected anger is that it leads to misdirected policies that could undermine the internet’s capacity to catalyze much-needed social change. We need to ensure that when we think about internet policy we think about it with a political lens: how can we ensure the internet will enable us as citizens to share ideas freely, coordinate around common interests, and act in defense of our rights and interests? How can we ensure that people are afforded these conversations as a right today and in the future? How can we ensure these protections even in scenarios where the powers-that-be feel profoundly challenged by people’s capacity to coordinate? <span class="mag-quote-center">How can we ensure these protections even in scenarios where the powers-that-be feel profoundly challenged by people’s capacity to coordinate? </span></p> <p>If we accept that the internet has become a key tool for politics in this broad sense of the term, we can see the internet is indeed facing a problem. A problem that is often neglected for being less tangible, but that underlies much of what concerns the public about the internet. A problem that not only reflects but can reinforce current social problems, and frustrate the goal of ensuring meaningful political participation: centralization.</p> <h2><strong>Centralization and decentralization</strong></h2> <p>Centralization is the process through which intermediaries have reshaped the internet and the web, placing themselves as gatekeepers of information. In the context of an increasingly centralized web, the ethos of “move fast and break things” that promoted and spurred bold innovations a decade ago has become deeply problematic. Each ‘mistake’ on the centralized internet of today causes harm to thousands if not millions.&nbsp; And <a href="https://opendemocracy.net/digitaliberties/juan-ortiz-freuler/present-and-future-of-centralized-internet">technological developments</a> are increasing the powers intermediation affords the corporations that now employ what used to be a crowd of free-coders. </p> <p>We the people cannot afford the risks this entails to the internet of tomorrow, and its ability to deliver social change. Decentralization is about creating architectural barricades to this process so that power remains distributed across the network. </p> <p>The battle for the net takes place today and every day. There are no straightforward solutions. Every turn implies hard choices. It is therefore time to involve as many people as possible in this process of thinking about solutions. Unsurprisingly, we need to be aware not only of the power these intermediaries exercise over politics, academia, and the private sector, but also of how delving into some of these topics has become interestingly and unacceptably taboo.
<span class="mag-quote-center">Decentralization is about creating architectural barricades to this process so that power remains distributed across the network. </span></p> <p>If we hope to protect the citizens of tomorrow from expected and unexpected scenarios, we need to get creative and bold today. And we need the mass of netizens on board. We need open and robust debates. We cannot afford anything less than this. Too much is at stake.</p> <p>If the reason for much of the misdirected anger is that the centralization process is less tangible than the symptoms it might trigger, perhaps a first step must be to make this underlying layer more visible and part of our public discourse. </p> <p>The closed environments in which technology is being developed by private companies, and the metaphors – such as “the Cloud” – that have been used to over-simplify the internet’s architecture, have done nothing but obscure the key political battleground of this century. The intermediaries have the upper hand unless we can shed some light on this structure.</p> <h2><strong>The Neutrality Pyramid</strong></h2> <p>The pyramid below has the humble purpose of re-stating the physical existence of intermediaries, and their power. It shows some of the distinctive layers at which gatekeeping is being exercised today, and which could affect users’ ability to share ideas and produce meaningful change tomorrow.</p> <p>The pyramidal structure suggests that, from a user perspective, different actors exercise various types of control over our ability to deliver a message. &nbsp;Re-aligning incentives for these intermediaries to work in favour of society’s goals might require developing a multi-pronged strategy, with tailored and targeted approaches for each level of the pyramid. </p><p> If an ISP decides that no data packets containing certain keywords should be delivered, then it doesn’t matter what device we have, or what platform we rely on: the message will not be delivered. If a device does not allow the use of certain apps, then certain tools may become unavailable, and so on. The lower an actor is placed on the pyramid, the greater the risk that they pose to the open internet and the open web as tools for social change.</p><p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/Part 3. 1 (1).jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/Part 3. 1 (1).jpg" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'></span></span></p><ol><li><strong><em>Seeing the pyramid</em></strong>: As users and responsible consumers we need to be aware of exactly <em>who</em> these intermediaries are and <em>how</em> they manage their role as intermediaries. If they do not respect our rights, we should shift to more decent providers or services. </li><li><em><strong>Observing behaviors within each layer</strong></em>: As a community we need to promote enforceable rules to ensure that each level of the pyramid will be kept from abusing its intermediary powers. Public committees should be set up to assess the degree of horizontal integration and its impact on innovation and competition.
Control over personal data and public discourse is increasingly in the hands of a few private companies, and this tendency, left unchecked, leads towards an even bleaker future.</li><li><em><strong>Observing dynamics between layers</strong></em>: As a community we need to ensure each intermediary stays within its segment of the pyramid, ruling out any further vertical integration, and promoting the re-fragmentation of companies that have integrated across these layers over the past decades. <span class="mag-quote-center">Public committees should be set up to assess the degree of horizontal integration and its impact on innovation and competition. </span></li></ol><p>This is not a new fight. A handful of avant-garde activists and innovators are already onto it. But it is ultimately up to us (the mass of citizens, users, and consumers) to signal to representatives and markets alike that we want change. </p> <p><strong>–&nbsp; Personal control over personal data</strong> </p> <p>On the one hand, new blockchain-based platforms like Filecoin, Sia, Storj and MaidSafe seek to decentralize data storage by offering crypto-coins to players who put their latent storage capacity on the market. On the other hand, Tim Berners-Lee, inventor of the Web, is developing <a href="https://solid.mit.edu/">Solid</a> (Social Linked Data), through which he seeks to complete the original web ideal by decoupling data from the applications that silo it today. Data will be owned and stored by the people, and applications will <a href="https://ruben.verborgh.org/blog/2017/12/20/paradigm-shifts-for-the-decentralized-web/">compete</a> on how they visualize the data and enhance the user experience. An effective implementation would automatically create cross-platform interoperability, making platform neutrality less of a problem. </p> <p>Think about how you can send emails from a Gmail account to an Outlook one, but you can’t tweet to a Facebook user. Silos are socially inefficient but continue to exist because they allow big companies to ensure we don’t leave their walled gardens. Your social graph should be yours to keep.</p><p><strong>–&nbsp; Platform neutrality</strong> </p> <p>Last year the EU fined Google for giving unfairly prominent placement to its own comparison-shopping service. India has recently followed suit, fining Google over the same behaviour. </p> <p><strong>–&nbsp; Device neutrality</strong> </p> <p>Whereas in Russia Google was fined for requiring its apps to be pre-installed on Android devices, in 2014 South Korea ruled pre-installed apps should be removable, and the EU started studying the effects of pre-installed apps in 2016. </p> <p>More recently, a Member of the Italian Parliament, Stefano Quintarelli, has been promoting a bill since 2015 that would grant users the right to use any software they like, from sources other than the official – vertically integrated – store. Now the French telecom regulator seems to be picking up that idea as well.</p> <p><strong>–&nbsp; Net Neutrality</strong> </p> <p>Perhaps the best known of all the layers of the pyramid. Regulators in India, the EU and elsewhere have effectively pushed back against the pressure exerted by ISPs, keeping the owners of the infrastructure from discriminating between the content that travels through the network.
Since net neutrality forms the basis of the pyramid, failure to ensure it would arguably collapse the rest of the layers.&nbsp; </p><p><span class="mag-quote-center">Silos are socially inefficient but continue to exist because they allow big companies to ensure we don’t leave their walled gardens.&nbsp; </span></p> <p>The battle to ensure the internet remains a tool for citizens to create a more just society will be our constant companion throughout the next decade. The battle is uphill. With each day that goes by without a thorough debate on our rights, the odds of winning the battle get slimmer. </p> <p>The sketch outlined in these pieces suggests difficult trade-offs. Many questions remain. Yet we should not feel paralyzed by the grave asymmetry of information between us and the intermediaries. Intermediaries rely on the opacity of their systems strategically, and continuously leverage it, to stall conversations about the risk they represent to us and our political system. I hope these pieces illuminate a space around which we can gather and think out loud. </p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/digitaliberties/juan-ortiz-freuler/who-s-to-blame-internet-on-defendants-bench">Who’s to blame? The internet on the defendant&#039;s bench</a> </div> <div class="field-item even"> <a href="/digitaliberties/juan-ortiz-freuler/present-and-future-of-centralized-internet">The present and future of a centralized internet </a> </div> <div class="field-item odd"> <a href="/anthony-barnett/how-should-we-think-about-roles-cambridge-analytica-facebook-russia-and-shady-billio">How should we think about Cambridge Analytica, Facebook, Russia and shady billionaires</a> </div> <div class="field-item even"> <a href="/uk/laurie-macfarlane/oldest-sins-in-newest-ways">The oldest sins in the newest ways</a> </div> </div> </div> </fieldset> <div class="field field-topics"> <div class="field-label">Topics:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> Civil society </div> <div class="field-item even"> Democracy and government </div> <div class="field-item odd"> Internet </div> <div class="field-item even"> Net neutrality </div> </div> </div> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties digitaLiberties Civil society Democracy and government Internet Net neutrality Juan Ortiz Freuler Tue, 20 Mar 2018 14:19:54 +0000 Juan Ortiz Freuler 116661 at https://www.opendemocracy.net How should we think about Cambridge Analytica, Facebook, Russia and shady billionaires https://www.opendemocracy.net/anthony-barnett/how-should-we-think-about-roles-cambridge-analytica-facebook-russia-and-shady-billio <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>An authoritarian surveillance state is being built in the US, while a massive land grab for power by billionaires, via our data, is well under way, subverting British democracy.&nbsp;</p> </div> </div> </div> <p class="AB"><em><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/24416222648_abf9964e52_z.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img
src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/24416222648_abf9964e52_z.jpg" alt="lead " title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Alexander Nix, left, CEO, Cambridge Analytica, and Matthew Freud, Founder & Chairman, Freuds, on Centre Stage during day three of Web Summit 2017 at Altice Arena in Lisbon. Flickr/Sam Barnes/Web Summit. Some rights reserved.</span></span></span>The scandal deepens. What were the roles of Cambridge Analytica, the abuse of Facebook data, the permissiveness of Mark Zuckerberg’s company, shady funders and Russian bots in Trump’s election, Brexit and other dark abuses of democracy? One part of the story is the extraordinary passivity of the corporate media in the face of glaring evidence. Another, the courageous role of reporters and ‘mavericks’ such as the Observer’s Carole Cadwalladr and the UK’s ByLine (with whom openDemocracy partners). Cadwalladr’s riveting <a href="https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump">interview with Christopher Wylie</a> being just the latest example. Their persistence helped to break the complicity and has now brought in bigger news organisations like the New York Times and Channel 4. </em></p><p class="AB"><em>They, at last, are providing the resources needed to expose more of the truth and force legislators and regulators to act – or at least to appear to act; what will actually result remains to be seen. This breakthrough also opens the way for the larger argument to take place about what such corruptions mean and how they relate to the social and economic influences on voters and the political choices we are offered. Here, American in-depth analysis is outstanding, three recent examples being by Tamsin Shaw in the <a href="http://www.nybooks.com/articles/2018/04/05/silicon-valley-beware-big-five/">New York Review of Books</a>, Jane Mayer in <a href="https://www.newyorker.com/magazine/2018/03/12/christopher-steele-the-man-behind-the-trump-dossier">the New Yorker </a>and Lily Hay Newman <a href="https://www.wired.com/story/eternalblue-leaked-nsa-spy-tool-hacked-world/">in Wired</a> (on how NSA hacks are widely available); all essential reading. I made a modest contribution to such coverage last December in the <a href="http://www.nybooks.com/daily/">NYRB Daily</a> under the title ‘Democracy and the Machinations of Mind Control’; the NYRB has kindly given permission for us to republish it here given the renewed relevance of the issues.</em> Anthony Barnett</p> <p>&nbsp;</p><p>The British are catching up with an American awareness of the intertwined political influence of the secretive super-rich, social media, and the Kremlin. In America, illicit support for Trump has been investigated by intelligence agencies, Justice Department officials, and major media organizations. Uncovering election interference in Brexit-Britain has been a more freelance business. About a year ago, Carole Cadwalladr, a regular contributor to <em>The Observer </em>newspaper, <a href="https://www.theguardian.com/technology/2016/dec/04/google-democracy-truth-internet-search-facebook" target="_blank">started researching</a> the “right-wing fake news ecosystem” and its capture of web searches through Google especially.
This line of inquiry has also been followed by <em>ByLine</em>, a crowdfunded investigative journalism initiative, which hosts a regular&nbsp;<a href="https://www.byline.com/journalist/jamespatrick/column" target="_blank">column</a>&nbsp;by J.J. Patrick, who has been mapping the scale and penetration of Russian trolls and bots sowing hatred and division via social media.</p> <p><a href="https://www.theguardian.com/politics/2017/feb/26/robert-mercer-breitbart-war-on-media-steve-bannon-donald-trump-nigel-farage" target="_blank">Cadwalladr’s reporting</a> led her to uncover the part played by Cambridge Analytica in the Brexit referendum. This company, London-based but US-owned (principally by the <a href="https://www.nytimes.com/2017/03/06/us/politics/cambridge-analytica.html" target="_blank">hedge-fund billionaire Robert Mercer</a>, who was <a href="https://www.newyorker.com/magazine/2017/03/27/the-reclusive-hedge-fund-tycoon-behind-the-trump-presidency" target="_blank">one of Donald Trump’s biggest donors</a>), generated the <a href="https://motherboard.vice.com/en_us/article/mg9vvn/how-our-likes-helped-trump-win" target="_blank">“220 million” data sets</a> of US voters’ details that underpinned Trump’s Facebook campaign. This employed so-called black ads only seen by targeted voters, a process that bypasses and undermines the shared political community essential for democracy. Cadwalladr found that the firm had also <a href="https://www.vox.com/policy-and-politics/2017/10/16/15657512/cambridge-analytica-trump-kushner-flynn-russia" target="_blank">acted on behalf of</a> the Vote Leave campaign in Britain – though Cambridge Analytica denied elements of her reporting.</p> <p><a href="https://www.theguardian.com/politics/2017/feb/26/robert-mercer-breitbart-war-on-media-steve-bannon-donald-trump-nigel-farage" target="_blank">In a follow-up article</a>, she described how “a website called CNSnews.com… dominated Google’s search algorithm,” flooding it with reports that established media outlets are “fake” and “dead”; <a href="http://www.chicagotribune.com/news/nationworld/politics/ct-mercers-bannon-trump-20170318-story.html" target="_blank">this site was backed</a>, too, by Mercer’s foundation. Cadwalladr also met with Andy Wigmore, who had been the director of communications for Nigel Farage, the former head of the UK Independence Party (UKIP) and leading Leave campaigner who has subsequently emerged as a Trump acolyte. Cadwalladr learned that Farage was friends with Mercer and, as Wigmore told her, that Mercer had directed Cambridge Analytica to help the Brexit campaign. According to the UK’s election law, all gifts in kind must be declared for their monetary worth and none can come from overseas donors. The UK’s Electoral Commission is now investigating this apparent double breach; Cambridge Analytica, meanwhile, is pursuing legal action against <em>The Observer</em>.</p> <p>In March, Farage was spotted going into the Ecuadorian Embassy in London, where WikiLeaks founder Julian Assange has taken refuge. As Farage left the embassy, a <a href="https://www.buzzfeed.com/marieleconte/wait-what?utm_term=.temnABqK7#.illNpgdjE" target="_blank">BuzzFeed News journalist asked</a> what he was doing there. Farage replied that he could not remember. 
In <a href="https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy" target="_blank">an overview</a>&nbsp;in May, Cadwalladr pieced together various ties between the Trump campaign, Nigel Farage, and Russian “influence” efforts (including the alleged leaking of hacked information to WikiLeaks). British democracy, she concluded, had been “hijacked”:</p> <p>There are three strands to this story. How the foundations of an authoritarian surveillance state are being laid in the US. How British democracy was subverted through a covert, far-reaching plan of coordination enabled by a US billionaire. And how we are in the midst of a massive land grab for power by billionaires via our data. Data which is being silently amassed, harvested and stored. Whoever owns this data owns the future.</p> <p>As Cadwalladr was developing her thesis about this new machinery of political subversion, the UK editor of <em>openDemocracy</em>, Adam Ramsay, made a discovery of his own (I was the first editor of <em>openDemocracy</em> but was not involved with this story). With Peter Geoghegan, Ramsay <a href="https://www.opendemocracy.net/uk/peter-geoghegan-adam-ramsay/you-aren-t-allowed-to-know-who-paid-for-key-leave-campaign-adverts" target="_blank">showed</a> how large sums of money were sent to the Vote Leave campaign during the EU referendum via a small, hard-line Loyalist party in Northern Ireland, the Democratic Unionist Party (DUP). (By curious serendipity, Prime Minister Theresa May was forced to form a coalition government with the DUP after her Conservative Party lost its parliamentary majority in the general election of June 2017.) The loophole-ridden regulations governing British elections permit Northern Irish parties the unique privilege of not having to declare the source of their donations. A policy once justified by security concerns during the Troubles was abused by as-yet unidentified Brexit supporters to channel a secret, roughly <a href="http://www.independent.co.uk/news/uk/politics/election-dup-brexit-donations-saudi-arabia-tale-tories-theresa-may-a7782681.html" target="_blank">half-million-dollar donation</a> through the DUP to be spent mostly in mainland Britain.</p> <p>In September<em>, openDemocracy</em> <a href="https://www.opendemocracy.net/uk/brexitinc/peter-geoghegan-adam-ramsay/new-email-release-shows-how-leave-campaigners-used-vast-loo" target="_blank">followed up with further reporting</a> on a story originally broken last year by the satirical and muck-raking magazine <em>Private Eye</em>. A twenty-three-year-old fashion student had set up his own campaign for Brexit, which he called “BeLeave.” During the period immediately before a referendum, such operations must register with the Electoral Commission. They are permitted a maximum expenditure of £700,000 (about $935,000), while the designated lead campaign on each side is permitted up to £7 million ($9.35 million). Vote Leave led for the Brexit side and as it reached its limit, it gave £625,000 ($835,000) to the tiny BeLeave, that apparently paid it to AggregateIQ, a Canadian data analysis company that was assisting Vote Leave. AggregateIQ is, again, linked to <a href="https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy" target="_blank">Robert Mercer</a>. 
The protests that followed this <em>openDemocracy</em> report led, at length, to the Electoral Commission’s opening an inquiry into the payment;&nbsp;<em>openDemocracy</em> also published an <a href="https://www.opendemocracy.net/uk/brexitinc/adam-ramsay/how-did-arron-banks-afford-brexit" target="_blank">analysis</a> of the dubious finances of Arron Banks, the <a href="http://www.independent.co.uk/news/uk/politics/arron-banks-brexit-referendum-finance-rules-break-investigation-ukip-donor-leave-eu-a8031086.html" target="_blank">major British funder</a> of UKIP and its anti-immigrant call for Brexit. On the basis of Banks’s multimillion-pound funding of Brexit causes, <a href="https://www.nytimes.com/2017/10/19/world/europe/russia-brexit-arron-banks.html?_r=0" target="_blank">one lawmaker called for</a> the Electoral Commission to investigate whether Russian meddling was involved in the Leave campaign. Banks has dismissed reports of Russian money as “bollocks.”</p><p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/Screen Shot 2018-03-20 at 13.12.36.png" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/Screen Shot 2018-03-20 at 13.12.36.png" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Screen shot: Sun, June 13, 2016.</span></span></span></p> <p>As Cadwalladr <a href="https://www.theguardian.com/politics/2017/nov/25/vote-leave-dominic-cummings-online-guru-mystery-letter-dark-ads">continues to report on</a> the effects of Vote Leave’s “dark campaign” and its funding, she acknowledges others’ arguments that Brexit was also caused by, for example, “rising inequality, frustration with elites, economic uncertainty.” I would add to those factors the resurgence of a particular English nationalism based on the dream of a resurgent “Great Britain,” which was seduced by the pro-Brexit campaign slogan “Take back control.” Nationalist sentiment of this sort will not be undermined by any revelations about Russian trouble-making or covert support from American billionaires – any more than Trump’s base seems likely to abandon the president over what the investigation of Special Counsel Robert Mueller may discover.</p> <p>In both the US and the UK, investigations into the deployment of these shadowy forces are still in progress. In close contests, every influence counts. There is, therefore, an understandable temptation to emphasize that without secretive billionaires, or the Russians, or Facebook, the outcomes of the Brexit referendum and the US presidential election would have been different. And as elections are likely to carry on being close-run, it is important to track down and expose systemic manipulation. But it does not follow that slush funds, algorithms, and alleged conspiracies were primary causes of the electoral shocks of 2016. <a href="http://www.cnn.com/2016/12/21/politics/donald-trump-hillary-clinton-popular-vote-final-count/index.html" target="_blank">Nearly 63 million Americans voted</a> for Trump, although Hillary Clinton outspent him by half a billion dollars. In the UK, 52 percent of voters backed Brexit. 
A widespread revolt against elite entitlement and genuine resentment against a rigged system are the most important explanations in both cases.</p> <p>Trump, at least, can be voted out of office in three years’ time. Britain’s referendum decision to quit the European Union will not be so easily reversed. Should the UK leave the EU on schedule at the end of March 2019, impoverishment and humiliation are likely; even a successful Brexit, if such is possible, will pitch the UK into permanent competition with the Continent. Either outcome is repugnant for large majorities of voters in London, Scotland, and Northern Ireland. With the stakes so high, anything that undermines the legitimacy of Brexit fills its Remain-voting opponents with hopes of a reprieve. This could be a dangerous delusion.</p> <p>The emerging picture of efforts to manipulate the outcomes of the US election and the Brexit referendum leads to an awkward paradox. For the first time in a long time, voters who recognized the rigged nature of the system voted in large enough numbers to overthrow “the swamp” of “politics as usual”; at the same time, the system itself was perhaps more rigged than ever, thanks to the new-fangled methods. While it is vital to expose how these worked, it is even more important also to develop a politics that validates voters’ legitimate repudiation of a corrupt establishment, rather than dismisses them as ignorant and gullible. The risk of exaggerating the effect of novel methods of subversion is that it will only reinforce cynicism about politics and government in general—and that would be a win for billionaires like Robert Mercer, and their friends and helpers like Nigel Farage, and all they stand for. <span class="mag-quote-center">Voters who recognized the rigged nature of the system voted in large enough numbers to overthrow “the swamp” of “politics as usual”; at the same time, the system itself was perhaps more rigged than ever, thanks to the new-fangled methods.</span></p> <p>This is the trap from which democracy in Britain and America must now extricate itself. There will have to be a credible alternative and not a return to the status quo that led to the revolts of 2016. In Britain, the advocates of Brexit captured a wish for self-government with their slogan “take back control”—a desire for democratic accountability that must be freed from the grasp of demagogy, not derided. As for the US, <a href="https://www.politico.com/story/2016/06/full-transcript-trump-job-plan-speech-224891" target="_blank">Trump pledged</a> in Pennsylvania that he would speak for “the millions of our workers with nothing but poverty and heartache.” By all means, mock his hypocrisy, but the only way to combat his influence effectively will be by a politics that <em>does</em> speak for millions of workers.</p> <p>It is possible to spring the trap. Behind both Brexit and Trump was a widespread repudiation of entitlement. Part of its energy in Britain has now gathered around a resurgent Labour Party, which made unexpected gains in June’s general election despite vicious attacks from the right-wing press on its leader, Jeremy Corbyn. In the US, the current of opposition and resistance is running through the #MeToo wave of revulsion at sexual harassment and male abuse of power. 
A groper-in-chief president faces his own public reckoning, as more and more voices – this week, a <a href="https://www.usatoday.com/story/opinion/2017/12/12/trump-lows-ever-hit-rock-bottom-editorials-debates/945947001/">blistering denunciation</a> from the editorial board of <em>USA Today</em> – call out his presumption of the right to belittle and humiliate. Trump remains in office, and Brexit proceeds, but unearned entitlement is everywhere on the run. The enemies of democracy – from oligarchs to billionaires – have reason to be fearful.</p><p><em>This piece was <a href="http://www.nybooks.com/daily/2017/12/14/democracy-and-the-machinations-of-mind-control/">originally published</a> in the New York Review of Books on December 14, 2017.</em></p><div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties Anthony Barnett Tue, 20 Mar 2018 13:34:56 +0000 Anthony Barnett 116765 at https://www.opendemocracy.net The problem isn’t just Cambridge Analytica or Facebook – it’s “surveillance capitalism” https://www.opendemocracy.net/uk/jennifer-cobbe/problem-isn-t-just-cambridge-analytica-or-even-facebook-it-s-surveillance-capitali <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>We’ve ended up with an internet built not for us – but for corporations, political parties, and the state’s increasingly nebulous ‘security’ demands. We need to better understand this problem so that we can challenge it.</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/549093/big-brother-2783030_1920.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/549093/big-brother-2783030_1920.jpg" alt="" title="" width="460" height="259" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style="" /></a> <span class='image_meta'></span></span><em>Image: Big Brother Monitoring. <a href="https://pixabay.com/en/big-brother-monitoring-2783030/">Master Tux</a>, Creative Commons.</em></p><p>Over the weekend, allegations emerged surrounding the use of <a href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election">Facebook user data by a data analytics firm called Cambridge Analytica</a>. But while they have allegedly broken Facebook’s rules, the real problem is Facebook’s business model. And it’s a model that isn't unique to Facebook. It originated with Google, which realised that the data gathered as people used its search engine could be analysed to predict what they wanted and deliver targeted advertising, and it’s also employed by most ‘free’ online services.</p><p>This isn't just a problem with Facebook. It's a problem with the internet as it exists today.</p> <p>‘Surveillance capitalism’ was the term coined in 2015 by Harvard academic Shoshana Zuboff to describe this large-scale surveillance and modification of human behaviour for profit. It involves predictive analysis of big datasets describing the lives and behaviours of tens or hundreds of millions of people, allowing correlations and patterns to be identified, information about individuals inferred, and future behaviour to be predicted.
Attempts are then made to influence this behaviour through personalised and dynamic targeted advertising. This is refined by testing numerous variations of adverts on different demographics to see what works best. Every time you use the internet you are likely the unwitting subject of dozens of experiments trying to figure out how to most effectively extract money from you.</p> <p>Surveillance capitalism monetises our lives for their profit, turning everything that we do into data points to be packaged together as a profile describing us in great detail. Access to that data profile is sold on the advertising market. But it isn’t just access to our data profile that is being sold – it’s access to the powerful behavioural modification tools developed by these corporations, to their knowledge about our psychological vulnerabilities, honed through experimentation over many years. In effect, through their pervasive surveillance apparatus they build up intricate knowledge of the daily lives and behaviours of hundreds of millions of people and then charge other companies to use this knowledge against us for their benefit.</p> <p>And, as increasing numbers of people are realising, surveillance capitalism doesn't just benefit corporations. It benefits political organisations as well – shadowy ones like Cambridge Analytica, yes, but also the mainstream political parties and candidates. The Obama campaign of 2008 is often described as the first ‘big data’ campaign, but it was in 2012 that his team truly innovated. The <a href="http://swampland.time.com/2012/11/07/inside-the-secret-world-of-quants-and-data-crunchers-who-helped-obama-win/">Obama team’s operations were sophisticated enough that they were able to target voters</a> that the Romney campaign, by their own admission, didn’t even know existed. Their use of analytics-driven microtargeting allowed them to run a highly effective digital campaign and set an example which has been followed repeatedly since. </p> <p class="mag-quote-left">if twentieth century campaigns had magnifying glasses and baseball bats, those of the twenty–first century have telescopes, microscopes and scalpels</p> <p>Today, tools like <a href="http://www.adweek.com/digital/facebook-lookalike-audiences-help-advertisers-reach-users-similar-to-current-customers-others-in-their-database/">Facebook’s Custom Audiences and Lookalike Audiences</a>, which allow advertisers – including political organisations – to upload lists of people, match them with their Facebook profile, filter in the profiles of similar people who aren’t on their list, and target them all, mean that political campaigns can greatly extend the reach of their carefully-crafted messaging. </p><p>As <a href="http://firstmonday.org/article/view/4901/4097">Zeynep Tufekci, a professor of sociology at the University of North Carolina, says,</a> if twentieth century political targeted campaigns had magnifying glasses and baseball bats<em>, </em>those of the twenty–first century have acquired<em> </em>telescopes, microscopes and scalpels in the shape of algorithms and analytics. Campaigns can deliver different arguments to different groups of voters, so no two people may ever see the same set of adverts or arguments. This takes political campaigning from being a public process to being a private, personalised affair, helped along by access to the apparatus of surveillance capitalism.</p> <p>Facebook has conducted its own research on the effectiveness of targeted political messaging using its platform. 
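</p> <p>To make the list-matching and “lookalike” mechanics described above concrete, here is a minimal, illustrative sketch in Python. Everything in it (the field names, the similarity rule, the threshold) is an assumption made for illustration; it is not Facebook’s actual Custom Audiences implementation, only the general shape of the technique.</p> <pre><code>
# Toy sketch of hashed list-matching plus "lookalike" expansion.
# All names and the similarity heuristic are invented for
# illustration; this is NOT Facebook's actual implementation.
import hashlib

def normalise_and_hash(email):
    """Advertisers typically upload hashed identifiers, not raw emails."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

def match_audience(uploaded_emails, profiles):
    """Match an advertiser's hashed list against the platform's user base."""
    uploaded = {normalise_and_hash(e) for e in uploaded_emails}
    return [p for p in profiles if p["email_hash"] in uploaded]

def lookalike(seed, candidates, threshold=2):
    """Naive expansion: add users sharing enough interests with the seed."""
    seed_interests = set()
    for p in seed:
        seed_interests |= p["interests"]
    return [c for c in candidates
            if len(seed_interests.intersection(c["interests"])) >= threshold]

profiles = [
    {"email_hash": normalise_and_hash("ann@example.com"),
     "interests": {"politics", "fishing", "brexit"}},
    {"email_hash": normalise_and_hash("bob@example.com"),
     "interests": {"politics", "brexit", "golf"}},
    {"email_hash": normalise_and_hash("cat@example.com"),
     "interests": {"gardening", "cooking"}},
]

seed = match_audience(["Ann@Example.com "], profiles)  # matches Ann
extra = lookalike(seed, [p for p in profiles if p not in seed])
print(len(seed) + len(extra))  # 2: Ann, plus Bob via shared interests
</code></pre> <p>The point of the sketch is only this: a short uploaded list can be expanded, silently and automatically, into a much larger targeted audience of people who never gave the advertiser anything at all.</p> <p>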
In the 2010 US midterms, Facebook found that it was able to increase a user’s likelihood of voting by around 0.4 per cent by telling them that their friends had voted and encouraging them to do the same. It repeated the experiment in 2012 with similar results. That might not sound like much, but on a national scale it translates to around 340,000 extra votes. George Bush won the 2000 election by a few hundred votes in Florida. Donald Trump won in part because he managed to gain 100,000 key votes in the rust belt. </p> <p>And in countries like the UK, where elections are often decided by relatively few marginal constituencies in which political parties focus their efforts, small numbers matter – one study of last year’s election suggests that the <a href="https://www.express.co.uk/news/politics/816679/election-2017-news-theresa-may-tories-majority-jeremy-corbyn-labour">Conservative Party was just 401 votes short of an overall majority</a>. Accordingly, in 2013 <a href="http://www.bbc.co.uk/news/uk-politics-23551323">the Conservatives hired Obama’s 2012 campaign manager</a>, and both the Vote Leave campaign and the Labour Party have boasted about their data operations. The Information Commissioner’s Office, which oversees data protection and privacy regulation in the UK, is investigating the use of these practices here. The new EU General Data Protection Regulation, when it comes into force in May, promises to provide something of a brake.</p><p class="mag-quote-right">the Government is exploring an agreement with the US that would give British intelligence agencies better access to these databases.</p> <p>But there's also a third group who benefits from the troves of data that surveillance capitalism corporations have gathered about the minutiae of the daily lives of billions of people – the state. The <a href="https://opendemocracy.net/digitaliberties/didier-bigo/paradox-at-heart-of-snowden-revelations">Snowden revelations</a> in 2013 about GCHQ and the NSA’s activities made headlines around the world. Much of the focus was on programmes which involved, among other things, weakening encryption standards, installing backdoors in otherwise secure networking equipment, and placing interceptors on internet backbone cables so as to siphon off the data passing through. These programmes rake in billions of records every day, with GCHQ’s stated aim being to compile a <a href="https://arstechnica.com/information-technology/2015/09/gchq-tried-to-track-web-visits-of-every-visible-user-on-internet/">profile of the internet habits of every user on the web</a>. </p> <p>There was, however, another element that was largely overlooked – data sharing between surveillance capitalism and state security and intelligence agencies. In the US, tech companies have long been forced to hand over data about their users to the NSA. <a href="https://www.theguardian.com/world/2014/sep/11/yahoo-nsa-lawsuit-documents-fine-user-data-refusal">When Yahoo refused, they were threatened with a $250,000 fine, every day</a>, with the fine doubling every week that their non-compliance continued. Faced with financial ruin, they acquiesced.
In the UK, the <a href="https://opendemocracy.net/digitaliberties/jane-duncan/britain-s-investigatory-powers-bill-gift-to-securocrats-everywhere">Investigatory Powers Act, commonly known as the “snooper’s charter”</a>, grants the security and intelligence agencies legal authority to acquire personal datasets from technology companies in bulk, and the Government is <a href="https://www.ft.com/content/09153a74-b5bc-11e7-aa26-bb002965bce8">exploring an agreement with the US</a> that would give British intelligence agencies better access to these databases.</p> <p>These are concerning surveillance practices that raise difficult questions about the relationship between the citizen and the state. And since 2013 these questions have been articulated by many – not least by the ECJ, which <a href="http://www.bbc.co.uk/news/uk-politics-38390150">ruled in 2016 that indiscriminate communications data retention is incompatible with a free and democratic society</a>. This led to the Government's recent consultation on revisions to the parts of the Investigatory Powers Act that allow the Government to require ISPs to retain records of the browsing history of every user in the UK and provide them to security and intelligence agencies, police, and a range of other public authorities upon request and without a warrant or other direct judicial oversight. A challenge brought by Privacy International to the bulk personal dataset powers contained in the <a href="https://www.theguardian.com/world/2017/sep/08/snoopers-charter-tribunal-eu-judges-mass-data-surveillance">Investigatory Powers Act was referred to the ECJ</a> in September.</p> <p class="mag-quote-left">This is how the internet of today has been built. Not for us – for them.</p> <p>Surveillance capitalism – with smartphones, laptops, and the increasing numbers of ‘internet of things’ devices making up its physical infrastructure, watching and tracking everything we do, and the public as willing participants – increases the capacity of corporations, political organisations, and the state to track, influence, and control populations at scale. This is of benefit to those corporations, political organisations, and state agencies economically, politically, and in pursuit of the increasingly nebulous demands of ‘security’. This is how the internet of today has been built. Not for us – for them. This is the future that we've sleepwalked into. We need to look beyond Cambridge Analytica and Facebook. 
It’s time for a wider debate about the role of surveillance in our increasingly digital society.</p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/digitaliberties/agne-pix-bruce-schneier/surveillance-is-business-model-of-internet">&quot;Surveillance is the business model of the internet&quot;</a> </div> <div class="field-item even"> <a href="/mary-fitzgerald-peter-york-carole-cadwalladr-james-patrick/dark-money-deep-data-voicing-dissent">Dark Money Deep Data</a> </div> <div class="field-item odd"> <a href="/uk/sam-jeffers/how-can-we-better-regulate-elections-in-digital-age">How can we better regulate elections in the digital age?</a> </div> </div> </div> </fieldset> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> uk digitaLiberties uk Jennifer Cobbe Tue, 20 Mar 2018 08:26:28 +0000 Jennifer Cobbe 116761 at https://www.opendemocracy.net How can we better regulate elections in the digital age? https://www.opendemocracy.net/uk/sam-jeffers/how-can-we-better-regulate-elections-in-digital-age <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>Our politicians need to empower our electoral and information regulators to tackle the challenges ahead. Sam Jeffers sets out some starting principles and some radical suggestions.</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/549093/facebook.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/549093/facebook.jpg" alt="" title="" width="460" height="307" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style="" /></a> <span class='image_meta'></span></span><em>Image: PA Images. All rights reserved.</em></p><p>The <a href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election">Cambridge Analytica revelations this weekend</a> wreck the credibility of Alexander Nix’s company and of Facebook. Cambridge Analytica will probably close down. Facebook will struggle to rebuild trust in its ability to carry political communication. As a user, it’s a reminder you really have no idea who’s behind a message or how it found its way to you. It’s not hard to see the company now deciding that politics isn’t worth the trouble it’s giving them (due to the trouble they’ve given it).</p> <p>Because Facebook can’t be trusted, the company, platforms like it and the wider universe of those who sell data and technology solutions to campaigns, should have their role in political communication regulated. These revelations, among other things, prove they can’t oversee themselves. Facebook itself probably knows this by now.</p> <p>These aren’t the acts of ‘bad apples’ - they’re a result of their business models, where selling access to voters in ever more specific ways has led to a race to the bottom of marketing ethics (note also the responsibility of their clients).</p> <p>But the need for oversight should put fear in the heart of every election, media and data regulator in the world. 
The US Federal Election Commission failed this specific test – but others would have done the same.</p> <p>In the UK, Ofcom, the Electoral Commission and the ICO all have a role to play in ensuring elections are trustworthy. But they’re worried about being seen as ‘political’ and this, ironically, tends to give political campaigns a free rein. Many of the challenges posed by modern elections fall between them, or through them, because they aren’t set up to co-ordinate their work, and can’t imagine a policy response because they aren’t trying to understand the near future.</p> <p>We need some initial guiding principles for new forms of regulation. These principles must do several things. They need to recognise the risk of fragmentation of the electorate. Fragmentation makes it easier to drive a wedge between people, which is bad for getting millions of them to live together. </p><p>They need to ensure that the tools being used can be explained to voters using simple language.&nbsp;They need to support appropriate monitoring and data gathering during campaigns. We should avoid a future where we’re worrying about things from a couple of years ago, with little data to understand how a previous election was fought.</p><p>And the principles need to keep things simple for the regulators themselves. To make things slower. Presume you don’t understand, and that new things may have a big impact for good, or bad. Work to understand them. Like medicines regulators do. </p> <p>These are just proposals. A debate about the unintended consequences of these principles would be very useful as a counterbalance, and to help refine them. But working from the principles helps you generate ideas like:</p> <p><strong>“Drug testing” campaigns’ data.</strong> A regulator should be able to demand instant access to a campaign’s systems and check that data was legally acquired and is being legitimately used.</p> <p><strong>Limiting microtargeting.</strong> An example might be to restrict segment sizes to a certain proportion of the electorate in a given campaign - for example, no smaller than the average parliamentary constituency (a rule of this kind is sketched in code below). It’s unproven whether hyper-targeted messaging that preys on our psychology and emotions is an effective campaign technique. But do we want to find out? I’m not persuaded it’s something we need in order to have a better democracy.</p> <p><strong>Demanding radical transparency. </strong>Campaigns should lodge their campaign plans and the techniques they want to use with regulators in advance of campaigns and update them in a timely way throughout them.</p> <p><strong>Reserving the right to outlaw or delay the use of specific techniques for the purposes of political communication.</strong> Bots, AI-assisted advertising products, almost anything that impersonates human communication but isn’t a human. These are the things that tempt campaigns and consultants because they (usually) promise unparalleled accuracy for less money. Regulators should make campaigns wait to use them until their effects are known.</p> <p>Currently, the regulators’ policy work on new campaigning techniques is inadequate to ensure the integrity of campaigns. They lack guidance. Their staff lack the resources and skills, and their impulse to do much about these issues is weak, as they try to avoid the appearance of being politicised.</p> <p>All of that must change. The recent past creates a worrying precedent for what’s wrong with democratic oversight in the internet era. But the near future holds much worse.
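</p> <p>As a rough illustration of the “limiting microtargeting” proposal above, a regulator-mandated check could simply refuse to serve any advert to a segment below a mandated floor. The floor of 70,000 used here is only an assumed stand-in for the electorate of an average parliamentary constituency; the real figure, and the rule itself, would be for the regulator to set.</p> <pre><code>
# Minimal sketch of a minimum-segment-size rule for political ads.
# The 70,000 floor is an assumption standing in for the average
# parliamentary constituency electorate, not an official figure.

MIN_SEGMENT_SIZE = 70_000

def approve_targeting(segment_size):
    """Regulator-style gate: approve only segments at or above the floor."""
    if segment_size >= MIN_SEGMENT_SIZE:
        return True, "segment size acceptable"
    return False, "segment too narrow: refuse to serve this advert"

print(approve_targeting(1_200))     # (False, ...) hyper-targeted, blocked
print(approve_targeting(250_000))   # (True, ...)  broad enough to be public
</code></pre> <p>A rule this crude would not end manipulation, but it would at least push campaign messages back into segments large enough to be seen, discussed and rebutted in public.</p> <p>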
Politicians, who ultimately give regulators their orders and resources, must start to act.</p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/digitaliberties/jimmy-tidey/facebook-needs-to-face-up-to-new-political-reality">Facebook needs to face up to the new political reality</a> </div> <div class="field-item even"> <a href="/democraciaabierta/carolina-mila/democracy-needs-us-less-social-media-and-more-social-interaction">Democracy needs us: less social media and more social interaction</a> </div> <div class="field-item odd"> <a href="/uk/leighton-andrews/why-regulators-like-ofcom-are-dropping-ball-on-fake-news-dark-advertising-and-ex">Why regulators like Ofcom are dropping the ball on ‘Fake News’, dark advertising and extremism</a> </div> </div> </div> </fieldset> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> uk digitaLiberties uk Sam Jeffers Mon, 19 Mar 2018 12:00:45 +0000 Sam Jeffers 116736 at https://www.opendemocracy.net The present and future of a centralized internet https://www.opendemocracy.net/digitaliberties/juan-ortiz-freuler/present-and-future-of-centralized-internet <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>Understanding the risks future technology and intermediaries might pose to the internet as a tool for social change. Part Two.</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/PA-33818961.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/PA-33818961.jpg" alt="lead lead " title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Amazon's Echo - Alexa Voice Service presented at the IFA in Berlin, Germany, 1 September 2017. Photo: Britta Pedersen/Press Association. All rights reserved.</span></span></span>In my <a href="https://www.opendemocracy.net/digitaliberties/juan-ortiz-freuler/who-s-to-blame-internet-on-defendants-bench">previous article</a> I argued that the growing concerns regarding the internet’s social impact are rooted in many different causes: from underlying social problems, to bad science, to bad incentive structures put in place by big platforms, to the ongoing process of centralization that affects the web (the content layer that sits on top of the internet). </p> <p>I defined centralization as the process through which intermediaries reshape the architecture of the web, increasing their gatekeeping power over the information that circulates through it.</p> <p>I argued that the centralization of the web is creating the single point of failure that the original decentralized architecture sought to avoid, and that this should be the key concern of policy-makers hoping to deal with citizen concerns regarding the online.
<span class="mag-quote-center">The centralization of the web is creating the single point of failure that the original decentralized architecture sought to avoid…&nbsp; this should be the key concern.</span></p> <p>The culture of <em>fail fast and iterate </em>that has boosted innovation over the past decades has become highly problematic. In a centralized architecture problems are no longer localized and easy to neutralize. In a centralized architecture failure spreads too quickly, and can cause a lot of harm. </p> <h2><strong>Constant evolution</strong></h2> <p>How does centralization take place? The web’s architecture is <em>always and only becoming</em>. It is in constant evolution. Each link that is made, each server that is set up to host content is part of this process. </p> <p>But some actors have bigger wrenches than others. There are gatekeepers at a network, device, browser, and platform level. They have the capacity to influence the decisions of millions of people who produce and consume content, and – through them – how the entangled web evolves, and how we understand the world we live in.</p> <p>These brokers are not merely replacing the traditional media in their role as information brokers. Their power is qualitatively superior. Whereas traditional media managed a one-way stream of information (media--&gt;consumer), new information brokers also harvest a LOT of real-time data about the information recipients (new media &lt;--&gt;user). They can leverage this process and direct users to certain content instead of others, or limit their access to links all-together. </p> <p>This can be more subtle than the usual censorship cases. See for example what happens when you post a link on Instagram, one of the rising social networks owned by Facebook.</p> <h2><strong>Intermediation continues to grow in breadth and depth, fuelling the process of centralization </strong></h2> <p>Intermediation is not in itself a bad thing. Search engines, for example, have become a key ally in enabling the web to scale by helping users find relevant information in the ever-growing web of content. But it can, nevertheless have problematic effects. </p> <p>There are several ways in which intermediation can take place. It can be structurally embedded, such as through algorithms that automatically sort information on behalf of the user, or as part of the interfaces that wrap the content that is being transmitted from one user to another over a platform. </p> <p>Intermediation can also operate <em>within</em> the previously mentioned structure in somewhat organic ways, such as when users unknowingly interact with networks of <em>bots </em>(automated accounts)<em> </em>controlled by a single user or group of users, or armies of <em>trolls </em>paid to disseminate specific information, or disrupt dialogue. 
In these cases, the bots and trolls act as intermediaries for whoever owns or created them.</p> <p><strong><em>But how did we get to this point where centralization is giving the internet a bad name?</em></strong></p><p><strong>Intermediation, centralization and inequality</strong></p><p><span class='wysiwyg_imageupload image imgupl_floating_none 0'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/555700/unnamed_2.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/555700/unnamed_2.jpg" alt="" title="" width="460" height="227" class="imagecache wysiwyg_imageupload 0 imagecache imagecache-article_xlarge" style="" /></a> <span class='image_meta'></span></span></p><p>Part of it is an “organic” cycle whereby the more central a player is, the more personal data it can collect, enabling such a player to further optimize intermediation services. This optimization and personalization can in turn make services more attractive to users, pushing competitors out of the market, and thus organically reducing the range of providers to which users can migrate. This is an example of the rich-get-richer dynamic (sketched in a toy simulation below). </p> <p>The other key dynamic occurs beyond the set of existing rules, and I would call it outright illegitimate. That is, intermediaries often leverage their position as a tool to prioritize their own services, allowing them to further increase their market share.</p> <h2><strong>The perils of centralization and a way forward</strong></h2> <p>New technological developments – such as smart assistants, and augmented and virtual reality – will likely increase the breadth and depth of intermediation. This in turn threatens to accelerate and further entrench the process of centralization. </p><p>Whereas in the past, the user was presented with a list of websites of interest, smart assistants increasingly skip that step and provide the user with specific contents or services, without providing the bigger picture. </p> <p>A winner-takes-all approach. With AR and VR the user is placed in an even more passive role, where she might be "force-fed" information in more seamless ways than through today's online advertising. Whoever operates the code will blend the curated digital world into the physical environment in which our species evolved over millions of years. No contours on your phone, TV or cinema-screen. No cover on your book to remind you of the distinction between worlds. </p><p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/4 (1)_0.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/4 (1)_0.jpg" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'></span></span>New technologies are offering intermediaries the possibility of taking personalisation to a new level. Smart assistants such as Siri, Google Assistant, and Alexa are making agreements with companies producing smart devices (cars, refrigerators, thermostats, etc.) to allow users to control these swarms of smart objects through these smart assistants. </p> <p>This means the smart assistant gets access to more quantity and quality data about users.
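</p> <p>The “organic” rich-get-richer cycle described above can be made vivid with a toy simulation. The assumption here, made purely for illustration, is that a platform’s pull on each new user grows faster than linearly with the data it already holds, because more data means better optimisation and personalisation.</p> <pre><code>
# Toy simulation of the rich-get-richer cycle: each new user joins a
# platform with probability proportional to the SQUARE of the data it
# already holds (an assumed, superlinear feedback, for illustration).
import random

random.seed(7)
data_held = {"A": 10, "B": 10, "C": 10}  # three platforms, identical start

for _ in range(10_000):  # each step: one new user picks a platform
    names = list(data_held)
    pull = [data_held[n] ** 2 for n in names]  # superlinear feedback
    winner = random.choices(names, weights=pull)[0]
    data_held[winner] += 1  # the chosen platform gains data, hence pull

total = sum(data_held.values())
for name in sorted(data_held, key=data_held.get, reverse=True):
    print(f"{name}: {data_held[name] / total:.0%} of all data")
# A typical run ends with one platform holding almost everything:
# small, random early leads compound into effective monopoly.
</code></pre> <p>Which platform wins is pure chance; that one of them ends up dominant is not. That is the structural point about centralization.</p> <p>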
Developments in technologies such as AR and VR are capable of further isolating people into algorithmically curated silo-worlds, where information flows are managed by the owners of these algorithms. <span class="mag-quote-center">This means the smart assistant gets access to more quantity and quality data about users. </span></p> <p>This would reduce the probability of people facing random or unanticipated encounters with information – such as a protest on the streets or a bus conversation with a stranger about her daily struggles – that might allow people to connect and empathize with others in ways that are relevant for social movements. </p> <p>Further isolating groups from one another would erode the common set of experiences and knowledge required to nurture a sense of belonging and trust within the broader society, which is key for the coordination of big social projects, and for ensuring a fair distribution of the benefits of such coordination. </p> <p>Those in control of the information silos are gaining too much control over what conversations and encounters can and will take place. As intermediaries’ power increases, it becomes evident that whatever fixes they try to implement are met with distrust and criticism. At the root of these reactions there seems to be a sense that these corporations have outgrown the shell society had granted them, and the legitimacy of the power they exercise today is slim (if any).</p><p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/3 (2).jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/3 (2).jpg" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'></span></span><strong><em>Will this impact person-to-person communication?</em></strong></p> <p>The internet has not merely reduced the cost of one-to-one communication; it has offered a qualitative leap in communications. Whereas the newspaper, radio and TV enabled one-to-many communications, and telephone facilitated one-to-one communications, the internet has facilitated group communications, sometimes referred to as <em>many-to-many</em> communications. </p> <p>This is what we observe in places like Twitter and chat rooms, where thousands if not millions of people interact with millions in real time. The deployment of effective many-to-many communications often relies on curatorial algorithms to help people find relevant conversations or groups. This means that some of the challenges faced in the realm of search affect person-to-person communications.
</p> <p>Yet centralization also poses a distinct set of risks for these communications, amongst them, risks to the integrity of <em>signifiers</em> (representations of meaning, such as symbols or gestures), and their <em>signified</em> (meaning).</p> <h2><strong>Intermediation in person-to-person communications</strong></h2><p><strong><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/2_0.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/2_0.jpg" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'></span></span></strong><strong>A. The intermediary’s responsibility to respect the integrity of a message</strong></p> <p>Millennials might notice that when texting with a new lover it is often the case that a word or emoji is misinterpreted. This often leads to an unnecessary quarrel, and we need to meet up physically to clear things up. <em>Oh, no! That’s not what I <span>meant</span>...What I wanted to say is...</em> Conveying meaning is not simple, and we often require a new medium or set of symbols to explain and correct what went wrong. </p> <p>Now imagine that someone could tamper with your messages, and you might not have that physical space to fix things... And that it’s not your lover you are communicating with, but the electorate or a group of protesters.</p> <p>The internet facilitates engagement by bringing people<em> closer</em> <em>together</em>. The apparent collapse of the physical space between users is achieved by slashing the time between the moment a message is sent and the moment it is received, until it's close enough to real time. And for millions of years the only type of real-time communications we’ve had as a species involved physical presence. This illusion often makes us forget that someone is managing the physical infrastructure through which the message is transported. It is fundamental that any and all parties who control these channels respect the integrity of the message that is being delivered. <span class="mag-quote-center">It is fundamental that any and all parties who control these channels respect the integrity of the message that is being delivered. </span></p> <p>Centralization, which leaves communication channels under the control of a handful of actors, could effectively limit parties from exchanging certain signifiers. If virtual and augmented reality are the future of communications, then we should bear in mind that not merely spoken or written language will be sent over communication channels. These communications will include a wide array of signals for which we still have poorly defined signifiers. This includes body gestures and – potentially – other senses, such as smell and taste. To get a sense of the complexity of the task ahead of us, think about the gap between experiencing a movie through <em>descriptive noise captioning</em> and the standard hearing experience of the same content.
</p><p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/Screen Shot 2018-03-13 at 11.41.05.png" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/Screen Shot 2018-03-13 at 11.41.05.png" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Screenshot: George Costanza, Youtube.</span></span></span>In the past, the debate focused on the legitimacy of the frames traditional intermediaries such as newspapers applied to political events and discourse. With new intermediaries come new risks. By reducing (or eliminating) the availability of alternative mediums of communication between parties, centralization could limit the sender’s ability to double-check with the receiver whether or not a message’s signifiers were appropriately received. <span class="mag-quote-center">Distributed archives… based on Bitcoin’s blockchain model, offer a glimpse of hope in this battle.</span></p> <p>Distributed archives, such as those currently being developed based on Bitcoin’s blockchain model, offer a glimpse of hope in this battle. It must be noted that the phase between the message’s production and its transcription onto the ledger is subject to some of the same risks of meddling present in our current model. Blockchain as such could nevertheless protect the message's integrity from ex-post tampering. </p> <p><strong>B. The effect of centralization on the fluidity of the decoding process</strong></p> <p>A second issue affecting person-to-person communication is the process through which the relationship between signifier and signified comes to be (<em>point B on the diagram</em>). </p> <p>The process of information consumption is not automatic or passive. The receiver plays a role in the decoding process. The word <em>cat </em>triggers a different set of connections for a cat owner than it does for a person who is allergic to cats. &nbsp;</p> <p>The receiver constructs meaning by relying on her own experiences as well as recalling instances in which members of the community managed to coordinate a conversation by relying on what could be deemed the agreed-upon meaning of this concept. Through this process individuals and groups play an active part in the construction of reality. </p> <p>This active interpretation enables language to be fluid, allowing relationships between signifier (a symbol, such as a word) and signified (meaning) to shift over time. The system is open. The <em>noisiness</em> of the process through which we interpret and discuss our world provides the flexibility necessary for critical social changes to be possible. A <em>reflective capacity </em>comes embedded within language. <span class="mag-quote-center">A <em>reflective capacity </em>comes embedded within language. </span></p><p>With <em>cat</em> the process is quite straightforward. Now shift from <em>cat </em>to more abstract concepts – like <em>justice</em> and <em>war, </em>or <em>muslim </em>and <em>latinx</em> – and things get much trickier. Since people do not necessarily deal with these concepts directly, third parties – such as the mass media and the state education system – wield greater control over their meaning.
</p> <p>Much like the elites in charge of writing definitions in a dictionary, mass media often takes over the process of rooting the signifiers onto a broader set of signifiers to construct <em>meaning</em>. The process of constructing meaning is deeply political. </p> <p>Reiterated associations between <em>muslim</em> or <em>latinx </em>and negative concepts can over time lead people to trigger mental responses associated with the negative frame even when the frame itself is not present. </p> <p>A centralized web of content, where the few define which frames should be applied and distributed, becomes a liability – the opposite of the open space the web was meant to create, and which many of us still believe can democratize the exercise of power by redistributing widely the power to construct <em>meaning, </em>and therefore the way we understand our identity, our relationships, and the world we live in. <span class="mag-quote-center">A centralized web of content, where the few define which frames should be applied and distributed, becomes a liability.</span></p> <p>Let’s think about how this might play out in 20 years. Many resources are currently devoted to the development of brain-computer interfaces. Brain-computer interfaces imply building a bridge across the <em>air gap</em> that currently exists between people and their devices (as well as the intermediaries that manage traffic over the internet and onto these devices). </p> <p>Eliminating such air gaps might imply limits to the receiver’s capacity to diverge in the way she processes the signifier: the computer would arguably take over the decoding role, and with it our subject’s ability to decode and reconstruct signifiers into – mistakenly or purposefully – novel and critically transformative meanings. </p><p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/PA-34732987_0.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/PA-34732987_0.jpg" alt="" title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>A family plays a game of rock-paper-scissors with "Alexa," their Amazon Echo wireless speaker and voice command personal assistant. TNS/Press Association. All rights reserved.</span></span></span>Every step towards the large-scale roll-out of these technologies strengthens incentives for intermediaries to ensure that they can operate these systems unchecked. </p><p><em>Next week: Part Three – </em><em>how we should coordinate to solve the mess we are getting into before it’s too late.</em><em></em></p><fieldset class="fieldgroup group-sideboxs"><legend>Sideboxes</legend><div class="field field-related-stories"> <div class="field-label">Related stories:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> <a href="/digitaliberties/juan-ortiz-freuler/who-s-to-blame-internet-on-defendants-bench">Who’s to blame? The internet on the defendant&#039;s bench</a> </div> <div class="field-item even"> <a href="/digitaliberties/juan-ortiz-freuler/battle-for-decentralized-internet-navigating-troubled-waters-to-g">The Cambridge Analytica scandal is a drop of water trickling down the visible top of an iceberg.
Focus on decentralizing power</a> </div> </div> </div> </fieldset> <div class="field field-rights"> <div class="field-label">Rights:&nbsp;</div> <div class="field-items"> <div class="field-item odd"> CC by NC 4.0 </div> </div> </div> digitaLiberties digitaLiberties Juan Ortiz Freuler Tue, 13 Mar 2018 12:15:19 +0000 Juan Ortiz Freuler 116639 at https://www.opendemocracy.net Silicon Inquiry https://www.opendemocracy.net/digitaliberties/wendy-liu/silicon-inquiry <div class="field field-summary"> <div class="field-items"> <div class="field-item odd"> <p>The Silicon Valley ideology is a morally destitute trap, writes Wendy Liu. Could its well-paid staff transform it?</p> </div> </div> </div> <p><span class='wysiwyg_imageupload image imgupl_floating_none caption-xlarge'><a href="//cdn.opendemocracy.net/files/imagecache/wysiwyg_imageupload_lightbox_preset/wysiwyg_imageupload/500209/5066004568_d13188b75f_z.jpg" rel="lightbox[wysiwyg_imageupload_inline]" title=""><img src="//cdn.opendemocracy.net/files/imagecache/article_xlarge/wysiwyg_imageupload/500209/5066004568_d13188b75f_z.jpg" alt="lead " title="" class="imagecache wysiwyg_imageupload caption-xlarge imagecache imagecache-article_xlarge" style="" width="460" /></a> <span class='image_meta'><span class='image_title'>Googleplex giant deserts, 2010. Wikicommons/ Molly Johnson. Some rights reserved.</span></span></span>WIRED Magazine recently published an article subtitled&nbsp;<a href="https://www.wired.com/story/the-other-tech-bubble/">Silicon Valley Techies Still Think They’re the Good Guys. They’re Not.</a>&nbsp;The article described a noticeable shift in public discourse on the tech industry:</p><blockquote><p>As headlines have exposed the troubling inner workings of company after company, startup culture no longer feels like fodder for gentle parodies about ping pong and hoodies. It feels ugly and rotten.</p></blockquote><p>How did we get here? How did we get to a point where even&nbsp;<a href="http://www.metamute.org/editorial/articles/californian-ideology">traditionally techno-evangelist</a>&nbsp;publications like WIRED have become so critical? And why is it that so many people inside the industry seemingly refuse to even see it, much less acknowledge their own culpability?</p><p>I don’t have all the answers, but I can tell my story. I used to be one of those people: as a software developer and startup founder, I once loved being part of the tech industry, and I really did think that it was making the world a better place. I used to think that all the criticism of the industry – of its toxic culture, of its outrageous salaries, of its saving-the-world mentality – was based in ignorance or even jealousy. Surely, I thought, it was unfounded, and tech would ultimately triumph over its detractors.</p><p>I no longer believe this. I’ve lost my faith in the industry, and with it, any desire to remain within it. All the perks in the world can’t make up for what tech has become: morally destitute, mired in egotism and self-delusion, an aborted promise of what it could have been. Now that I realise this, I can’t go back.</p><p>This inquiry, based on my personal experiences, is an attempt to explain why I left, and in the process shed some light on the technical and political composition of the industry.</p><h2>In the beginning</h2><p>I started building websites at the age of 12, probably due to having way too much time alone on a computer as a child. 
I was soon hooked on what I saw as the promise of the Internet: the possibility of a new world beyond my computer screen, which seemed much more interesting than the real-life world around me. I spent a colossal amount of time on the Internet during my teenage years, either steeping in its culture in places like 4chan and Reddit and&nbsp;<a href="http://bash.org/">bash.org</a>&nbsp;or building my own websites. Back then, the Internet was my saviour; it connected me to a larger community of like-minded people, held together by the twin strands of real-world alienation and shared competence with computers.</p><p>Those were heady days. My&nbsp;<a href="https://code.likeagirl.io/paul-graham-blocked-me-on-twitter-c28ca647c7f8">heroes</a>&nbsp;were people I thought of as the pioneers of the Internet, people like Neal Stephenson and Paul Graham and Eric S. Raymond whose writing on hacker culture made me feel like I was part of something bigger, something exciting. One post that particularly resonated with me was Jeff Atwood’s 2007 piece&nbsp;<a href="https://blog.codinghorror.com/the-two-types-of-programmers/">The Two Types of Programmers</a>, which asserted that most – 80% – of all programmers working in the industry aren’t actually very talented, but if you’re one of the lucky few reading this post, then you’re probably in the talented 20%. It’s hard to overstate the role that sort of ideology had on me, and probably countless others, in developing my worldview. All these brilliant and successful people were whispering to me through their writing, telling me that I was special, that I was capable, that I was just intrinsically&nbsp;<em>better</em>&nbsp;than those who didn’t have computer skills. And that if I kept developing these skills, I would be justly rewarded for my efforts.</p><h2>Don’t be evil</h2><p>So I plunged headfirst into tech. I started an undergraduate degree in computer science, and spent my free time working on whatever software development projects piqued my interest. When I got an internship offer from Google, in my third year of university, it felt like that reward was finally in sight.</p><p>In the summer of 2013, I interned as a software engineer in Google’s San Francisco office. My job was to maintain a web application for the capacity planning team, in order to help Google forecast demand for new servers. The code itself involved technologies that were either already familiar to me, or sufficiently documented, and so I didn’t have much trouble getting up to speed. By the end of the first day – which consisted mostly of independent, self-directed exploration of the codebase – I had grasped enough of the architecture to be able to make my own changes.</p><p>The highlight of the job was the level of autonomy involved. On a typical day, I would get in to work around 9:30 – just in time to catch the end of the catered breakfast – and get to my desk around 10. Once there, I would spend anywhere between 15 minutes to an hour catching up on everything I’d missed: email, Facebook, Twitter,&nbsp;<a href="http://thefw.com/take-a-peek-inside-googles-internal-meme-generator/">internal social networks</a>. The work itself was never interesting enough that I ever felt an urge to work on it as soon as I got in, and I was confident enough in my ability to get it done that I wasn’t worried about being reprimanded for not working more efficiently. 
Every once in a while, I’d have a one-on-one meeting with my boss to frame my progress in management-speak (in terms of <a href="https://en.wikipedia.org/wiki/Performance_indicator">KPIs</a> and <a href="https://en.wikipedia.org/wiki/OKR">OKRs</a>), but it was generally very hands-off; I was trusted to work through the list of outstanding features with minimal supervision.</p>
<p>The actual technical environment was neither especially challenging nor trivial. In fact, it wasn’t that different from what I would have encountered when working on my personal side projects. The real challenge was that the work itself just felt so <em>meaningless</em>. With my personal projects, I actually cared about what I was working on, whereas at Google, I was only working because the company was paying me to. In Marxist terms, I was alienated from my labour: forced to think about a problem I didn’t personally have a stake in, in a very typically corporate environment that drained all the motivation out of me. I remember thinking: is this it? Is this what my life will be like if I come back full-time? Is this really what I have to look forward to?</p>
<p>Google was supposed to be the <em>goal</em>, the reward people worked so hard for. And on the surface, it was everything you could have asked for: lots of autonomy, excellent compensation, a workspace that caters to your every need. So why did it feel so empty?</p>
<p>When I talked to other interns about this, the conversations never got very far. We’d concur that the work was kind of dull, and tentatively wonder if there was something better: maybe a different team within Google, or just a different company in the tech industry. Never did we connect our shared malaise to structural issues with the industry itself; it was much more natural to turn inward, and to ask ourselves if our unhappiness was the result of <a href="https://jacobinmag.com/2018/01/under-neoliberalism-you-can-be-your-own-tyrannical-boss">personal failings</a>, a symptom of just not being cut out for the industry.</p>
<p>When I got my offer to return full-time after graduating, part of me didn’t want to take it, because I thought there <em>must</em> be something better; this can’t be <em>it</em>. This can’t be what I’ve worked so hard for.</p>
<p>But I couldn’t find anything else.</p>
<p>So I resigned myself to it. And because I had nothing else to be inspired by, I focused on the material aspects. At the time, the standard Google software engineer salary for new university graduates in the Bay Area was estimated at $143,000 USD in total compensation: $100,000 base, $15,000 projected bonus, and over $100,000 worth of stock that would vest over 4 years. I calculated how much I would get in each paycheck and, months before even graduating, started budgeting for vacations and luxury purchases and nights out. After all, why not? What else was I supposed to look forward to, except mindless consumption? Wasn’t that the whole point of making that much money? Wasn’t that what drove people to work so hard in the first place?</p>
<h2>A way out?</h2>
<p>I never made it back to Google. Uninspired, despairing of a future that looked so hollow, I spent most of my last year at university looking for alternatives, a way to escape the barren path I saw in front of me.
Eventually, I found such an escape – at least a temporary one – through startups.</p>
<p>It was at a hackathon that I first worked with some classmates who would eventually become my cofounders, during one sleep-deprived weekend at the University of Pennsylvania that culminated in us building a web app together. Even though the project was summarily abandoned, the buzz of working on a technical challenge with friends was enough of a high to get me seriously considering turning down Google to do a startup instead. I wanted to extend the hackathon spirit beyond that one weekend, as a way of avoiding the existential dread I associated with a full-time job at Google.</p>
<p>For a while, it worked. The first summer of that startup became one giant hackathon, a blur of late nights and empty coffee cups in the living room of my cofounder’s apartment. I remember being so intently focused on my laptop screen until the small hours of the morning, so utterly consumed by the esoteric technical challenges at hand, that I didn’t want to sleep even though I knew I needed to; I didn’t even care that we were working in an apartment filled with ants, which my cofounder once misguidedly tried to kill with laundry detergent. The high I felt every time I overcame another technical barrier. Feeling like we were some sort of visionaries, building something so new, so exciting, that it was the only thing that mattered.</p>
<p>It took us over a year before we had a product that we could actually sell to anyone, and the revenue from our first sale was a grand total of $125. Even then, though, we stubbornly maintained our entrepreneurial hubris. We talked disparagingly about companies that only had $10 million in annual revenue (why even bother if you’re not going to aim bigger?); we scornfully turned down acquisition offers; we scoped out competitors with bigger teams and proven track records and decided we could beat them. We were a bunch of kids in an ant-ridden apartment with barely any funding and no clue what we were doing, and yet during the highs we really thought that our breakthrough was just around the corner.</p>
<p>The highs rarely lasted very long. In the lows, when deals kept falling through and we were embroiled in interminable cofounder arguments and we still couldn’t get rid of the endless ants, there were so many times that we almost gave up. I think the only thing that stopped us was the lack of any real alternative. We had thoroughly bought into the startup myth that starting a successful company is the only thing worth doing with your life. As a result, we had no real exit strategy; every way out just felt like failure.</p>
<p>So we stuck with it, even when we had to change our entire revenue model and thus lost any pretense of telos. Although we had originally envisioned ourselves as a data science startup, it soon became clear that the only profitable avenue was to become an advertising technology startup, selling consumer data to help companies market their products more efficiently. We went to marketing technology conferences where we forced ourselves to network with people we despised, people in shiny suits and branded lanyards talking about CAC and LTV and NDAs. Our days became a monotonous blend of three-letter acronyms and sales decks and meetings with people who didn’t want to be there any more than we did.</p>
<p>Underneath it all was a desperate yearning for our startup to <em>mean</em> something.
We needed all the stress and supplication and <a href="https://www.wired.com/2017/06/silicon-valley-still-doesnt-care-work-life-balance/">80-hour weeks</a> to be part of the heroic arc of our own entrepreneurship journey, so that one day we would look back on it all and say that it was worth it. After all, all the <em>real</em> entrepreneurs we knew had gone through the same route; we just had to <a href="https://www.youtube.com/watch?v=GtaxU6DZvLs">work harder</a>, and eventually we’d look back on all the <a href="https://techcrunch.com/2014/03/03/the-hard-thing-about-hard-things-ben-horowitzs-honest-and-real-take-on-entrepreneurship/">hard times</a> we’d endured and everything would finally make sense.</p>
<h2>No way out</h2>
<p>The worst trap is when you don’t realise that you’re trapped. We were our own bosses – not only did we <em>own</em> the means of production, we sort of <em>were</em> the means of production – and so we thought we were free, certainly freer than our regularly-employed friends who were chained to their 9-5. Really, though, we were trapped by our own obstinacy, by our conviction that startups were the path to some sort of greater salvation. We were suffocating in this bubble where nothing seemed more important than our company, and because we were constantly facing existential threats, we were always in crisis mode, always fighting for air. We couldn’t afford to take the time to stop and think about why we were doing it in the first place.</p>
<p>I don’t remember exactly when it finally changed for me, or why. I just know that at a certain point, I decided that the startup dream I’d been chasing was bullshit, and that I couldn’t bear to waste another second of my life on it. That just because I’d poured years of my life into my startup didn’t mean my efforts were intrinsically worthwhile. That there could be more to life than the continual pursuit of self-aggrandizement.</p>
<p>Eventually, this world stopped feeling like something I could be proud of. My faith in the overall goodness of the tech industry, already shaken by a cold hard look inside the world of advertising technology, had drowned, deluged by the futility of my own startup. I would read about people my age and younger raising millions of dollars for their hare-brained startup ideas, and where I might once have felt envy, I could only feel disgust.</p>
<p>And I began to see the monstrosity lurking at the heart of the industry. Underneath the veneer of innovation and shininess and platitudes about making the world a better place is nothing but decay, a black wave of rot and regret. There’s something grotesque in such an inefficient allocation of resources: all this money trickling down from central banks to VCs to startup founders instead of addressing poverty or funding public services. All this money going towards renting exposed-brick offices in SoMa and creating landfill-destined branded startup t-shirts, just so that some clueless kids drunk on the startup Kool-Aid can delude themselves into thinking they’re making the world a better place for a few years.</p>
<p>Because it <em>is</em> a delusion, at least for the overwhelming majority.
Even if you start out wanting to produce social value, you’ll soon come up against the structural incentives of the industry: startups tend not to get funded unless they’re rapacious, and so you’ll end up continually tweaking your business model in order to appease investors who <a href="https://techcrunch.com/2017/10/26/toxic-vc-and-the-marginal-dollar-problem/">expect massive returns</a>.</p>
<p>And soon enough, you’ve become an empty husk trudging along Sand Hill Road, inserting ever more ambitious and less socially useful goals into your pitch deck. You’ll resign yourself to the fact that you’re not even that excited about your startup idea anymore, but that’s okay, because once you sell this one you’ll finally have the cachet to do what you really want, even if you haven’t quite figured that out yet. For now, though, you’re going to give it a shot with your e-commerce or blockchain or adtech “play”.</p>
<p>As if it’s a <em>game</em>. Because, after all, it sort of <em>is</em>, if you’re a founder with the right connections. There are no real consequences to failure. If your startup runs out of money, or you have to sell for less than you raised, rest assured: you’ll probably be able to <a href="https://twitter.com/davemorin/status/944077486144438272">raise more money</a> in the future. As long as you have a half-decent slide deck and fit some rich person’s preconceived notions of a good founder, you’ll never have to get a <em>real</em> job; your material needs will be taken care of by the Silicon Valley Basic Income.</p>
<h2>The rank and file</h2>
<p>The wealth dynamics behind startups – the early stages of tech companies – are still present when these companies succeed and grow to the scale of Facebook and Google. Although founders and employees experience different material conditions, the outcomes are the same: they all spend their days thinking of ways to increase profits for a parasitical corporation. Employees, though, have less power over their own circumstances, trapped as they are by the <a href="https://medium.com/re-write/why-im-leaving-the-golden-handcuffs-of-silicon-valley-behind-to-educate-the-world-d1c8a334d1a3">golden handcuffs</a> of stock grants and dangled raises. Someone who’s worked at a place like Google in Silicon Valley can, after a couple of years, draw <a href="https://www.reddit.com/r/cscareerquestions/comments/7idcak/official_salary_sharing_thread_for_experienced/">over $300,000 USD annually</a> in total compensation. How do you walk away from that? When you’re submerged in a culture that values you primarily based on how much capital you can accumulate, how do you step away from something so manifestly rewarding?</p>
<p>Compounding the problem is the link between the material rewards and the idea of merit. Silicon Valley is brimming with the belief that those who make so much money must somehow <em>deserve</em> it, because they <em>worked</em> for it, in a weird revenge-of-the-nerds-type scenario. That’s the whole function of the “meritocracy” ethos within the industry: to justify the existing distribution of wealth and power, no matter how extreme. After all, software developers aren’t driven solely by money; they’re just <a href="http://www.paulgraham.com/love.html">doing what they love</a>, and if they happen to be well-compensated in the process, they’ve earned it by dint of their hard work and skill.</p>
<p>It’s painful, for me, to see things twisted this way.
Because there really is something wonderful about being able to manipulate technology, an exhilarating intoxication when it’s just you versus the machine. That “aha!” moment when things finally click and it’s like you’ve unlocked the secrets of the cosmos. You feel like a kind of god, sometimes.</p>
<p>And that feeling is great, and more people should have the opportunity to master technology in that way. But technical expertise has morphed into a filter, an excuse for justifying existing patterns of inequality. Those who are on top clearly just deserve to be so, by fiat; there’s no room to question the validity of the system.</p>
<p>What’s especially dangerous about the glorification of the brilliant technical worker is how it obscures the very real exploitation going on beneath the surface. These highly-paid technical employees, producing intellectual property, find their dialectical opposite in the low-paid employees who provide the material foundations for the company’s success. Similar to how women’s domestic work was (and still often is) invisible, this work is often done by contractors working under punishing conditions. Facebook has its army of moderators in the Philippines; Apple has its assemblers at Foxconn; Uber has its drivers; Deliveroo has its riders; Amazon has <a href="http://www.independent.co.uk/news/uk/home-news/amazon-workers-working-hours-weeks-conditions-targets-online-shopping-delivery-a8079111.html">its warehouse workers</a>. And all of these tech companies have the staff who directly cater to the highly-paid employees: cleaners, chefs, baristas, security guards.</p>
<p>There’s a clever strategy at work here, behind the scenes. This is <a href="http://www.publicseminar.org/2015/02/cog-cap/">cognitive capitalism’s</a> answer to the labour-capital crisis of the 70s, in which workers attained enough leverage over key points in production to seriously disrupt it and exact concessions from capital in the form of better wages and working conditions. The trick now is to <em>bifurcate</em> labour. First, identify the workers who contribute directly to the intangible assets and thus could potentially have leverage over production, and <em>treat them really well</em>. Pay them well, of course, with a high base salary and stock grants that refresh every year, but also provide them with free food, <a href="https://thepointmag.com/2017/examined-life/the-google-bus">private buses</a>, and lavish parties at <a href="http://www.sfgate.com/business/article/Why-tech-companies-spend-so-much-on-holiday-5954418.php">San Francisco’s City Hall</a>. Make them feel valued and special, like they’re part of something important.</p>
<p>In short, do whatever it takes to ensure they have <a href="https://michaelochurch.wordpress.com/2016/04/18/what-i-fought-for">no reason to organise</a>.</p>
<p>This is how these companies get away with treating a subset of their workers <a href="https://www.theguardian.com/technology/2017/sep/26/facebook-workers-housing-janitors-unique-parsha">so badly</a>: those workers rarely amass enough company-specific knowledge or access to have any significant bargaining power, and so they are easily replaced.
The ones who do have leverage, on the other hand, can be counted on to stay sheepishly loyal to the corporation, on account of their paychecks and the free office La Croix.</p>
<p>Whether or not this is a deliberate strategy is debatable; your typical hiring manager is unlikely to be this class-conscious. The point is that it <em>works</em>. It has, at least so far, been an extremely successful method for preventing too much of a Polanyi-esque double movement. These companies thus owe part of their continued success to their ability to contain labour through strategic stratification.</p>
<h2>Tech workers of the world</h2>
<p>It doesn’t have to be this way, though. In recent years, we’ve seen the rise of movements to organise tech workers by <a href="https://logicmag.io/tat-solidarity-forever/">building solidarity</a> between high- and low-paid workers. This could be a potent combination: highly-paid workers at crucial production points have <a href="https://www.theguardian.com/news/2017/oct/31/coders-of-the-world-unite-can-silicon-valley-workers-curb-the-power-of-big-tech">so much collective leverage</a> and could demand better conditions for their less well-paid peers, or call for more ethical business practices.</p>
<p>The challenge is to get those on top to <em>see</em> that – to see the extent of their collective power, and thus their responsibility. It’s to get these well-paid and well-treated workers to realise that they are <a href="https://www.jacobinmag.com/2017/08/silicon-valley-gentrification-tech-sharing-economy">still workers</a> and, moreover, human beings, whose interests won’t always be aligned with the interests of the corporation they work for. It’s to get them to take a sort of cognitive leap: from seeing the world as mostly fine and just trying to take their place in it, to realising how flawed it is and resolving to do their best to fix it, bit by bit.</p>
<p>I hope more of them realise what the industry they’re part of has done to the world, and reclaim that long-buried promise of technology to make the world a better place. I hope more of them realise that everything they care about – this abstract world of JavaScript frameworks and Apple Watches and stock options in which they’ve become immured – is just so much superstructure, and that with every passing day, the material foundations of their existence are crumbling, crumbling like dirt into the San Francisco Bay.</p>
<p><em>This article was originally published at <a href="http://notesfrombelow.org/article/silicon-inquiry#">Notes from Below</a>.</em></p>
<p>Related stories: <a href="/cathy-oneil-leo-hollis/weapons-of-maths-destruction">Weapons of maths destruction</a></p>
<p>Rights: CC BY-NC 4.0</p>
<p>Wendy Liu, Mon, 12 Mar 2018 16:43:08 +0000</p>