Do you agree?: What #MeToo can teach us about digital consent

The conversation around sexual consent could radically change the way we think of consent online.

It started in the beginning of April. It was late at night, and I was swiping mostly left on the famous dating app Binder. One guy sent a message inviting me to experience his “enormous talent”. Rolling my eyes at yet another tempting offer, I unmatched him. Bored and tired of these original solicitations, I decided to watch another Chelsea Handler comedy special online and go to sleep.

In the morning, when I opened Facebook I saw a new message from a person I didn’t recognise. “Hi hotstuff, did u see what I sent you yesterday? I’m free toni8, let’s meet! And here’s a preview pic to help ur imagination ;)”. Gross! Oh god, I hadn’t even had my coffee yet; how the hell did this guy find my personal Facebook account? Then I remembered that, for some reason, we have mutual friends. He must have searched my name and found me. I blocked and deleted the “talented” guy, thinking this was surely a one-time thing. But it wasn’t; it was only the beginning. Suddenly, I started receiving messages from other guys: “hey, remember we dated that one time a decade ago? Let's stay in touch, here’s a pic in case you forgot ;)". An hour later: "hey, remember we talked a couple of years ago in a pub? Let's hang out, k?".

Pissed off and annoyed, I decided to close my Facebook account. I might not remember anyone’s birthday anymore, but I couldn’t handle this shit. But the next hour, I received a message in my Gmail inbox. Then another message on Twitter and WhatsApp. They just kept coming, like zombies “haunting” me – “ghosting” was no longer a thing, apparently. Guys who I had swiped right and left on, dated or even just talked to once in the past found all my online accounts, even my Hotmail. “THAT’S IT! I am deleting all my accounts!!! I’m going offline, they can’t find me here!”. Disconnected from everything, I sat in my living room and felt relieved. No more intrusions, I thought with a smile. And just when I was enjoying the silence, I heard a knock on the door...

Even if you don’t participate in the wondrous world of online dating, if you have lived in Europe in the past year this story may be familiar in an altogether different light. During April and May, Europeans were harassed by multiple websites and services which they had previously visited or used. Bombarded by these uninvited intrusions into their private lives, people were left scratching their heads, trying to remember when they had actually interacted with these websites. Did they actually give their details? Why did the websites still have their contact details after such a long time? And how do we make it stop? Oh yes, we only need to consent.

Those who have lived in Europe for some time might also feel a sense of déjà vu. Since the European Union’s General Data Protection Regulation (GDPR) came into force on May 25, many websites have introduced pop-up boxes that ask for your consent to collect personal data. These requests are not dissimilar to those that started appearing on websites in 2009, following the revised EU e-Privacy Directive which required websites and third-party actors to get consent before sending tracking technologies (such as cookies, pixels and others) to people’s computers. In addition, companies had to give a clear explanation of the purpose of any cookies they used and allow users to reject them entirely. The Data Protection Directive defined consent as “any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed”. The solution that advertisers, publishers and tech companies decided on was a pop-up dialog box, stating that the website you are visiting will store a cookie on your device. People were supposedly empowered by clicking ‘agree’, ‘consent’, ‘accept’ or ‘OK’ in response. Privacy campaigners rejoiced and the internet changed forever… right? Well, not quite.
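The logic of that pop-up box can be sketched in a few lines. The following is a minimal, hypothetical illustration (no real site’s code; the names and the tracking identifier are invented for the example): tracking cookies are only supposed to be stored after the visitor clicks ‘agree’.

```typescript
// Hypothetical sketch of post-2009 consent-banner logic: a tracking cookie
// may only be stored on the visitor's device after they click "agree".
type CookieJar = Map<string, string>;

function handleConsentChoice(choice: "agree" | "reject", jar: CookieJar): CookieJar {
  // Record the choice itself so the banner is not shown on the next visit.
  jar.set("consent", choice);
  if (choice === "agree") {
    // Only now is the site permitted to store a tracking identifier.
    jar.set("tracking_id", "abc123");
  }
  return jar;
}
```

As the article goes on to note, in practice ‘reject’ was often not a real option at all: refusing simply meant being denied access to the site.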

What does consent mean?

The consent pop-up box was a bad cosmetic treatment that was supposed to cover up how people’s data is used by companies, without making an actual change. It did nothing to stop data collection and it did not result in publishers and advertisers adding explanations of how cookies worked or what their purposes were. The business model adopted by most tech companies in the 2000s also remained unchanged. Websites continued to seemingly offer free services and content, while surreptitiously monetising both through the collection of vast amounts of user data. Internet users remained a captive audience; refusing to accept the consent pop-up boxes usually resulted in people being denied access.

With internet users left confused rather than empowered by these changes, in 2011 the Article 29 Working Party, the European Commission’s data protection advisory body, decided to clarify what consent online actually means. In a document addressing the misunderstandings around the meaning of consent and how people can express it, the Article 29 Working Party identified several key characteristics: ‘indication’, ‘freely given’, ‘specific’, ‘explicit’ and ‘informed’.

Inspired by western liberal thought about individual freedom and autonomy, the European Commission’s definition of consent always assumes a rational person making decisions with all the information and facts available to them. But as historian Yuval Noah Harari said in his recent TED Talk, “in the end, democracy is not based on human rationality, it is based on human feelings”. In an online context, to make an informed decision people first need to know how the online ecosystem works: which companies are collecting their data? What is the value of their data? What kind of data do those companies use and for what purposes? How might that affect them in the near and far future? For how long will that data be used? Will these data be used in other contexts and by other companies? And much more. But when even the CEOs of tech companies such as Mark Zuckerberg admit they do not fully understand how their systems work, how can we expect internet users to make informed decisions?

People make decisions according to their emotions, cultural background, education, cognitive abilities, financial situation, family history, the different media representations they engage with, health condition, religious beliefs, gender identity and many other parameters. To assume that a decision can, in the words of EU legislation, be “freely given” and “informed” is misguided and simply wrong. As the 2016 US presidential election and the 2016 Brexit referendum show, many important decisions are influenced by micro-targeting. As the recent report from the UK’s Digital, Culture, Media and Sport Committee about disinformation and fake news concludes: “relentless targeting of hyper-partisan views… play[s] to the fears and prejudices of people, in order to influence their voting plans and their behaviour”. Thanks to the design of online platforms, which conceal what happens in the back-end, these messages are tailored, personalised and targeted through computational procedures to influence people’s behaviour.

Many thought that following the Cambridge Analytica scandal people would leave Facebook and follow the #DeleteFacebook movement. But this didn’t happen. This is because, as Siva Vaidhyanathan, a professor at the Centre for Media and Citizenship argues, “for many people, deleting their accounts would amount to cutting themselves off from their social lives. And this has engendered a feeling that resistance is futile”. So why do we still get asked to consent to protect our online privacy and experience, when it is impossible to do so?

Binding contract

Consent has traditionally been used as part of a contract. You sign a contract for a house, job or insurance as an indication that you agree to the conditions of the product, service or employment. But whereas these contracts are static and deal with one particular aspect, online contracts are far from it. In fact, it would take you days, if not weeks, to read the terms and conditions of all the online services, platforms and apps you use. Even if you do read all these terms, and manage to understand all the legal jargon deployed, online services frequently change their terms without notifying people. As a result, people cannot engage with and understand what they are actually consenting to.

But even if you do manage to make the time and read all the terms, and companies follow the GDPR’s Article 12, which requires them to be transparent about their procedures, it is still not enough to make an ‘informed decision’. This inability to make sense of online contracts is what Mark Andrejevic, one of the most prominent scholars in surveillance studies, calls the data divide. As he argues, “putting the data to use requires access to and control over costly technological infrastructures, expensive data sets, and the software, processing power, and expertise for analysing them”. And as Zeynep Tufekci, a digital sociology professor, points out, given the constantly shifting nature of Facebook’s data collection, “consent to ongoing and extensive data collection can be neither fully informed nor truly consensual — especially since it is practically irrevocable”. In short: we simply cannot understand how the data collected about us is used. We do not have the processing abilities and big tech resources to see the wider picture.

This inability to understand algorithmic procedures also renders the GDPR’s Article 21, the right to object, quite useless. The right to object enables people to refuse the processing of their personal data, including common practices used by digital advertisers such as profiling. But how can you object to something when you do not understand how your data can be used to harm you? In order to object, people first need to be aware of how their data is being used. Withdrawing consent under the GDPR’s Article 7 is problematic for the same reasons, but also because companies make it very difficult to find the mechanisms that enable people to do so. So, once again, why are we still being asked to consent? Power and control.

Default control

The current definition of online consent transfers responsibility to the individual under the guise of offering users choice. But the way it effectively works is as a control mechanism. As Becky Kazansky, a cyber security scholar and activist argues, this kind of ‘responsibilisation’ is “[e]ncouraging an emphasis on the individual as the primary locus of responsibility for protection from harm… [and has] the convenient effect of deflecting attention from its causes”. As in the offline domain, when you sign a contract you are responsible for abiding by the conditions, and if you do not then you are liable for breaching that contract. And yet the legal and tech narratives frame it as if people are empowered to make decisions and are able to control the way their data is used and will be used.

This line of thinking is predicated on the assumption that a person’s personal data is a tangible, clearly bounded, singular object that people supposedly own and directly control. But the way our data is assembled online is more like an ever-evolving “data-self” than a piece of personal property. After all, the data people create, or the traces they leave online, outlines some of their characteristics and behaviours. For example, your mobile phone knows where you have been, who you talked with, when you are awake, the websites you visited, the videos you watched, the songs you played, the food you ordered and much more. But as Erving Goffman argued back in 1956 in his famous book ‘The Presentation of Self in Everyday Life’, we perform our selves differently in different contexts. Importantly, we never reveal all the aspects of our lives. Our data-self is incomplete, inaccurate and consists of multiple messy representations.

(How we present ourselves on different social media platforms)

The way we present ourselves in different contexts is fluid, evolving and never fixed. The data points about us are countless and ever-growing, and they can be reshuffled, recombined and assembled in multiple ways over stretches of time. But the way that consent is applied online is static; we are asked only once to consent to multiple procedures, often with little or no choice. More than that, our data-self is attached to the profile that companies assemble on us, and is usually connected to our given birth name. This is partly done to improve commercial and advertising potential by connecting our data-self and offline self. But another reason is that it makes us legally responsible for our actions, something that benefits commercial and governmental bodies.

Some of you might wonder at this point: what’s the big deal? How can the processing of my data cause me any harm? The Time’s Up movement is relevant here, and this is also the reason why I opened with my fictional story. The string of sexual harassment cases brought to light by the movement has prompted an important discussion about the fluid nature of consent. Context is crucial to consent: we can change our opinion over time, depending on how we feel in any given moment and how we evaluate the situation; the controversy over Aziz Ansari is a case in point. Just because you kissed someone or dated them does not mean you are interested in anything more than that. Consent is an ongoing negotiation and not a one-time signed contract.

The global response to the Weinstein scandal revealed how widespread sexual harassment is both within the film industry and at large. It became clear that rather than individual cases, sexual harassment is a structural problem. The power of these men was possible not only because of their hierarchical position but also because of a network of people, standards and norms. It is an environment that is designed for sexual harassment to exist and flourish. Ultimately, these actions are about directing, narrowing and controlling women’s agency.  

Although they are two different cases, the abuses of power that occur within the film industry and on online platforms share certain similarities. Both rely on a power structure that exploits people, and usually those who are less privileged and marginalised get hurt the most. In the past few years we have seen examples of how data is used to exploit poor people, people of colour, the LGBTQ community, activists and people with chronic health conditions. And even if you think you have nothing to hide, the diffusion of digital technologies, artificial intelligence and the internet of things, combined with the privatisation of services, means that all of us will become vulnerable in various ways. Importantly, the asymmetrical power relation that this architecture creates teaches people what their position is within a particular system. It is about controlling people’s actions, individualising them and, importantly, narrowing their agency – controlling their data-self.

Rather than empowering people to negotiate and decide on their own conditions of service, as people do when they sign contracts, we see a strategy to control our actions in these datafied environments. The contractual approach leaves people in a passive position, preventing them from having the opportunity to demand other things from online services. Under the discourse of EU legislation, our actions online are still dictated by the standardised and automated architectures provided by browsers, publishers and advertisers. In this way, the concept of control mechanisms, in the shape of the consent banner, is used against people, not for people. The options available are pre-decided, limited and designed in a way that narrows and manages the way people could use and, ultimately, understand the internet.

The old with the new

Is a better internet possible, one in which privacy as a value and a right is internalised in its architecture? By requiring that data protection is built into systems “by design and by default”, as Article 25 indicates, the GDPR could be a first step towards this aim. But apparently, when it comes to technology companies respecting contracts, or laws, things get more flexible, and much darker. Calling out the design failures that followed the transposition of the GDPR, the Norwegian Consumer Council – Forbrukerrådet – released a report on June 27, 2018 that criticised tech companies for using “default settings and dark patterns, techniques and features of interface design meant to manipulate users… to nudge users towards privacy intrusive options”. During May and June 2018, the council examined the messages Facebook, Google and Microsoft sent to users in order to comply with the GDPR. Some of the “dark patterns” the report identified include: preselecting the least privacy-friendly options as defaults, hiding and obscuring settings, making privacy options more cumbersome, and using textual and colour nudges towards data sharing. In other words, through these designs the companies try to discourage us from exercising our right to privacy. This is precisely why the council titled their report “Deceived by Design”.
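The “preselected defaults” pattern is easy to make concrete. Below is a hypothetical sketch (the setting names and structures are invented for illustration, not taken from any company’s actual dialog) contrasting a dark-pattern dialog, where every data-sharing option is already switched on, with what privacy “by design and by default” would look like:

```typescript
// Hypothetical consent-dialog settings. In the dark-pattern version,
// doing nothing means agreeing to everything; the user must actively
// flip each switch to reach a private state.
interface ConsentSettings {
  adTargeting: boolean;
  faceRecognition: boolean;
  thirdPartySharing: boolean;
}

// Dark pattern: every data-sharing option preselected.
const darkPatternDefaults: ConsentSettings = {
  adTargeting: true,
  faceRecognition: true,
  thirdPartySharing: true,
};

// Privacy by design and by default (the spirit of GDPR Article 25):
// nothing is shared unless the user opts in.
const privacyByDefault: ConsentSettings = {
  adTargeting: false,
  faceRecognition: false,
  thirdPartySharing: false,
};

// How many switches must a user actively flip to stop all sharing?
function clicksToPrivacy(s: ConsentSettings): number {
  return [s.adTargeting, s.faceRecognition, s.thirdPartySharing]
    .filter(Boolean).length;
}
```

Under the dark-pattern defaults the user must take three deliberate actions just to reach the state the law says should be the starting point; under privacy by default they need take none. That asymmetry of effort is the nudge the Forbrukerrådet report describes.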

(Deceived by Design Report, Forbrukerrådet, Page 3)

As I mentioned in my article about the regulation of behaviours on the European Union internet, we can trace these design strategies to the early days of web cookies. In the late 90s, Netscape Communications released a version of its Navigator 4.0 browser which enabled users to reject third-party cookies. But as Lou Montulli, the Netscape developer credited with inventing web cookies, admitted, the feature did not affect the online advertising industry because people didn’t bother to change their default settings. As I argue in the article:

This is how advertising, tech and publishing companies have been controlling information flow on the internet, the design of the architecture where it flows, but also users’ online behaviour and understandings of this environment. Spying on users’ behaviour and distorting their experience if they express their active rejection of cookies is presented as necessary procedures to the internet’s existence.

This notion of ‘consent’ naturalises and normalises digital advertising and technology companies’ practices of surveillance, and educates people about the boundaries of their actions. It also marks the boundaries of what users can demand and expect from commercial actors and state regulators. Portrayed as control, autonomy and power, consent actually moves responsibility from the service or technology providers to individual users.

Consent is a design failure that should no longer be engineered into our online lives. But what are the alternatives, then? A good place to start is the legal scholar Julie Cohen’s latest article, in which she challenges the current functioning of privacy legislation. Cohen argues that instruments that are meant to have operational effects, such as notice-and-consent, do not work. Ultimately, “data harvesting and processing are one of the principal business models of informational capitalism, so there is little motivation either to devise more effective methods of privacy regulation or to implement existing methods more rigorously”. To tackle this, one of the first steps towards changing the current ecosystem is to rethink this business model and invest in alternatives that would make it undesirable. Here are some other possible solutions:

  1. Breaking monopolies of big companies such as Facebook, Alphabet (Google), Amazon and Microsoft.

  2. An internet tax which is funnelled to creating public services and spaces.

  3. Promoting decentralised systems such as peer-to-peer.

  4. De-individualising use of technology.

  5. A live communication platform that connects users to national and EU data protection authorities so that they can complain, discuss, negotiate and monitor how their rights are applied online.

  6. A real-time and dynamic terms and condition panel where people can get updates on changes, and can control and negotiate the different clauses without being denied access. This panel should also connect people to their networks so they can make collective decisions about settings and see the wider impact of those decisions.

  7. Developing education programmes, television shows and radio programmes to teach people about algorithms, data-harvesting and processing, data ethics and their rights.

  8. Enabling a control panel that is part of web browsers and cell phones, which shows what is happening in the back-end and enables people to have real-time negotiations with services. This panel, again, should also connect people with their networks.

These are just some ideas, and none of them should be seen on its own as the ‘ultimate solution’. Multiple solutions and approaches should be developed and promoted, primarily to change the way we use, think about and understand the internet. As many media historians show, there are multiple ways in which technology can be used, developed and designed; the default setting is never fixed. The moment we create more possible ways for the internet to function, we can think of alternative ways to engage with it. Ways which really empower us, and give us not only ‘control’ but agency, autonomy and meaningful choices – individually and collectively. Don’t you consent to that?

About the author

Elinor Carmi is a media and culture scholar looking at the power and politics of unwanted forms of behaviour in media technologies such as spam, noise and cookies. She is also a journalist writing on technology, gender, sound and digital rights. She tweets @Elinor_Carmi.

