Human Rights and the Internet

We don’t need no education? – informing the misinformed

To know that information is false, you need to dedicate time and effort to verify things. Would you have the time, resources and energy to do this?

Elinor Carmi
21 October 2019
A Twitter scenario including a tweet from Benjamin Netanyahu. | Screenshot.

Family WhatsApp groups have become a battlefield of guilt, shame, and too many baby pictures but, more importantly, disinformation. Whether it comes from your parents, your racist uncle or your sister's husband that everyone hates, people receive enough 'fake news', problematic memes and Trump tweets to make you wish the chat could just stick to dad jokes.

The images above might be fake, but the scenario probably did happen. On September 17, Israeli election day, the Prime Minister Benjamin Netanyahu tweeted an image at around 10am warning people about high voting rates for the left parties. The image showed 'left wing' cities colored in red and 'right wing' cities painted in blue. The problem was that the voting rates indicated in the image, which ranged between 60-70%, were incorrect: in the morning hours they typically range between 10-20%, and overall voting rates in Israel are usually around 70% for the whole day. Netanyahu had deliberately prepared an image presenting false information to push his supporters to vote.

This was not the only piece of disinformation he spread that day, and as the election day progressed more false information was spread by him and his party members on various social media.

But to know that this information is false, you need to dedicate time and effort to verification. You first need to be aware that information from your own government's politicians might be false; then you need to spend time finding the correct figures and checking whether they match across several sources. Would you have the time, resources and energy to do this?

Fight for what rights?

It seems that families, friends and communities in Brazil, India, South Africa and Israel are spreading disinformation material all the time, especially around election time. UNESCO’s Global Media and Information Literacy Week 2019 is a good time to pause and start asking which skills and what knowledge we need in the age of information wars and what we can do about it.

Do we even need such skills and information? A daily stroll through the Twitter timelines of various world leaders, whose names we shall not mention, makes the answer pretty clear. Not only do we need them, they are grounded in the fundamental human rights the UN established in 1948. These rights were created after the Second World War as a lesson to be learned from the way the world had got onto what seemed to be a one-way ride to apocalypse. And just like the endless sequels of Terminator, we seem to be getting a newer version of mad world.

Media literacy is celebrated by the UN during a specific week, but it rests on two important human rights which apply throughout the whole year, even if many world leaders have (conveniently) forgotten about them. The first is the Right to Education; as the UN pledges, education "shall be directed to the full development of the human personality and to the strengthening of respect for human rights and fundamental freedoms. It shall promote understanding, tolerance and friendship among all nations, racial or religious groups, and shall further the activities of the United Nations for the maintenance of peace". The second is the Right to Freedom of Opinion and Expression, which says that we should be able "to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers".

Understanding and tolerating others and also being able to receive information seem to be more important than ever. But while the intentions behind these rights are good, their application is more complex. Can we expect leaders who do not even know how to properly use Caps Lock and basic grammar rules while making more than 13,435 false or misleading claims over 993 days to understand and address the importance of education?

What do we even mean by literacy here? Is it just reading and writing like it used to be in the past? Are frameworks like the UN’s Education 2030 practical and relevant to the way the internet is used today? Or are we dealing with a more complex set of skills and knowledge so that we can cope with today’s confusing datafied world? Literacy in the age of disinformation is complicated and nuanced, and while people across the world still require basic literacy skills, we also need other skills which suit the way information is distributed across multiple platforms. But first, what do we mean by ‘disinformation’?

What are we talking about?

Before we continue, let's be clear about what we are actually talking about. Terms like 'fake news' are problematic because they are used to discredit media reporting, which is probably the last thing we need right now. Clarifying these terms, Claire Wardle and Hossein Derakhshan wrote a report for the Council of Europe in 2017 – Information Disorder – which identified three types of problematic information practice: 1) dis-information is information that is false and deliberately created to harm a person, social group, organisation or country; 2) mis-information is information that is false, but not created with the intention of causing harm; 3) mal-information is information that is based on reality, used to inflict harm on a person, organisation or country. Whether or not they are created with the intention to harm, most of these campaigns are distributed on all the platforms people use every day, such as YouTube, Twitter, Facebook and Google search results.
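The report's taxonomy boils down to two questions: is the content false, and was it created to harm? As a toy sketch (entirely illustrative, not from the report itself), the categories can be expressed as a tiny function:

```python
# Toy illustration of Wardle and Derakhshan's taxonomy: two axes
# (false or not, intent to harm or not) map onto three labels.
def classify(false: bool, intent_to_harm: bool) -> str:
    """Map the two axes onto the report's mis-/dis-/mal-information labels."""
    if false and intent_to_harm:
        return "dis-information"   # false AND deliberately harmful
    if false:
        return "mis-information"   # false, but no intent to harm
    if intent_to_harm:
        return "mal-information"   # based on reality, weaponised to harm
    return "information"           # true and benign

print(classify(false=True, intent_to_harm=True))   # dis-information
print(classify(false=False, intent_to_harm=True))  # mal-information
```

Reading the categories as two axes like this also makes clear why 'fake news' is too blunt a term: mal-information, for instance, can be perfectly true.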

These strategies are not new; mis- and dis-information go back to the information wars within and between countries over the last century, the obvious cases being the Soviet Union and Nazi Germany or, more recently, Rwanda. What is new is the way social media platforms work to disseminate such information. But do you know how these platforms actually work?

We’ve all been there: playing a YouTube video about climate change and, three hours and a couple of glasses of gin and tonic later, finding ourselves watching scientists saying that there is no global warming because a Maya prophecy said the wind gods will save us all in 2025. (Which is totally true, by the way.) We have all participated in YouTube’s time warp experiment; it’s just a click to the left and a scroll to the right. But what do we really know about how YouTube works? Do we know how it is funded, and how it decides which videos to show after you’ve clicked on one and left it running? To understand that, we need to follow the money.

Show me the money

One of the main issues with YouTube and other social media is that many people do not actually know how they are funded. According to Ofcom, almost half of UK citizens do not know that platforms like Google and Facebook are funded by advertising. You might say: fine, but I get free videos of Grumpy Cat eating sushi with Japanese girls in an arcade. That is a valid point, but the digital advertising ecosystem makes important decisions about which content it shows, promotes and highlights and which content it suppresses, hides and filters. So, for example, while extremist content is highlighted and promoted, LGBT content is constantly being demonetised, restricted or removed. Sensational content gets more clicks; more clicks mean more engagement, which means more revenue, and hence value to these platforms.
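As a deliberately crude sketch of this incentive (all titles, numbers and the payout figure are invented), imagine a feed that ranks content purely by expected ad revenue. The sensational item wins automatically:

```python
# Crude sketch of an ad-funded feed: rank by predicted clicks,
# because clicks -> engagement -> ad revenue. All values are invented.
REVENUE_PER_CLICK = 0.002  # hypothetical ad payout per click

videos = [
    {"title": "Measured climate report",     "predicted_clicks": 120},
    {"title": "Grumpy Cat eats sushi",       "predicted_clicks": 900},
    {"title": "SHOCKING election 'truth'",   "predicted_clicks": 4000},
]

def rank_feed(items):
    """Order the feed by expected ad revenue, highest first."""
    return sorted(items,
                  key=lambda v: v["predicted_clicks"] * REVENUE_PER_CLICK,
                  reverse=True)

for video in rank_feed(videos):
    print(video["title"])  # the sensational item is ranked top
```

Nothing in this objective cares whether the top-ranked claim is true; the ranking function only ever sees the click numbers.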

And this popularity contest is toxic and dangerous. As the sociologist Zeynep Tufekci argues: “What we are witnessing is the computational exploitation of a natural human desire: to look ‘behind the curtain,’ to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales”.

As a recent article in the New York Times shows, this is precisely what happened in Brazil’s latest election where “YouTube’s search and recommendation system appears to have systematically diverted users to far-right and conspiracy channels in Brazil”. But my mom sent it, so it must be OK, right?

This is partly the issue. Findings from the recent Ofcom report on news consumption in the UK show that most people say they access news through social media. More specifically, people reach news through trending stories, or through comments and stories shared by friends and family, rather than by following the pages of news publishers such as the Guardian. But while your friends might post mouth-watering food porn, that doesn’t mean they realise they have just commented on a fake image that Yair Netanyahu recently posted on Facebook. What many people do not know is that these platforms, and various groups on them, intervene in which trends and news people see. So does that mean we need to start learning to code and become programmers to understand how these platforms work? Not quite.

What new skills are needed for the age of information disorders?

So what kind of skills do people need? While there is no common agreement on the skills needed, the British government has published an Essential Digital Skills Framework which includes: Communicating, Handling information and content, Transacting, Problem solving, and Being safe and legal online. But these skills don’t actually help us understand why we end up seeing anti-vaccine or anti-Greta Thunberg videos on YouTube.

So what else is needed beyond skills? Doteveryone have expanded on the traditional literacy frameworks, arguing that beyond the basic skills listed above, people also need ‘digital understanding’ (2018). This means not only knowing how to use the internet, but also “knowing the implications of using the internet”.

That means that people need to know who has access to their data; that trending topics change according to their location, profile and sponsored topics; that things they write on WhatsApp might appear as ads on Facebook; and why their friends’ comments on some piece of disinformation Benjamin Netanyahu spread surface at the top of their Facebook feed. So, no, you don’t need to be a programmer, economist or lawyer, but you do need to understand the rationale behind why you see certain things and not others.

What’s next?

Many governments across the world have recommended media literacy as a way to tackle disinformation. The UK’s Department for Digital, Culture, Media and Sport (DCMS), for example, published a report in February 2019 on ‘Disinformation and “fake news”’, in which it highlights the importance of digital literacy.

Interestingly, although DCMS recommended integrating digital skills into the education system, the UK government has refused to do so, claiming that other organisations are already working in this area and it does not want to duplicate their strategies. One of DCMS’ suggestions is interface designs that create ‘friction’ with algorithms to slow down the time people spend engaging on platforms and, by doing so, allow them to think about what they write and share. Another design solution they offer involves developing online tools that distinguish between quality content and disinformation sources, because it is difficult for people to make this distinction themselves. The problem with such solutions is that they are technologically deterministic and, ironically, leave the companies – among the main causes and facilitators of disinformation – to provide the answers.
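The ‘friction’ idea can be sketched in a few lines. Everything here – the function name, the cooldown parameter, the confirmation prompt – is hypothetical, invented only to illustrate the design pattern of slowing down a reflexive re-share:

```python
# Hypothetical sketch of share 'friction': pause, then require an
# explicit confirmation before a share goes through.
import time

def share_with_friction(post: str, cooldown_seconds: float = 3.0,
                        confirm=input) -> bool:
    """Delay, then share only if the user explicitly answers 'yes'."""
    time.sleep(cooldown_seconds)  # slow down the reflexive re-share
    answer = confirm(f"Share '{post[:40]}'? Have you checked the source? (yes/no) ")
    return answer.strip().lower() == "yes"

# Example with the prompt stubbed out (no real user interaction):
shared = share_with_friction("Voting rates are already at 70%!",
                             cooldown_seconds=0, confirm=lambda _: "no")
print(shared)  # False -- the share was abandoned
```

Even a design this simple changes the default from ‘one tap and it spreads’ to ‘pause and reconsider’, which is the behavioural point DCMS is making.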

Can we really trust Facebook, Google and WhatsApp to provide designs that put people’s interests at the centre? The fact that Donald Trump’s Twitter account is still active, despite distributing disinformation and other sexist and racist tweets, suggests not. More than that, Facebook recently said it will not fact-check politicians and will treat their posts as true (without clearly defining who counts as a politician – what about Ivanka Trump?). In fact, as the Norwegian Consumer Council showed in Deceived by Design, its research on social media platforms’ compliance with the GDPR, these companies use dark patterns to manipulate people. The report analyses the privacy settings of “Facebook, Google and Windows 10, and show how default settings and dark patterns, techniques and features of interface design meant to manipulate users, are used to nudge users towards privacy intrusive options”. As many scholars and journalists who have been studying these issues have argued for a long time: tech won’t save us.

So while governments and technology companies fail to provide solutions, in recent years various NGOs have developed mechanisms to educate and inform people about the potential dangers. The Finnish broadcasting company YLE recently developed a game called Troll Factory to “illustrate how fake news, emotive content and bot armies are utilized to affect moods, opinions and decision-making”. The Washington Post has published a fact-checking guide to manipulated videos, while Data & Society’s Britt Paris and Joan Donovan have published a useful guide to deepfakes. Another tool, BotSlayer, was launched by Indiana University’s Observatory on Social Media as a new weapon in the fight against online disinformation: the free software scans social media in real time to detect evidence of automated Twitter accounts, or bots.
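To give a feel for what bot detection involves – and this is an illustrative heuristic, not BotSlayer’s actual method – one simple signal is that automated accounts often post at suspiciously regular intervals, so near-zero variance in the gaps between posts can raise a flag:

```python
# Illustrative bot heuristic (NOT BotSlayer's algorithm): accounts
# whose posting gaps are near-identical look automated.
from statistics import pstdev

def looks_automated(post_times, max_stdev_seconds=5.0):
    """Flag an account whose gaps between posts are near-identical."""
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    if len(gaps) < 2:
        return False  # too little data to judge
    return pstdev(gaps) <= max_stdev_seconds

human = [0, 340, 1900, 2400, 7200]  # irregular gaps (seconds)
bot = [0, 600, 1200, 1800, 2400]    # exactly every 10 minutes
print(looks_automated(human), looks_automated(bot))  # False True
```

Real systems combine many such signals (posting rhythm, content similarity, follower patterns), but the underlying idea is the same: machines leave statistical fingerprints that humans rarely do.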

These are great starting points, but there is much more yet to be done.

Understanding what people need

In our Nuffield Foundation funded project – Me and My Big Data: Understanding Citizens Data Literacy – we are trying to make sense of all of this and to develop education materials that will provide different types of audiences with the skills to meaningfully understand, critique and participate in the datafied society we live in.

First, we have circulated a survey to find out what UK citizens’ digital skills and understanding look like. While designing the survey we realised we also wanted to reach an important group not previously addressed: non-users. According to the Oxford Internet Institute’s latest internet survey (OxIS) from 2019, almost 20% of British people do not use the internet. This is a large group of people who, despite not using the internet, still leave data traces through the different agencies and organisations they engage with (benefits offices, municipal registries and so on). We need to know how they understand data, disinformation and the internet economy.

In addition, we wanted to zoom into the difference between ‘active’ and ‘passive’ users. Much recent research shows that many people do not post much, but do quietly follow different stories and news. In fact, we think a simple dichotomy between ‘active’ and ‘passive’ doesn’t quite make sense, because reading posts on Facebook and then perhaps sharing and discussing them on the family WhatsApp is not really ‘passive’.

Data participation

Following these complexities, a key issue we focus on is people’s data participation. How do people understand and engage with various platforms? What do they do, and how does that relate to their everyday activities? In particular, we want to examine how people develop a network of literacy: that is, what kind of support systems and systems of trust do people establish with their peers and communities in order to develop their digital skills and understanding?

To do this, we must take people’s digital literacy beyond the ‘digital’ – just because my phone is in my pocket doesn’t mean I’m not ‘online’. Let’s face it: we are on Facebook, Gmail and Microsoft Outlook (not by choice, if you’re in UK academia); we pay our bills online, travel with Oyster, register for petitions and elections and, of course, participate in several WhatsApp groups in several ways (you could mute the family WhatsApp, but you’re still part of it).

We are plugged into multiple digital hubs, but at the same time we’re also having a drink (or three) with our friends, going to the park, cooking dinner for our families and practicing for air guitar contests. Our lives are a web of entangled threads of experiences which cannot be neatly separated into digital versus physical.

Because we realise that a lot of nuance can be lost in surveys, we will also be conducting multiple focus groups to get a better understanding of how people think about and engage with different spheres. Our final goal is to create education materials that cater for people in different places, with different abilities, backgrounds, understandings and resources.

No one solution to ‘save us all’

Phew, now that we’ve got the education part sorted, we can all go home and celebrate with the new Pixies album, right?

Not quite. It’s important to realise that education materials are not THE one solution. Education should be part of a larger set of strategies which: 1) provide more funding for libraries and librarians; 2) regulate the digital advertising industry; 3) break up the big technology monopolies; 4) penalise disinformation strategies performed by individuals, groups or politicians (by de-incentivising sensational content and defunding extremist content creators); 5) place the responsibility for education not on individuals but on state institutions and technology companies; 6) invest in long-term training and education rather than one-off campaigns; 7) oblige technology companies to design interfaces which do not deceive and mislead, but rather inform, empower and engage people; 8) enable people to have a say in how technologies are developed; and 9) design multiple solutions that cater for people of different ages, genders, religions, races and abilities. This is just a short list, but there are many other ways to tackle the information disorders we are experiencing.

So next time someone in your WhatsApp group sends problematic material (I’m looking at you uncle Rob), it is time to pause, verify and show them gently how that information is misleading. Especially the stuff about Blink 182 latest video, regarding aliens.
