The Online Safety Bill endangers us by ignoring digital threats to democracy
In Russia, Putin's propaganda reminds us of the risk of online manipulation. Why are our MPs failing to address it?
Russia’s invasion of Ukraine and its accompanying propaganda wars are a stark reminder that, as Aeschylus observed in the fifth century BC, the first casualty of war is truth.
And as Russia attempts to manage the minds of its people with a clampdown on ‘fake news’ – choking online information flows and criminalising free expression – the direct connection between virtual and real-world harms comes into sharp focus. Unlike in Vegas, what happens online does not stay online, although traces of it may linger there forever. Peace and democracy rely on freedom of information and the right to form and hold our opinions freely. Increasingly, these freedoms are both enjoyed and destroyed online.
States around the world are grappling with the dangers we are exposed to online. In the UK, the Online Safety Bill, which had its second reading in Parliament this week, recognises the need to protect “content of democratic importance”. But it fails to recognise that content itself is often not the problem. The bigger threat comes from systems that undermine democracy by curating the information flows that mould our worldview.
A few weeks ago, I received a video on WhatsApp from a friend in Uganda, who asked me if it was real. It appeared to be a BBC newsflash reporting that the Russians, against the background of tensions in Ukraine, had launched a nuclear attack on London. It was not real. But it was very realistic. It turned out the video had been made in 2018, as part of a corporate emergency response exercise. There was nothing inherently damaging about the content and my friend was not sharing it with malicious intent. The video hadn’t been designed to undermine peace and democracy, but in the context of the war, its widespread circulation may well have done so.
The right to freedom of opinion, including the right to keep our opinions to ourselves and to form our opinions free from manipulation, is protected absolutely in international human rights law as long as those opinions remain inside our heads. But the ways in which online information is managed, targeted and amplified pose serious threats to that right in practice.
The use of propaganda by states to manipulate the worldview of entire populations, at home and abroad, is not new. In the First World War, the British Ministry of Information developed a structured approach to propaganda, which was honed by Nazi Germany with the technological tools of mass communications in the build-up to the Second World War. But what has changed, with 21st-century technology, is the ability to personalise messages to target individual minds on a massive scale around the world.
In the wake of the Cambridge Analytica scandal, the UK’s Information Commissioner’s Office flagged the risks of political parties’ unchecked use of data in its 2018 report ‘Democracy disrupted?’. This data is used not only to understand voters, but to influence them directly and surreptitiously – that’s what makes it so valuable. But politicians of all persuasions preferred to turn a blind eye to the issue when the Data Protection Act was passed that same year. It seems they are now preparing to do the same with the underlying problems of online safety and its impact on democracy.
Cambridge Analytica has shut down. But the potential for political behavioural micro-targeting – tailoring the information citizens receive online to their personal foibles, to press their individual psychological buttons – is enshrined in UK law. And the threat of Russian or other hostile-state online interference with our personal visions of the world is well documented. What could be more harmful than the ability to hijack the minds of a population through the targeted manipulation of their information environment? The data, and the content, matter because they provide a portal to our inner lives.
And it’s not only about war. The Online Safety Bill does not address the underlying problem of the manipulative power of online information flows, which affect so many aspects of our lives. That power is what makes Instagram toxic for teenage girls’ mental health and what persuades people that COVID-19 is a hoax or that the world is flat. Propaganda is not found in the syntax of a single statement; it is in the control of the delivery systems, the atmosphere and, ultimately, the emotions of the population. The problem is not with individual pieces of content but with how, by whom and to what end that content is managed.
The Online Safety Bill mentions only two rights explicitly – privacy and freedom of expression. But as campaigners like Article 19 have pointed out, the focus on content creates a serious threat for those rights too. Political hostility to the wider human rights project makes it difficult to discuss the bigger picture around online harms, but we cannot afford to look away. Rather than making the UK a beacon for online safety, the new bill fails to address the underlying issues, while exacerbating the risks to human rights by creating a system of outsourced censorship.
If we don’t want to lose our minds, we need to think about the systems, not the symptoms, of online harms. And we need regulation that sets out to protect all our human rights. The real-life horrors of war in Ukraine are a stark reminder that we need to think fast. We cannot afford to be left scrambling over parochial political culture wars while the big boys get on with the business of mind control.