We know what censorship looks like: writers murdered, attacked or imprisoned; TV and radio stations shut down; newspapers reduced to parroting the state; journalists lost in a bureaucratic labyrinth trying to secure a licence or permit; government agencies deciding which novels, plays and poetry collections can be published; books banned or burned; access to printing materials or presses tightly restricted. All of these damage free expression, but they leave a fingerprint: something visible that can be measured. Self-censorship leaves no such mark.
When writers self-censor, there is no record: they simply stop writing or avoid certain topics, and these decisions are lost to time. Because we cannot record and document individual cases the way we can with explicit government censorship, the best we can do is identify the potential drivers of self-censorship.
In 2013, NSA whistleblower Edward Snowden revealed the extent of government surveillance programmes that enable intelligence agencies to capture the data of internet users around the world. Some of the powers revealed allow agencies to access emails in transit, files held on devices, details that document our relationships and location in real time, and data that could reveal our political opinions, beliefs and routines. Following these revelations, the UK government pushed through the Investigatory Powers Act, an audacious piece of legislation that modernised, consolidated and expanded digital surveillance powers. This expansion was opposed by civil rights organisations (including Scottish PEN, where I work), technologists, a number of media bodies and major tech companies, but on 29 November 2016 it received royal assent.
But what did this expansion do to our right to free expression?
As big data and digital surveillance are woven into the fabric of modern society, there is growing evidence that the perception of surveillance affects how different communities engage with the internet. Following the Snowden revelations, Jonathon Penney at the Oxford Internet Institute analysed traffic to Wikipedia pages on topics designated as sensitive by the Department of Homeland Security and identified “a 20 percent decline in page views on Wikipedia articles related to terrorism, including those that mentioned ‘al Qaeda,’ ‘car bomb’ or ‘Taliban.'” This finding is in line with a study by Alex Marthews and Catherine Tucker, who found a similar avoidance of sensitive topics in Google search behaviour across 41 countries. This has a significant impact on both free expression and democracy, as Penney outlines: “If people are spooked or deterred from learning about important policy matters like terrorism and national security, this is a real threat to proper democratic debate.”
But it doesn’t end with sourcing information. In a study of Facebook, Elizabeth Stoycheff discovered that when confronted with a majority opinion and aware of government surveillance, holders of minority viewpoints are more likely to “self-censor their dissenting opinions online”. If holders of minority opinions step away from online platforms like Facebook, those platforms will come to reflect only the majority view, homogenising discourse and creating a false impression of consensus. Read together, these studies document a slow erosion of the ecosystem within which free expression flourishes.
In 2013, PEN America surveyed American writers to see whether the Snowden revelations had affected their willingness to explore challenging issues and continue to write. In its report, Chilling Effects: NSA Surveillance Drives US Writers to Self-Censor, PEN America found that “one in six writers avoided writing or speaking on a topic they thought would subject them to surveillance”. But is this bigger than the US? Scottish PEN, alongside researchers at the University of Strathclyde, authored the report Scottish Chilling: Impact of Government and Corporate Surveillance on Writers to explore the impact of surveillance on Scotland-based writers, asking the question: Is the perception of surveillance a driver of self-censorship? After surveying 118 writers, including novelists, poets, essayists, journalists, translators, editors and publishers, and interviewing a number of participants, we uncovered a disturbing trend of writers avoiding certain topics in their work or research, modifying their work or refusing to use certain online tools. 22% of respondents had avoided writing or speaking on a particular topic due to the perception of surveillance, and 28% had curtailed or avoided activities on social media. Further, 82% said that if they knew the UK government had collected data about their internet activity, they would feel their personal privacy had been violated, something made more likely by the passage of the Investigatory Powers Act.
At times, surveillance appears unavoidable, and this was evident in many of the writers’ responses to whether they could take action to mitigate its risks. Without knowing how to secure themselves, writers have limited options: they either resign themselves to using insecure tools or avoid the internet altogether, cutting themselves off from important sources of information and potential communities of readers and support. Literacy concerning the use of privacy-enhancing technologies (often called PETs) is a vital part of how we protect free expression in the digital age, but as the concerns of a number of participants made clear, it is largely under-explored outside the tech community: “I think probably I need to get educated a wee bit more by someone...because I think we probably are a bit exposed and a wee bit vulnerable, more than we realise.” Another was even starker about the available alternatives: “I have no idea about how to use the Internet ‘differently’”.
When interviewed, a number of writers expressed concern about how their writing process has changed, or is in danger of changing, as a result of their awareness of surveillance. One participant who had covered the conflict in Northern Ireland in the 1970s and 80s stated that they would not cover the conflict in the same manner if it took place now; another stopped writing about child abuse when they considered what their search history might look like to someone else; a participant who had bought a copy of the Anarchist Cookbook for research shredded it after hearing of a conviction based on ownership of the book. Another participant stated: “I think I would avoid direct research on issues to do with Islamic fundamentalism. I might work on aspects of the theory, but not on interviewing people…in the past, I have interviewed people who would be called…‘subversives’.”
These modifications or avoidance strategies raise a stark and important question: What are we as readers being denied if writers are avoiding sensitive topics? Put another way, what connects the abuse of personal data by Cambridge Analytica, the treatment of asylum seekers by the Australian government on Manus and Nauru, the hiding of billions of pounds by wealthy individuals as revealed in the Panama and Paradise Papers, the deportation of members of the ‘Windrush Generation’ and the Watergate scandal? In each case, writers revealed to the world what others wanted hidden. Shadows appear less dense if writers are able to explore challenging issues and expose wrongdoing free from the coercive weight of pervasive surveillance. When writers are silenced, even by their own hand, we all suffer.
Surveillance is going nowhere – it is embedded in the fabric of the internet. If we ignore its impact on writers, we threaten the very foundation of democracy, a vibrant and cacophonous exchange of ideas and beliefs, as well as what it means to be a writer. In the words of one participant: “You can’t exist as a writer if you’re self-censoring.”