
The internet's fading promise

About the author
Becky Hogge is a freelance writer and broadcaster. She is the former executive director of the Open Rights Group, a London-based campaigning organisation that fights for civil and consumer rights in the digital age. She was previously the managing editor, and then technology director, of openDemocracy.net. She blogs here, and co-presents acclaimed London radio show Little Atoms. Her first book, Barefoot into Cyberspace, was published in summer 2011

It seems like old news. Two years ago, for openDemocracy, I reported on the release of the Open Net Initiative (ONI)'s investigation into internet censorship in China (see "The great firewall of China", 20 May 2005). Back then, I was able to use the words of John Gilmore - "the internet interprets censorship as damage and routes around it" - as a starting-point from which to watch in awe as his maxim was slowly disproved. Today, everybody's at it, and the internet is starting to look more like a tool of control than one of freedom.

What's new is that the Open Net Initiative's latest body of work, released on 18 May 2007, is the largest sample they've attempted to date - a study of forty-one countries across the world. Of those forty-one, more than half are engaging in content-filtering. Twenty-five countries, from Azerbaijan to Yemen, are found to be blocking different types of content in a variety of areas - from political (related to dissent, free expression, human or minority rights, or internal conflicts) to what the ONI terms "social" content (related to sex, drugs and hate-speech).

The ONI is a partnership between four organisations - the University of Toronto's Citizen Lab, the University of Cambridge's Advanced Network Research Group, the Oxford Internet Institute and the Berkman Center for Internet and Society at Harvard Law School. One of its aims is to establish a rigorous methodology around tracking content-filtering. The field is fraught with uncertainty and paranoia. Witness the story of Wang Xiaofeng, who in March 2006 closed down his own blog, Massage Milk, in a censorship hoax aimed at shining the west's prying light back in its face (and getting a bit of free publicity). The ONI regularly receives reports of suspected filtering - but in order for a country to make it into their report, it must undergo substantial network interrogation.

The state of control

The ONI uses proxy servers, laptops taken into the countries it hopes to study, and in-country black boxes to test the access status of lists of potentially censored urls over a sustained period. On this hardware it runs a mixture of existing tools - many more often employed by hackers, criminals and spies - and homegrown software designed, for example, to determine at what point in the network a website is being blocked.
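The basic shape of such a test can be sketched in a few lines. The following is a minimal, illustrative probe only - the url list is hypothetical, and the ONI's real methodology involves far larger, country-specific test lists and repeated runs from vantage points both inside and outside the country being studied:

```python
import urllib.request
import urllib.error

# Hypothetical test list; real filtering studies use large,
# country-specific lists of politically and socially sensitive sites.
TEST_URLS = [
    "http://example.org/news",
    "http://example.org/forum",
]

def probe(url, timeout=10):
    """Fetch a url and classify the outcome in broad strokes."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return ("ok", resp.status)
    except urllib.error.HTTPError as e:
        return ("http_error", e.code)          # e.g. a 403 block page
    except urllib.error.URLError as e:
        return ("unreachable", str(e.reason))  # DNS failure, reset, timeout

def run_tests(urls):
    # Repeating this from inside and outside a country, over time,
    # is what lets differences in reachability be attributed to
    # deliberate filtering rather than ordinary network faults.
    return {url: probe(url) for url in urls}
```

A single failed fetch proves nothing; it is the sustained, comparative pattern of failures that the ONI treats as evidence of filtering.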

Some of the countries surveyed by the ONI - such as India and Morocco - filter in a discrete set of instances which have to do with local security or conflict issues. Others - like Azerbaijan, Jordan and Tajikistan - have, according to the ONI report, "exhibited only a few isolated incidents of state-sponsored filtering", mainly around "political topics". Oman, Singapore and Sudan filter only in the social category, with Yemen and South Korea filtering in social and security contexts, the latter presumably blocking propaganda emanating from North Korea.

China, Iran, Ethiopia, Burma, Pakistan, Saudi Arabia, Syria, Tunisia and the United Arab Emirates are the countries that block content in all three categories. Of these, Iran and Tunisia filter the most pervasively, which may come as a surprise to those used to talking about the "great firewall of China". The study also notes Iran's imposition of limits on internet speed, a tech-savvy move ostensibly made to curb its citizens' ability to download western cultural content via the web.

The techniques employed by these countries range from blocking a website's IP address, to tampering with the DNS servers that translate a website's url into that IP address, so that the web browser is taken somewhere different. So, when internet surfers in South Korea type in a url that should take them to a North Korean website, they are automatically redirected to a site run by the South Korean police. As if that weren't scary enough, the site is programmed to flash users' own IP addresses back at them, and warn them that the authorities know where they are.
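DNS tampering of this kind is detectable in principle: if the answer a local resolver gives for a domain is not among the addresses the site is known to use, the response may have been rewritten. A minimal sketch, using only the standard library and an assumed list of known-good addresses:

```python
import socket

def resolve(hostname):
    """Ask the system's configured DNS resolver for a host's IPv4 address."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None  # name did not resolve at all

def looks_redirected(hostname, expected_ips):
    # If the local resolver's answer is not among the addresses the
    # site is known to use, the DNS response may have been tampered
    # with to steer the browser elsewhere (e.g. a police warning page).
    # expected_ips is an assumed, externally obtained set of addresses.
    answer = resolve(hostname)
    return answer is not None and answer not in expected_ips
```

In practice a serious study would also compare answers from multiple resolvers inside and outside the country, since sites legitimately change addresses.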

Other countries engage in url filtering, usually carried out by off-the-shelf commercial products such as San Jose-based Secure Computing's "SmartFilter". These can block individual urls or url paths, as well as entire country-code top-level domains (ccTLDs): Syria blocks all websites ending in .il, Israel's ccTLD. Yet other states (such as China) inspect http requests in order to block certain keywords.
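The logic behind such filters is simple to illustrate. The following sketch - with purely illustrative rule lists, not anything drawn from a real deployment - shows both techniques: matching a hostname against a blocked ccTLD, and scanning the request path for banned keywords:

```python
from urllib.parse import urlparse

# Illustrative rules only; commercial products ship vendor-maintained lists.
BLOCKED_TLDS = {".il"}          # e.g. Syria blocking Israel's ccTLD
BLOCKED_KEYWORDS = {"dissent"}  # e.g. keyword inspection of http requests

def is_blocked(url):
    parsed = urlparse(url)
    host = parsed.hostname or ""
    # ccTLD blocking: match the end of the hostname.
    if any(host.endswith(tld) for tld in BLOCKED_TLDS):
        return True
    # Keyword blocking: scan the requested path and query string,
    # the way an in-path filter inspects http traffic.
    request_line = (parsed.path + "?" + parsed.query).lower()
    return any(kw in request_line for kw in BLOCKED_KEYWORDS)
```

A whole-ccTLD rule is extremely blunt - one string comparison takes an entire country's domain space offline - which is part of why the ONI treats it as a distinct category of filtering.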

The needs of others

The ONI report's launch on 18 May, hosted at the Oxford Internet Institute, was attended by representatives from all four of the partner organisations, together with in-country partners (such as Human Rights Watch) and other stakeholders. What struck me most on the day was not the rise in content-filtering across the world, alarming though it is, but the pervasiveness of content-filtering in the west.

One of four interactive maps produced by the ONI on its website shows the United States, Canada and substantial parts of Europe (including Germany, France and the United Kingdom) engaged in "selective" levels of content-filtering along the axis of social content. In the US and Canada, the study notes that most "filtering" occurs in the form of takedown notices rather than technical blocking, although so-called "government-facilitated industry self-regulation" in Canada has led to technical blocking of child pornography and hate-speech.

In Europe, according to the report, filtering "is the norm, not the exception". As with Canada, whether or not this filtering is private or mandated by the state is somewhat blurred, as state actors often use the threat of new legislation to persuade ISPs to conform to "voluntary" schemes. What gets filtered includes child pornography and content that falls foul of European hate-speech and defamation laws. Illegal gambling and copyright infringement have also been proposed as activities that could legitimately lead to content-filtering.

So if even "we're" up to it, is this the end of the unfettered internet? For those who know how to use the right tools, it's hard to see how. The protocols which power the network of networks are such that filtering by any of the means so far discovered by the ONI can be circumvented by those dedicated enough to try. Unfortunately, they are more likely to be the ones "we" might justifiably try to contain. It is the people for whom the internet held so much promise - the dispossessed, the under-represented and the over-burdened - who are slowly seeing that promise starting to fade.

