A laptop with its webcam taped over. Image: Santeri Viinamäki, CC BY-SA 4.0.

During the past five years, a trend has emerged that can be spotted in most cafés, libraries and corporate headquarters – the covering up of cameras on our personal devices. Once deemed a paranoid precaution, placing a sticker or tape over the cameras on our laptops, tablets and even smartphones has become a relatively commonplace measure. Over a third of Americans now cover at least some of the cameras on their devices, according to a survey published by YouGov last year. The webcam sticker’s rise to ubiquity can be traced back to Edward Snowden’s NSA leaks in the summer of 2013, which sparked an unprecedented public discussion of digital surveillance and privacy. Among the headlines were stories of how Snowden and Mark Zuckerberg put stickers over their own webcams, which led to a decline in public trust in the little eye above our screens. It was also the first time reports were published on how the American intelligence service, with its GUMFISH plug-in, could monitor people by hijacking their webcams. Since then, the webcam sticker has become a symbol of a growing distrust of technology and of our attempts to uphold a sense of privacy. It has come to represent a physical means of protection against an unknown evil in the tools we use every day.
Last month, a landmark piece of privacy regulation came into effect in the European Union, one that could have wide-reaching implications not just for how our data is used online but also for our privacy. Several years in the making, the General Data Protection Regulation (GDPR) arrives only months after the Cambridge Analytica revelations. Both news stories have – each in their own way – thrown new light on how little control we have over our data and devices. An extensive 2014 Pew Research survey revealed a widespread sense of powerlessness over digital privacy: while 74% of Americans believe that being in control of their data is very important to them, only 9% feel they have “a lot” of control over how much information is collected about them online. Is there any hope that we might regain a sense of trust and empowerment, or are we heading towards total alienation and apathy?
3 things about this photo of Zuck:
Camera covered with tape
Mic jack covered with tape
Email client is Thunderbird pic.twitter.com/vdQlF7RjQt
In the wake of the Cambridge Analytica scandal, there has been an uptick in digital activism aimed at reasserting control over the online services that collect our data. On the 19th of March, just two days after the first articles about the scandal appeared in the Observer and the New York Times, #DeleteFacebook began trending on Twitter. The hashtag was catapulted into the spotlight when Tesla and SpaceX founder Elon Musk deleted the Facebook pages for both of his businesses, which had around 2.6 million followers each, in support of the movement. Activists were split over #DeleteFacebook, with some criticising the movement for ignoring the many businesses and communities that rely on the social network and accusing it of being only for the privileged few.
“Most people simply can’t throw [their computers] away and move into the forest to become a self-sufficient farmer,” argues Brandt Dainow, a researcher in the Computer Science Department at Maynooth University, Ireland. Dainow knows a thing or two about how our data gets used online. In the late 1990s, he set up a software company to develop tracking software, “doing exactly the sort of stuff that, you know, everybody is now worried about.” He goes silent for a few seconds, then adds: “We just thought it was a way to get services delivered to people in a way that fitted their needs better. We never really thought about how it could be misused.”
The problem is it's being presented as a data breach by one company and it’s not – it's business as usual.
Perhaps wishing to atone for his earlier contribution to online tracking, Dainow is currently working on a PhD in data ethics. His research focuses on the concept of digital alienation, which he suggests is the fault of tech firms. “Facebook has said ‘we're an advertising company’, so the end-user is the advertiser. There is no point in Facebook spending time and money building features that don't earn it any money, or which might even reduce its profits. It's a profit-making business. And that’s the alienation. It means that the social network in which you live is not designed for you.” As long as we aren’t the end-users, that sense of alienation is unlikely to change. The huge interest in collecting personal data in order to target advertising will prevail, commodifying us, the obvious users, along the way. But the question remains: does the public actually care? “I don't think that this [Cambridge Analytica] controversy is making very much difference to people,” says Dainow. “See, the problem is it's being presented as a data breach by one company, as if it's an isolated incident. And it’s not – it's business as usual.”
A list of companies tracking you on Weather.com, according to Ghostery. Image: Jens Renner, CC BY-SA 4.0.
It’s not just websites that are collecting our data; the practice has also become integral to political organising. The Interactive Advertising Bureau, an organisation for online advertisers, published a report in 2012 praising the use of micro-targeted advertising during the Obama campaign. The conclusion: “Micro-targeting may have been instrumental for some campaigns in 2012. Micro-targeted advertising should – and almost certainly will – become part of a more data analytics-driven culture in successful political campaigns of the future – especially larger campaigns, such as the contest for the White House.” While micro-targeted political ads might not be new, we are only just beginning to understand the unscrupulous methods used to obtain the data needed for them. The data used by the Obama campaign was obtained from consenting supporters through an app. The data used by the Trump campaign, by contrast, was first obtained by Cambridge Analytica from users who had no knowledge of how it was going to be used. It remains unclear whether the practice of micro-targeting has a significant impact on elections.
The free rein that tech companies once had over collecting and tracking our data may soon be coming to an end – in the EU at least. From May 25, companies within the EU, or companies handling data from within the EU, will have to follow a new set of regulations. Some of the hidden data collectors on the internet today, like AppNexus, Datalogix and DoubleClick – the trackers snooping on my weather-checking habits above – will have to establish a direct relationship with me in order to ask my permission to track me. This means no more opt-out checkboxes and stupefyingly long privacy policies. It also means the right to have everything a company has on you erased for good. Skeptics argue that the regulations will further strengthen the monopoly of advertising companies like Google and Facebook, which already have a direct relationship with their European users and thus a higher chance of getting explicit consent to continue collecting personal information. But proponents of the legislation are equally outspoken, maintaining that it hands users agency over their information and offers a model for the rest of the world.
The internet shouldn’t be perfect. It can’t be. The physical world isn’t either.
Control over our data might not be enough, however. For one, it does not address a far larger problem – the deluge of misinformation online. Technological advances are making it increasingly hard for us to distinguish between what information is false and what isn’t. Last year, researchers at the University of Washington developed an algorithm that can transpose audio onto a 3D model of Barack Obama’s face, giving them the power to create hyper-realistic fake videos of the former US president. The potential risks of such seamlessly doctored videos are alarming. In a recent interview with Buzzfeed, Aviv Ovadya, chief technologist for the University of Michigan’s Center for Social Media Responsibility, outlined a scenario where a fake video of Kim Jong-un declaring nuclear war reaches the person in charge of pushing the button to retaliate. “It doesn’t have to be perfect — just good enough to make the enemy think something happened that it provokes a knee-jerk and reckless response of retaliation,” he told the website.
Despite his warnings of a possible “info-apocalypse”, when I speak to Ovadya he is relatively optimistic about the possibility of positive change. “The internet shouldn’t be perfect. It can’t be. The physical world isn’t either. Facebook and Google developed from a state of nothing, and though they haven’t done that good of a job, they haven’t done that bad either. Ethically they’re not where they should be, but they’re not very far away either. I think they could get there.” Ovadya believes that both companies could eventually get to the point where they are serving the public good. They have acknowledged their role in the fake news crisis, for a start. For Ovadya, change must come from the companies themselves, as he is concerned that external activism cannot influence companies quickly enough as new threats emerge. “Social movements are often slow or coarse, but the problems of technology move very fast. I’m not sure that we can count on public movements to drive nuanced change.”
Although GDPR has its limitations, for now Europeans have gained more choice and thus more power to influence the terms on which they use their technology. But our relationship with data is more complicated than ever before, and it is changing faster than legislation can keep up. Until it does, it might be better to keep our webcams covered.
Correction, June 12 2018: the article originally suggested that Aviv Ovadya was skeptical about the impact of activism. In fact, Ovadya said that he was concerned about whether activism can influence tech companies rather than its overall impact.