Image: Facebook boss and founder Mark Zuckerberg. Credit: NurPhoto/PA Images.
Ponder this: in 2004 a Harvard sophomore named Zuckerberg sits in his dorm room hammering away at a keyboard. He’s taking an idea he ‘borrowed’ from two nice-but-dim Harvard undergraduates and writing the code needed to turn it into a social-networking site. He borrows $1,000 from his friend Eduardo Saverin and puts the site on a web-hosting service. He calls it ‘The Facebook’.
Fourteen years later, that kid has metamorphosed into the 21st-century embodiment of John D Rockefeller and William Randolph Hearst rolled into one. In the early 20th century, Rockefeller controlled the flow of oil while Hearst controlled the flow of information. In the 21st century Zuckerberg controls the flow of the new oil (data) and the information (because people get much of their news from the platform that he controls). His empire spans more than 2.2bn people, and he exercises absolute control over it — as a passage in the company’s 10-K SEC filing makes clear. It reads, in part:
“Mark Zuckerberg, our founder, Chairman, and CEO, is able to exercise voting rights with respect to a majority of the voting power of our outstanding capital stock and therefore has the ability to control the outcome of matters submitted to our stockholders for approval, including the election of directors and any merger, consolidation, or sale of all or substantially all of our assets. This concentrated control could delay, defer, or prevent a change of control, merger, consolidation, or sale of all or substantially all of our assets that our other stockholders support, or conversely this concentrated control could result in the consummation of such a transaction that our other stockholders do not support… ”(1).
Such concentration of corporate control is unusual in large public corporations (2) and raises questions about corporate governance and social responsibility. These problems are particularly acute in Zuckerberg’s empire because he wields not just the economic power of a monopolist (in the field of social networking, Facebook has no serious competitor) but also operates a business model that affects democratic processes and may even have influenced the outcomes of the 2016 Brexit referendum in the UK and presidential elections in the US and elsewhere.
This is not to assert that Zuckerberg himself pursues political objectives, at least at the moment. Rather, the claim is (a) that the computerised, targeted-advertising system Facebook has constructed can be (and has been) exploited by political actors seeking to target political or ideological messages at users whose data-profiles suggest that they may be receptive to those messages; and (b) that the outcomes of such ‘weaponisation’ of social media may be at best anti-social and at worst anti-democratic.
The most striking thing about the discovery of political exploitation of social media in 2016-17 was the initial incredulity of Zuckerberg and his executives that such a thing could have happened. This suggests some or all of the following: a very high degree of political naiveté; a serious case of wilful blindness; and/or a cynical determination to avoid public discussion of the root cause of the trouble — the business model of the corporation and the responsibilities that accompany the power that it confers upon its owner.
The business model: surveillance capitalism
The five most valuable corporations in the Western world at present — Apple, Alphabet (owner of Google), Amazon, Microsoft and Facebook — are all digital enterprises. Three of them — Apple, Amazon and Microsoft — have relatively conventional business models: they produce goods and/or services for which customers pay. The other two — Google and Facebook — provide services that are free in return for the right to extract and monetise the personal information and data-trails of their users. The data thus extracted, refined and aggregated are then deployed to enable advertisers — the actual customers of the companies — to target advertisements at users. This is often summarised in the adage ‘if the service is free, then you are the product’.
Google and Facebook operate what economists call two-sided markets: in their cases, revenue from customers on one side (advertisers) subsidises users on the other side. In recent years, the term surveillance capitalism (3) has been coined to describe this business model. Although Google and Facebook portray themselves as tech companies, it’s sometimes more illuminating to regard them as extractive enterprises like oil or mining companies. The latter extract natural resources from the earth, which they then refine, process and sell to customers. Facebook and Google do something analogous, but the resources they extract, refine and monetise are purely digital — the data-trails generated by their users’ activities on their platforms.
There is, however, one radical difference between the oil/mining enterprises and the two digital giants. Whereas reserves of natural resources are, ultimately, finite (4), reserves of the ‘resources’ extracted by Google and Facebook are, in principle, virtually infinite because they are created by what the industry calls user engagement — i.e. users’ online activity. The level and volume of this engagement is staggering; every 60 seconds on Facebook, for example, 510,000 comments are posted, 293,000 statuses are updated, and 136,000 photos are uploaded (5).
Since user engagement is what produces monetisable data-trails, the overriding imperative of the business model is to continually increase engagement. Accordingly, the companies deploy formidable technical and design resources to persuade users to spend more time on their platforms and to engage with them more intensively (6).
Much of this supply-side design is informed by applied psychological research on human behaviour — the same kind of research that informs the design of slot machines (7). Some of the services are addictive by design (8) while others exploit known human fallibilities (9) (for example, by using default settings like autoplay on videos — which users can change but generally don’t (10)). And the companies run thousands of A/B experiments a day, in real time, to determine which presentational tweaks most effectively increase user engagement. In a metaphorical sense, therefore, users of social media are unwitting rats in Skinnerian mazes created for their delectation. This is what leads some commentators to speak of social media as a ‘dopamine economy’ (11).
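The mechanics of such an experiment are simple in principle: show one group of users the existing design, show another the tweak, and test whether the difference in engagement is statistically significant. A minimal sketch, using a standard two-proportion z-test (all metric names and numbers here are invented for illustration; nothing about Facebook's internal tooling is implied):

```python
import math

def two_proportion_z_test(engaged_a, n_a, engaged_b, n_b):
    """Two-sided z-test for a difference in engagement rates
    between variant A (control) and variant B (the tweak)."""
    p_a, p_b = engaged_a / n_a, engaged_b / n_b
    p_pool = (engaged_a + engaged_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical experiment: does autoplay lift the engagement rate?
z, p = two_proportion_z_test(engaged_a=4_210, n_a=100_000,   # control
                             engaged_b=4_630, n_b=100_000)   # autoplay on
if p < 0.05:
    print(f"ship the tweak (z={z:.2f}, p={p:.6f})")
```

Run at the scale of thousands of such tests a day, even tiny lifts in engagement compound into large effects on total time spent.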
On the demand side, human psychology and sociality play important roles in keeping the machine humming. Humans are famously subject to a wide range of cognitive biases (12), which social media exploit. Well-known examples include confirmation bias (the tendency to search for, interpret, focus on and remember information in a way that confirms one's preconceptions); and hyperbolic discounting (the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs). On the sociality side there is homophily — the tendency of individuals to associate and bond with similar others.
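Hyperbolic discounting has a standard mathematical form (Mazur's model), in which a reward's subjective value falls off as V = A / (1 + kD) for amount A, delay D and a discount rate k. A small sketch (the k value and the amounts are illustrative, not empirical estimates) shows why it produces the characteristic preference reversal that notification-driven services can exploit:

```python
def hyperbolic_value(amount, delay_days, k=0.05):
    """Mazur's hyperbolic discounting: V = A / (1 + k*D).
    k is an illustrative discount rate, not an empirical estimate."""
    return amount / (1 + k * delay_days)

# Up close, a smaller-sooner reward beats a larger-later one...
now = hyperbolic_value(50, delay_days=0)        # value 50.0
later = hyperbolic_value(100, delay_days=30)    # value ~40.0
assert now > later

# ...but viewed from far off, the larger reward wins: preference reversal.
far_small = hyperbolic_value(50, delay_days=330)    # ~2.9
far_large = hyperbolic_value(100, delay_days=360)   # ~5.3
assert far_large > far_small
```

The immediate ping of a notification is the smaller-sooner reward; whatever the user had planned to do instead is the larger-later one.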
What the world has belatedly woken up to is the realisation that we are in a kind of perfect storm created by the confluence of a number of powerful forces: network effects, which lead to the emergence of global monopolies; the business model of surveillance capitalism, with its insatiable demands for increased user engagement; astute deployment of applied psychology to design compulsive or addictive apps, devices and services; cognitive biases which are part of human psychology; powerful tendencies to cluster together (probably an inheritance from early human social groups), which lead to digital echo chambers online; and the weaponisation of social media by political actors.
User-generated content: a double-edged sword
When the internet first went mainstream in the mid-1990s it was hailed as a democratising technology that would liberate people’s innate creativity. Instead of being passive consumers of content created by corporations, ordinary people would be able to bypass the editorial gatekeepers of the analogue media ecosystem. These possibilities of the ‘internetworked’ future were memorably celebrated in The Wealth of Networks, a landmark book by the Harvard scholar Yochai Benkler published in 2006 (13).
Although the technology did (and does) possess all the empowering, democratising potential celebrated by Benkler, in fact only a relatively small minority made use of it in creative ways. This changed with the arrival of Facebook and YouTube — services that made it easy for users to upload content. Much of the resulting content was unremarkable (and much of it infringed copyright), but from early on it was clear that social media platforms effectively provided a mirror to human nature and some of what appeared in that mirror was unpleasant and sometimes horrific (14). Furthermore, it transpired that some of this extreme or otherwise problematic content increased user-engagement — which meant that it generated more monetisable data for the platforms hosting it.
Initially, Facebook’s response to this was relaxed: the onus was placed on users to flag unacceptable or problematic content for possible review by the platform’s owners. This casual attitude was reinforced by Section 230 of the 1996 US Communications Decency Act, which absolved internet service providers of legal liability for content hosted on their platforms (15). But in the wake of a series of developments, including the controversies about the weaponisation of social media by political actors in 2016 and 2017, revelations of the role that Facebook services had played in ethnic cleansing in Myanmar and Sri Lanka, the company’s failure to remove hate-speech and conspiracy theories, and a raft of other scandals (16), this relaxed posture had become untenable by 2018, and the company was struggling — with questionable efficacy — to contain the abuses that followed from running a platform on which anyone could publish whatever they wanted, whenever they wanted.
A window into people’s souls
The surveillance capitalism companies have become very good at giving users what they want — which is one reason why some ruefully admit that they find them addictive. They are able to do this because they have garnered an astonishing amount of revealing data about those users and their likely interests, concerns and needs.
As far as Facebook is concerned, the key insight was the discovery in 2013 of how revealing even a low level of user engagement can be. Cambridge University researchers demonstrated something that the company probably already knew, namely that Facebook likes could be used to “automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender” (17).
Insights available from a user’s behaviour on the site were eventually supplemented by (a) information gleaned from tracking Facebook’s users as they traversed the wider Web and (b) data about users purchased from external sources (e.g. credit-rating agencies) to construct data-profiles which reportedly (18) ran to 98 data-points per user.
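The Cambridge study's core technique was to treat each user's likes as a feature vector and fit a predictive model against a known attribute. A toy sketch of the idea, using plain logistic regression (the study itself used dimensionality reduction over real like-data; every page name, user row and attribute here is invented for illustration):

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Plain stochastic-gradient logistic regression over a binary
    like-matrix. (Kosinski et al. used SVD-reduced likes plus
    regression; this toy version skips the reduction step.)"""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            err = 1 / (1 + math.exp(-z)) - yi
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, x):
    """Probability that the user has the target attribute."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1 / (1 + math.exp(-z))

# Toy like-matrix: rows = users, columns = pages liked (1 = liked).
# Columns (invented): country_music, bourbon, nascar, npr, yoga
X = [[1, 1, 1, 0, 0],
     [1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1],
     [0, 0, 0, 1, 1],
     [0, 0, 0, 1, 0],
     [0, 1, 0, 1, 1]]
y = [1, 1, 1, 0, 0, 0]   # invented binary attribute to predict

w, b = train_logistic(X, y)
new_user = [1, 0, 1, 0, 0]   # likes country music and nascar
print(round(predict(w, b, new_user), 2))
```

The point is not the algorithm's sophistication but the data: with hundreds of likes per user and millions of users, even simple models of this kind become remarkably accurate.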
In 2007, Facebook made a significant innovation that would later have major implications for both its evolution and for its role in democratic disruption. The company suddenly offered itself as a platform on which third-party developers could run apps. “People should build an application on the Facebook platform because it provides a new kind of distribution on the internet,” said a senior company executive at the time. “Really, what has been lacking in all of the other operating systems and platforms that have ever been created is the ability to really access people (19).”
But whereas the World Wide Web platform was open and uncontrolled, the Facebook platform was proprietary and controlled by the company. The strategic goal of the platform move was to expand Facebook’s global reach and penetration to the point where it — rather than the open web — would effectively become the internet as far as most people were concerned. From the point of view of developers, the attraction was that their apps could exploit the user data that Facebook had accumulated. It was this decision to allow third-party apps to run on its platform, coupled with a failure to adequately police what those apps were doing with user data, that eventually led to the Cambridge Analytica scandal (20) in 2018.
The targeting engine
As we observed, Google’s and Facebook’s users are not their customers. That role is played by advertisers who use the automated engines developed by the companies to identify targets for their commercial messages. Consequently, the most revealing insights into how surveillance capitalism works are obtained not by being a user but by going in as a customer, i.e. an advertiser.
Both companies have constructed automated engines for enabling advertisers to identify types of audiences they wish their messages to reach. In operation and design, these engines are impressive. The Facebook one is particularly user-friendly (21), gently nudging the customer through the various steps needed to identify what the company calls custom audiences and helpfully suggesting categories of user that one may not initially have thought about. As one critic put it:
“If I want to reach women between the ages of 25 and 30 in zip code 37206 who like country music and drink bourbon, Facebook can do that. Moreover, Facebook can often get friends of these women to post a ‘sponsored story’ on a targeted consumer’s news feed, so it doesn’t feel like an ad” (22).
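That kind of audience selection is, at bottom, conjunctive filtering over profile attributes. A minimal sketch of the idea (the field names, profile data and function are invented for illustration and bear no relation to Facebook's actual advertising API):

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Invented stand-in for an advertiser-visible data-profile."""
    age: int
    gender: str
    zip_code: str
    interests: set = field(default_factory=set)

def custom_audience(users, min_age, max_age, gender, zip_code, interests):
    """Select every user matching all criteria: a toy version of the
    conjunctive filters an ads interface exposes."""
    return [u for u in users
            if min_age <= u.age <= max_age
            and u.gender == gender
            and u.zip_code == zip_code
            and interests <= u.interests]   # required interests all present

users = [
    UserProfile(27, "female", "37206", {"country music", "bourbon"}),
    UserProfile(27, "female", "37206", {"jazz"}),
    UserProfile(41, "male", "37206", {"country music", "bourbon"}),
]
audience = custom_audience(users, min_age=25, max_age=30, gender="female",
                           zip_code="37206",
                           interests={"country music", "bourbon"})
print(len(audience))  # → 1
```

The real engine adds look-alike expansion, bidding and delivery optimisation on top, but the underlying primitive is this: intersect attributes until you have exactly the audience you want.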
But one doesn’t have to be a traditional firm doing commercial advertising to use the Facebook engine. The machine is at the disposal of anyone who wishes to direct almost any message at targeted audiences. What seems to have taken Facebook by surprise is that some of the entities that chose to use its system — including at least one foreign power — were in the business of sending not commercial but ideological or political messages to selected categories of users (23).
And it looks as though the use of social media is a highly cost-effective way of doing this. According to the New York Times, Russian agents intending to sow discord among American citizens disseminated inflammatory posts that reached 126m users on Facebook, published more than 131,000 messages on Twitter and uploaded more than 1,000 videos to YouTube (24).
A striking demonstration of the effectiveness of the Facebook targeting engine was provided by an experiment conducted by the news website ProPublica in September 2017. The researchers paid the company $30 to place three promoted posts in the news feeds of Facebook users who — according to the service’s profiles of them — had expressed interest in the topics ‘Jew hater’, ‘How to burn jews’, or ‘History of why jews ruin the world’. The Facebook engine approved all three ads within 15 minutes (25).
Facebook is Zuckerberg’s monster. Unlike Frankenstein, he is still enamoured of his creation, which has made him richer than Croesus and the undisputed ruler of an empire of 2.2bn users. It has also given him a great deal of power, together with the responsibilities that accompany it. But it's becoming increasingly clear that his creature is out of control, that he's uneasy about the power, and that he has few good ideas about how to discharge his responsibilities.
A good illustration of this was provided by a revealing interview (26) that the Facebook boss gave to the tech journalist Kara Swisher in the summer of 2018. The conversation covered a lot of ground but included a couple of exchanges which spoke volumes about Zuckerberg's inability to grasp the scale of the problems that his creature now poses for society.
One of them – obviously – is misinformation or false news. “The approach that we’ve taken to false news,” said Zuckerberg, “is not to say, you can’t say something wrong on the internet. I think that that would be too extreme. Everyone gets things wrong, and if we were taking down people’s accounts when they got a few things wrong, then that would be a hard world for giving people a voice and saying that you care about that.”
Swisher then asked him about the alt-right site Infowars – whose Facebook page had more than 900,000 followers and which regularly broadcast falsehoods and conspiracy theories, including a claim that the Sandy Hook mass shootings (27) never happened. But Infowars continued to thrive on Facebook, even though Zuckerberg agreed that the Sandy Hook story was false. Was this because “everyone gets things wrong”, or because of those 900,000 followers? Swisher didn't ask, but a Channel 4 undercover investigation (28) of the Irish firm to which Facebook had outsourced content moderation suggested that objectionable content on Facebook pages with large followings could not be deleted by the traumatised serfs in Dublin; instead such decisions had to be referred up the management chain.
The most revealing part of the Swisher interview, however, concerned Holocaust denial – a topic that Zuckerberg himself brought up. “I’m Jewish,” he said, “and there’s a set of people who deny that the Holocaust happened. I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong, but I think it’s hard to impugn intent and to understand the intent.”
If you think this is weird, then join the club. I can see only three explanations for it. One is that Zuckerberg is a sociopath, who wants to have as much content – objectionable or banal – available to maximise user engagement (and therefore revenues), regardless of the societal consequences. A second is that Facebook is now so large that he sees himself as a kind of governor with quasi-constitutional responsibilities for protecting free speech. This is delusional: Facebook is a company, not a democracy. Or thirdly, and most probably, he is scared of being accused of being biased in the polarised hysteria that now grips American (and indeed British) politics.
It's as if he's suddenly become aware of the power that his monster has bestowed upon him. As the New York Times journalist, Kevin Roose, put it on The Daily podcast (29), Zuckerberg's increasingly erratic behaviour could be a symptom of something bigger. “He built a company that swallowed communication and media for much of the world,” observed Roose. “And now we're seeing him back away from that... The problem with ruling the world is that you then have to govern and that's not what it seems he wants to do.” In which case, who will?
This is an edited version of a chapter from the new book Anti-Social Media: The Impact on Journalism and Society, edited by John Mair, Tor Clark, Neil Fowler, Raymond Snoddy and Richard Tait, available from Abramis priced £19.95. Email [email protected] to order a copy.
- https://d18rn0p25nwr6d.cloudfront.net/CIK-0001326801/80a179c9-2dea-49a7-a710-2f3e0f45663a.pdf. The relevant passage continues: “In addition, Mr. Zuckerberg has the ability to control the management and major strategic investments of our company as a result of his position as our CEO and his ability to control the election or replacement of our directors. … As a stockholder, even a controlling stockholder, Mr. Zuckerberg is entitled to vote his shares, and shares over which he has voting control as governed by a voting agreement, in his own interests, which may not always be in the interests of our stockholders generally.”
- Though not unknown in Silicon Valley, where charismatic founders use multi-tier shareholding arrangements to ensure that they retain overall control of their creations. This was the case with Google (now Alphabet), for example, and is motivated at least partly by a desire to insulate founders from the short-term pressures of Wall Street and enable them to take longer-term strategic views of their enterprises.
- Shoshana Zuboff, “The Secrets of Surveillance Capitalism”, Frankfurter Allgemeine Zeitung, 5 March, 2016. http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html
- Pedantic point: the key word is ‘ultimately’. The level of available reserves of natural resources is a function of market-price, location and other factors. Thus the level of oil reserves depends on global oil prices. If the price is high, then it will be economically feasible to extract oil from fields which are relatively harder to work.
- Current estimates put the time the average Facebook user spends on the platform at 20 minutes per day. (https://zephoria.com/top-15-valuable-facebook-statistics/) Some estimates are higher.
- Natasha Dow Schüll, Addiction by Design: Machine Gambling in Las Vegas, Princeton, 2012.
- See Nir Eyal, Hooked: How to Build Habit-Forming Products, Penguin/Portfolio, 2014. Hilary Andersson, “Social media apps are 'deliberately' addictive to users”, BBC News, 4 July, 2018, https://www.bbc.co.uk/news/technology-44640959
- Mallory Locklear, “Sean Parker says Facebook ‘exploits’ human psychology”, Engadget, 9 November, 2017. https://www.engadget.com/2017/11/09/sean-parker-facebook-exploits-human-psychology/
- John Naughton, “More choice on privacy just means more chances to do what’s best for big tech”, Observer, 8 July, 2018. https://www.theguardian.com/commentisfree/2018/jul/08/more-choice-privacy-gdpr-facebook-google-microsoft
- John Naughton, “How Facebook became a home to psychopaths”, Observer, 23 April, 2017. https://www.theguardian.com/commentisfree/2017/apr/23/how-facebook-became-home-to-psychopaths-facebook-live
- John Naughton, “How two congressmen created the internet’s biggest names”, Observer, 8 January, 2017. https://www.theguardian.com/commentisfree/2017/jan/08/how-two-congressmen-created-the-internets-biggest-names
- For example the use of Facebook Live to stream horrific acts of violence, bullying and worse. See John Naughton, “How Facebook became a home to psychopaths”, Observer, 23 April, 2017. https://www.theguardian.com/commentisfree/2017/apr/23/how-facebook-became-home-to-psychopaths-facebook-live
- Michal Kosinski, David Stillwell, and Thore Graepel, “Private traits and attributes are predictable from digital records of human behavior”, PNAS, April 9, 2013. 110 (15) 5802-5805; https://doi.org/10.1073/pnas.1218772110
- https://www.theguardian.com/news/series/cambridge-analytica-files. See also Kevin Roose, “How Facebook’s Data Sharing Went From Feature to Bug”, New York Times, 19 March, 2018. https://www.nytimes.com/2018/03/19/technology/facebook-data-sharing.html
- Jonathan Taplin, Move Fast and Break Things: How Facebook, Google and Amazon Have Cornered Culture, and What it Means for All of Us, Macmillan, 2017, p.143.
- Dipayan Ghosh and Ben Scott, “Russia's Election Interference Is Digital Marketing 101”, The Atlantic, 19 February, 2018, https://www.theatlantic.com/international/archive/2018/02/russia-trump-election-facebook-twitter-advertising/553676/
- Mike Isaac and Daisuke Wakabayashi, “Russian Influence Reached 126 Million Through Facebook Alone”, New York Times, 30 October, 2017. https://www.nytimes.com/2017/10/30/technology/facebook-google-russia.html
- Julia Angwin, Madeleine Varner and Ariana Tobin, “Facebook Enabled Advertisers to Reach ‘Jew Haters’”, ProPublica, 14 September, 2017. https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters