Over the last few months, discussion of ‘post-truth’ has been everywhere. Questions about the status of truth and fact in political debate have acquired a pre-eminence in commentaries on the complex political developments of 2016. These stories vary, but they share an understandable concern with the far-reaching consequences of a misinformed electorate.
Within this debate, algorithms have taken on a surprising presence, often blamed for affording the conditions in which lies and manipulated ‘facts’ can thrive. Alongside the emergence of notions of the ‘post-truth’ era, algorithms have grown in public profile and notoriety.
As many accounts of the last year have made clear, social media now play a powerful role in shaping the circulation of news and information. The recent reports and wide-ranging reflection concerning the sharing of ‘fake news’ on Facebook are part of broader claims that social media provide the perfect conditions for the propagation and dissemination of misinformation. As we know, it was even suggested that this sharing of ‘fake news’ on social media influenced Donald Trump’s presidential election victory.
In general terms, what we are beginning to see here is what happens to news when the decentralisation of media takes hold. News and current affairs were dominated by broadcast media and newspapers as little as ten years ago. Now decentralised forms of social media change not just the way that such information spreads but also how it is produced. We have masses of content held in self-organising media archives.
One outcome of this in recent politics is that Facebook and other social media are seen to be enabling new kinds of ‘post-truth politics’ to flourish – which has become a dominant trope and cause for concern. This has recently been linked to the already established idea of a social media ‘filter bubble’, in which we are exposed to views and information that reinforce our existing world views. Social media algorithms are envisioned as locking us in or hermetically sealing us off from alternative perspectives, whilst also making us more likely to believe potential misinformation that fits with our existing ideas of the world.
This is comparable to the kind of bubbling that we see within the social world more broadly, with people seeking to withdraw or to ‘pad the bunker’ – from gated communities to the use of smartphones or earphones. Social media might appear to be a site of connection and networking, but it has long been argued that internet-based media forms produce a kind of ‘networked individualism’. The difference with social media is that the infrastructures themselves manage our networks, shape our newsfeeds and filter the masses of content into the things we are most likely to want to see.
Perhaps the most significant change we are seeing here concerns the new politics of circulation that underpins the transformations social media bring. Information circulates around social media at an alarming rate, afforded by the tagging of content and the algorithms that sort and prioritise. News agendas move quickly. People can anticipate what will circulate and use these social media to gain rapid visibility for their points. Such visibility is possible for those who know how to tag the news with metadata and who have a sense of how algorithms might then enable the sharing to snowball. In short, those with the capacity, money and capabilities to use social media’s politics of circulation to their advantage are most likely to get heard.
As with everything else that happens in social media, news and information become a means for provoking activity. These are the conditions in which information and misinformation now circulate, in a space aimed at prediction rather than curation. Of course, all of this is now quite familiar. Because of their role in news circulation, algorithms have become the villains of the piece.
Once the human team was removed from monitoring Facebook trends, it was reported that the algorithmic processes were far less discerning in differentiating the quality, accuracy or agendas of the news delivered to people. Algorithms took on a malevolent character in these stories, and in the so-called post-truth era that they embodied. The ‘echo chamber’ or ‘filter bubble’ perspectives also suggested that algorithms were responsible for sealing people off from visions of the world that contravened or challenged their own. Algorithms’ villainous status continued to grow. They now not only failed to filter the rubbish out, they also enabled spurious and dangerous content to amplify whilst closing down world-views. This is what I’ve described elsewhere as the ‘social power of algorithms’.
Algorithms now have a material presence in shaping the social world in often unseen ways. It is interesting that the notion of ‘post-truth’ has also had the effect of increasing the public profile of algorithms and their embedded presence – particularly in social media. In a brilliant post, Ben Williamson has shown how the media coverage of algorithms varies between media outlets. The variations in positive and negative coverage are striking. So your perspective on the merits or destructive capacity of algorithms may well depend on your reading patterns.
It is the proposed answer to this situation that has conversely pitched the algorithm as the potential solution and saviour. With developments around algorithms being used to check facts in real-time, allowing utterances and content to be examined at scale and instantly, the vision of the algorithm becomes more heroic. The only way to resolve the problems created by algorithms, it would seem, is to counter them with more algorithms.
The result is that algorithms are depicted as both the villains and potential heroes of post-truth politics. They might enable lies and misinformation to circulate successfully, but in the context of what Mark Andrejevic has called the ‘infoglut’ they are also then suggested to provide the only real means for spotting the problems in the unfathomable masses of information. This instant type of analytics is close to what Andrejevic describes as the pursuit of ‘immediation’. Through a kind of real-time fact checking, algorithms can then save the day – or rather, save us from the problems created by other algorithms.
The question is whether we think more algorithms can solve the malaise and remove the problems created by bypassing a discerning editorial eye. It’s unclear how to resolve this, but the close alignment between algorithms and so-called post-truth suggests something. As the profile of algorithms has grown and as their actions become the source of discussion, we might want to avoid thinking of them as good and bad algorithms, and think instead about how these media forms mesh human with machine agency and what this means. Reducing it to algorithms as the problem and solution might cause us to miss just how much of the decision-making and visibility is in the hands of code.
Whether we see algorithms as villains or heroes (or both), what really matters is how those algorithms are used and deployed to shape who and what gets heard in the mutating public sphere. At the very least we need to think about whether the apparent problems created by algorithms can be resolved with more algorithms. The problem with treating algorithms as heroes or villains is that it might distract us – in the same way that we might be distracted by the concept of post-truth itself – from how power is being deployed through those algorithms.