
An employee walks past a sign at Facebook headquarters in California. Credit: Jeff Chiu/AP/Press Association Images. All rights reserved.

On a Thursday evening, while casually checking my Facebook profile, I came across an article on Social Media Today, an online marketing publication, boldly claiming to reveal the ‘truth’ about the algorithms behind social media and search engines.
“The truth is that algorithms know you better than you think. More than that, the best algorithms may actually know your likes and interests better than you do,” it claimed.
Algorithms, the article went on to explain, are the craze of the day. From Facebook to Google, any respectable online content provider invests in developing programs able to record and analyse users’ online activities in order to further push content upon them. The revelation here is that such programs are getting better and better at knowing us. Companies using them have seen their sales increase – allegedly a result of algorithms showing people the right type of content. In other words: “The truth about algorithms is they work”.
But who do these algorithms ‘work’ for? The article suggests they work for us: users. In filtering irrelevant information, they perform the much-needed function of getting rid of noise, allowing us to focus on what works for us. It might be tempting to see algorithms as our personal assistants. Except they aren’t.
Algorithms are not working for you and me. They are working for corporate interests. Their role is not to ‘know’ us, but to mould us into receptive customers. The ideological implications of this vision – of helpful and innocent algorithmically produced knowledge – have important consequences for how we come to regard not only our online lives, but also our very nature as human beings.
To be honest, I did click on the article. After all, it was so visible on my news feed precisely because of algorithms patiently recording my every keystroke to produce an allegedly factual portrait of my (online) actions and the assumed values, attitudes or preferences behind them. Indeed, I am interested in anything social media-related. Pondering the complex relationship between technology and society is part of what I do as a communications scholar. I had previously read several articles shared by the same Facebook contact who shared this one. In this sense, the algorithm worked. It correctly anticipated, based on the sum of my past behaviours, that if it made this story visible to me, I would want to click on it.
But this narrative is problematic in two ways. Firstly, it sells us a simplistic idea of human beings as something that can be objectively known. In fact, algorithms are merely a return to the behavioural paradigm, where a focus on people’s actions is coupled with a blatant disregard for the complexity of the meaning-making processes that make us human. This vision of algorithmically produced knowledge also hides an equally problematic commodification of people. Algorithms promise to deliver a profile of the individual that can subsequently be used to alter people’s behaviour and seduce them into consuming things. We have become increasingly used to this, mostly because the work algorithms do is in the background, invisible to us.
Algorithmic representation
Facebook’s algorithm has assembled a version of me based on my past online actions. It recorded me reading the Social Media Today article, and used this information to prioritise similar content and products. This behavioural trace was recorded because Facebook is, in effect, an information distribution system. Its algorithm collects these traces to predict the likelihood that I will click on whatever article or advertising it provides in future. Such predictions are effectively representations of humans based on specific behaviours that hold, in some form or another, monetary value for the company.
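To make concrete just how thin this kind of representation is, here is a minimal sketch in Python. It is purely illustrative and assumes nothing about Facebook's actual system, whose details are proprietary; the data, the function names (build_profile, score_item) and the scoring rule are all invented for the example. It simply shows the general logic: reduce a person to a tally of past clicks, then rank candidate content by how well it matches that tally.

```python
from collections import Counter

# A hypothetical log of past behaviour: each click is reduced to a bag of topic tags.
# This tally is the only 'knowledge' of the user that the system retains.
click_log = [
    {"item": "social-media-today-article", "tags": ["algorithms", "social media", "marketing"]},
    {"item": "platform-politics-essay",    "tags": ["algorithms", "politics"]},
    {"item": "academic-career-listicle",   "tags": ["academia", "social media"]},
]

def build_profile(log):
    """Reduce a person to a frequency count of the topics they have clicked on."""
    profile = Counter()
    for click in log:
        profile.update(click["tags"])
    return profile

def score_item(profile, item_tags):
    """Estimate 'relevance' as the overlap between past clicks and a candidate item.

    Note what the score ignores: why the user clicked, whether they agreed with
    what they read, and whether their interests have since changed.
    """
    total = sum(profile.values()) or 1
    return sum(profile[tag] for tag in item_tags) / total

profile = build_profile(click_log)
candidates = {
    "another-algorithm-think-piece": ["algorithms", "social media"],
    "holiday-booking-deal": ["travel"],
}
ranked = sorted(candidates, key=lambda name: score_item(profile, candidates[name]), reverse=True)
print(ranked)  # the think piece outranks the holiday deal, purely on the strength of past clicks
```

Even in this toy form, the point stands: the 'profile' contains no context, no meaning and no possibility of change, only a record of what was clicked.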
Representation is not a mirror of the individual – and should not be taken as such. On Facebook, for instance, the individual is algorithmically reconstructed – and, in this process, reduced to – the sum of selected online traces. When this representation is conflated with popular understandings of ‘knowledge’ of the individual, with their connotations of objectivity and certainty, the reductive and reifying drive producing these representations becomes invisible.
Take my own use of Facebook. What type of representation am I creating when clicking, sharing, or merely opening my account? First, there is no context for my actions: what combination of factors led me to read the article above? I openly admit to being bored at the time I read it. What did it mean to me? I obviously disagreed with its arguments. What did I do with it? Although I did not repost or comment on it, I did get annoyed enough to write this piece – although, of course, the writing was not done within Facebook’s walled garden and, as such, it will not be part of the representation of me that this platform has created.
Facebook’s algorithmic representation of me does not care about such subtleties! My making sense of the article is irrelevant: the only thing that matters is that I clicked on it, spending sufficient time on the page for the algorithm to record that I had read it. The reduction performed in this representation of me is an important reminder that algorithms do not ‘know’ us better than we know ourselves. They merely record our actions, black-boxing our very (messy) nature as meaning-making subjects.
The sum of our online actions, then, becomes our ‘essence’. Our online traces are taken as an expression of what we like, think, and care for – they are taken to express the messiness of the human condition. Thoughts, feelings, values, beliefs, and so on are notoriously ambivalent. They are shifting and hard to pinpoint even for the human experiencing them. In claiming to produce knowledge, the developers and supporters of the algorithmic paradigm can shortcut this mess and produce a clear, albeit simplistic, picture of who we are.
We are certainly prone to believing there’s an identity core hidden inside each of us. The subjective experience of our own selves glosses over our own inner messiness. We do not like our own discontinuities, hesitations, ambiguities and all the problematic paradoxes of the self that defy reason. Furthermore, the pitfall of subjectivity is that we perceive the world through the filter of our conscious mind, which further seduces us into thinking there is someone at the helm. In a nutshell, the thesis of identity as an inner essence is appealing.
In focusing on essences, we fail to realise how the things we do or feel are shaped by external circumstances. We fail to acknowledge how the values or beliefs that define us are, to a large extent, influenced by our context – cultures and socioeconomic positions both mould us and provide us with communicative repertoires through which we can understand ourselves and our place in the world. On top of that, in the context of specific communicative encounters, who we are and what we say is also a response to the contingencies of the interaction. The ‘real you’ is an interplay between structural conditions, biological predispositions and your particular biography. This, then, is the ideological work of algorithms: the reification of the contextual, contingent, and intrinsically messy nature of being human.
Algorithmic social engineering
But there’s a second problem here. New media enthusiasts, like the author of the Social Media Today article, may be quick to embrace the huge volumes of data generated by algorithms as an objective method of ‘knowing’ the individual. Not only is this form of ‘knowledge’ erasing the ambiguity and fluidity of the human condition; it is also replacing it with a rather dull alternative: being imprisoned within your own past. In the algorithmic paradigm, your past online actions are used to shape your future behaviour. Type “I crave a Coca-Cola” as your Facebook status, or reveal your professional affiliation on your profile, and watch the advertisements, sponsored content, or even news feed recommendations unfold. Did you get an ad for Coca-Cola or McDonald’s? Or perhaps one of those '10 things an academic should never do' stories? Although the details of the Facebook News Feed algorithm are unknown, we do know that our past actions influence the type of content that is made visible to us. But what if I had given up drinking Coca-Cola in the meantime?
The algorithms behind social networking sites like Facebook or online distributors like Amazon serve a predictive goal. They recommend new things on the assumption that your past actions reveal what will entice you to consume next. While our future behaviour is shaped by many other factors, to the extent that we are increasingly becoming captive audiences of online spaces that cater to our every possible online need – from chatting with our friends to reading the news or booking our next holiday – our chances of stumbling across things in an unexpected fashion are artificially minimised. Such algorithms effectively attempt to confine us to the narrow boundaries of our own past. This, in effect, is a social engineering enterprise.
It is true that algorithms can also introduce new things and tempt us into making changes. Yet this change (in the form of exposure to other types of information, products and services) is one driven by profit-making agendas. Furthermore, it is not that such algorithms force you to buy, or to click on their content. They are merely making the right information available for you—but you retain your autonomy in buying, liking, or commenting on it. The Social Media Today article puts it quite eloquently:
“At some stage of growth, whether you like it or not, it makes sense for all social networks to implement an algorithm in some form. The ideal time to implement an algorithm is right at that point when engagement is starting to plateau, when the noise is increasing and people are seeing fewer posts, and are starting to spend less time on platform as a result…Unfortunately, in this instance, we may be better off ceding control to the robots and letting them help us create better experiences. And really, isn’t that fundamentally what machines are built to do?”
What we are being told here is seductive: we do indeed feel inundated with information. And we don’t care about much of it. The vision of the algorithmic personal assistant, filtering out the noise, rubs off on us quickly. Algorithms’ function appears to be that of creating better online experiences for us. Furthermore, we seemingly retain the choice to read or buy.
But there’s another source of power for this seductive vision of ‘machines’ as mere enhancers of our lives. Algorithms, like machines, are imagined as neutral, scientific, or objective. They are only tools, with no values or politics. They ‘compute’ online behaviours in a very neat and tidy way: everything you do online is recorded. Then, through a feat of mathematical wizardry, these recordings are assessed and synthesised into an allegedly accurate mirror of your true self.
On the one hand, then, algorithms are in our service. On the other, they are neutral and scientific tools of recording and calculating who we are. Both arguments tame any concerns we may have about the permanent recording of the minutiae of our online lives. They operate in an ideological manner, by neutralising the ultimate goal of algorithms: to embed us further within the capitalist logic of consumption.
Algorithms’ wizardry is opaque. We do not know the details of their workings, but we do know a few things. For instance, we know that on both Google and Facebook, popularity is an important criterion in pushing content to the top of your screen. Popular content is made more visible. But algorithms, in doing so, are also producing the very popularity they take into account. The more something is visible, the more it gets liked or clicked on or bought, which in turn makes it even more visible. Perhaps paradoxically, the result of popularity is standardisation. We all end up being exposed to similar, popular things. There is, of course, change as well as diversity, since there’s something popular provided for every taste.
But there’s a second aspect of the algorithm that often works hand in hand with popularity: paid-for content. In spite of the claim that algorithms work to produce ‘the best experience ever’ for us, as individuals, they do not do it solely on the basis of our ‘real self’, our friends’ recommendations, or the previous popularity of the text or product. Within their secret formula, algorithms are also designed to cater to the companies who paid to reach us, the users. Paid-for content is prioritised—and the more it is prioritised, the more it is clicked on. And the more it gets clicked on, the more popular it becomes, prioritising it further still.
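The feedback loop between popularity, visibility and paid prioritisation described above can be illustrated with a toy simulation. The sketch below is not any platform's real ranking formula; the items, the paid boost value and the click probabilities are all assumptions made for the sake of the example. It simply ranks items by accumulated clicks plus a hypothetical paid boost, gives more clicks to whatever sits higher in the ranking, and feeds those clicks back into the next round.

```python
import random

random.seed(1)

# Hypothetical items and the boost their owners paid for (values invented for illustration).
items = {"organic_post": 0.0, "popular_meme": 0.0, "sponsored_ad": 2.0}
clicks = {name: 1 for name in items}  # everyone starts with a single click

def rank(items, clicks):
    """Rank by accumulated popularity plus whatever advertisers paid for."""
    return sorted(items, key=lambda name: clicks[name] + items[name], reverse=True)

def simulate_round(items, clicks, users=1000):
    ordering = rank(items, clicks)
    for _ in range(users):
        # Higher-ranked items are more visible and therefore clicked more often: each
        # position is tried with probability 0.30 / 2**position, and lower positions
        # are only reached when nothing above them was clicked.
        for position, name in enumerate(ordering):
            if random.random() < 0.30 / (2 ** position):
                clicks[name] += 1
                break

for day in range(5):
    simulate_round(items, clicks)
    print(day, rank(items, clicks), clicks)
# The paid boost wins the top slot on day one; the extra visibility then earns the ad
# the raw popularity that keeps it there, long after the boost itself has become negligible.
```

The loop the essay describes is visible in the output: prioritisation produces clicks, clicks produce popularity, and popularity justifies further prioritisation.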
For, in the end, what algorithms really do is to create new commodities: categories of users, ready to be matched to suitable products, services or stories. Back in the 1980s, when television ruled the entertainment industry, Canadian critical media scholar Dallas Smythe described media content as the ‘free lunch’ offered to viewers in order to lure them into becoming both labourers and consumers for the capitalist economy. In watching television, audiences were not merely entertained; they also learned what to buy, how to decorate their houses, what to aspire to, what meals to cook, and so on. For Smythe, the real role of television as a business was to create demand for products – and it did so by generating a new commodity to be further sold to the corporate world: audiences.
Today, algorithms are the newcomers to the advertising machinery. In searching, clicking and more generally spending time online, we learn to desire goods and services, or to expect that information is and always will be at our fingertips. But we are also a commodity: our online traces are sorted and reassembled based on their potential for advertising. This is why the algorithm is not in our service, filtering out noise. It may well do that, but that is the ‘free lunch’ provided in exchange for making ourselves available for future advertising. That some of us resist this is irrelevant. I have used every available option to limit the presence of advertising on my Facebook profile, and, as a matter of principle, I never click on any of the recommendations or sponsored content. But I am not the type to make resistance a way of life, so in the end I will give in. Ultimately, though, my individual actions are irrelevant: I am still on Facebook, and therefore part of the database. Even if I resist it, the advantage of the database resides in its volume – statistically, large populations are bound to display the full range of behaviours.
And this is what algorithms represent: a new and improved means of instrumentally sorting people, not based on a priori notions such as ethnicity, education or even personality, but on the basis of customisable combinations of past actions and the inner features they purport to signify. It is important that we remind ourselves that such sorting is also constitutive. It moulds us as individuals and it naturalises the categories it uses. Furthermore, as the systems of classification used by algorithms remain invisible, so do their politics.
This, then, is how algorithms engineer the social body: by creating modular similarities that can be easily recombined to create groups ready for consumption. The fact that sales, or the time users spend on Facebook, increase is not, in and of itself, evidence that algorithms work in the sense of ‘knowing us’. They work in the same way advertising worked long before them, in the time of mass production and consumption. They monitor and sort us for instrumental purposes. This, then, is the form of ‘knowledge’ they produce – not a holistic or compassionate understanding of the individual, but a superficial and instrumental picture of the individual’s alleged core essence, designed for one purpose: to further influence our behaviour.
There is a dangerous discursive slippage at play in such optimistic accounts of algorithms: it is the slippage between saying they seem to be good advertising tools, and saying they are good at ‘knowing’ the ‘real self’. This slippage should worry us all, as algorithms and big data become the craze of the moment in academia and, even more importantly, in the governance of everyday life.