Amazon's Echo - Alexa Voice Service presented at IFA in Berlin, Germany, 1 September 2017. Photo: Britta Pedersen/Press Association. All rights reserved.

In my previous article I argued that the growing concerns regarding the internet's social impact are rooted in many different causes: from underlying social problems, to bad science, to bad incentive structures put in place by big platforms, to the ongoing process of centralization that affects the web (the content layer that sits on top of the internet).
I defined centralization as the process through which intermediaries reshape the architecture of the web, increasing their gatekeeping power over the information that circulates through it.
I argued that the centralization of the web is creating the single point of failure that the original decentralized architecture sought to avoid, and that this should be the key concern of policy-makers hoping to address citizens' concerns about the online world.
The culture of "fail fast and iterate" that boosted innovation over the past decades has become highly problematic. In a centralized architecture, problems are no longer localized and easy to neutralize. Failure spreads too quickly, and can cause a great deal of harm.
How does centralization take place? The web's architecture is never finished; it is in constant evolution. Each link that is made, each server that is set up to host content, is part of this process.
But some actors have bigger wrenches than others. There are gatekeepers at a network, device, browser, and platform level. They have the capacity to influence the decisions of millions of people who produce and consume content, and – through them – how the entangled web evolves, and how we understand the world we live in.
These actors are not merely replacing the traditional media in their role as information brokers. Their power is qualitatively superior. Whereas traditional media managed a one-way stream of information (media → consumer), new information brokers also harvest vast amounts of real-time data about the information's recipients (new media ↔ user). They can leverage this process to direct users to certain content instead of others, or to limit their access to links altogether.
This can be more subtle than the usual censorship cases. See for example what happens when you post a link on Instagram, one of the rising social networks owned by Facebook.
Intermediation continues to grow in breadth and depth, fuelling the process of centralization
Intermediation is not in itself a bad thing. Search engines, for example, have become a key ally in enabling the web to scale by helping users find relevant information in the ever-growing web of content. But it can nevertheless have problematic effects.
There are several ways in which intermediation can take place. It can be structurally embedded, such as through algorithms that automatically sort information on behalf of the user, or as part of the interfaces that wrap the content that is being transmitted from one user to another over a platform.
Intermediation can also operate within the previously mentioned structure in somewhat organic ways, such as when users unknowingly interact with networks of bots (automated accounts) controlled by a single user or group of users, or armies of trolls paid to disseminate specific information, or disrupt dialogue. In these cases, the bots and trolls act as intermediaries for whoever owns or created them.
But how did we get to this point where centralization is giving the internet a bad name?
Intermediation, centralization and inequality
Part of it is an "organic" cycle: the more central a player is, the more personal data it can collect, enabling that player to further optimize its intermediation services. This optimization and personalization can in turn make its services more attractive to users, pushing competitors out of the market and organically reducing the range of providers to which users can migrate. This is an example of the rich-get-richer dynamic.
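The rich-get-richer cycle described above can be illustrated with a toy preferential-attachment simulation, in which each new user chooses a provider with probability proportional to that provider's current user base. All numbers here (five providers, ten thousand newcomers) are illustrative assumptions, not empirical data.

```python
import random

def simulate(providers=5, new_users=10_000, seed=42):
    """Toy rich-get-richer dynamic: newcomers join a provider with
    probability proportional to its current size (preferential attachment)."""
    rng = random.Random(seed)
    users = [1] * providers  # every provider starts on an equal footing
    for _ in range(new_users):
        # Weighted choice: bigger providers attract more newcomers.
        pick = rng.choices(range(providers), weights=users)[0]
        users[pick] += 1
    return sorted(users, reverse=True)

shares = simulate()
print(shares)  # the leading provider typically ends up with a dominant share
```

Even though every provider starts identical, small random head starts compound: the final distribution is highly skewed, mirroring the concentration the article describes.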
The other key dynamic occurs outside the set of existing rules, and I would call it outright illegitimate: intermediaries often leverage their position to prioritize their own services, allowing them to further increase their market share.
The perils of centralization and a way forward
New technological developments – such as smart assistants, and augmented and virtual reality – will likely increase the breadth and depth of intermediation. This in turn threatens to accelerate and further entrench the process of centralization.
Whereas in the past the user was presented with a list of websites of interest, smart assistants increasingly skip that step and provide the user with specific content or services, without providing the bigger picture.
It is a winner-takes-all approach. With AR and VR the user is placed in an even more passive role, where she might be "force-fed" information in more seamless ways than through today's online advertising. Whoever operates the code will blend the curated digital world into the physical environment in which our species evolved over millions of years. No contours on your phone, TV or cinema screen. No cover on your book to remind you of the distinction between worlds.
New technologies are offering intermediaries the possibility of taking personalisation to a new level. Smart assistants such as Siri, Google Assistant, and Alexa are making agreements with companies producing smart devices (cars, refrigerators, thermostats, etc) to allow users to control these swarms of smart objects through a single assistant.
This means the smart assistant gets access to more data, of higher quality, about its users. Developments in technologies such as AR and VR are capable of further isolating people into algorithmically curated silo-worlds, where information flows are managed by the owners of these algorithms.
This would reduce the probability of people facing random or unanticipated encounters with information – such as a protest on the streets or a bus conversation with a stranger about her daily struggles – that might allow people to connect and empathize with others in ways that are relevant for social movements.
Further isolating groups from one another would erode the common set of experiences and knowledge required to nurture a sense of belonging and trust within the broader society, which is key to coordinating big social projects, and to ensuring a fair distribution of the benefits of such coordination.
Those in control of the information silos are gaining too much control over what conversations and encounters can and will take place. As intermediaries' power increases, it becomes evident that whatever fixes they try to implement are met with distrust and criticism. At the root of these reactions there seems to be a sense that these corporations have outgrown the shell society had granted them, and that the legitimacy of the power they exercise today is slim (if any).
Will this impact person-to-person communication?
The internet has not merely reduced the cost of person-to-person communication; it has offered a qualitative leap in communications. Whereas the newspaper, radio and TV enabled one-to-many communications, and the telephone facilitated one-to-one communications, the internet has facilitated group communications, sometimes referred to as many-to-many communications.
This is what we observe in places like Twitter and chat rooms, where thousands, if not millions, of people interact in real time. Effective many-to-many communication often relies on curatorial algorithms to help people find relevant conversations or groups. This means that some of the challenges faced in the realm of search also affect person-to-person communications.
Yet centralization also poses a distinct set of risks for these communications, amongst them, risks to the integrity of signifiers (representations of meaning, such as symbols or gestures), and their signified (meaning).
Intermediation in person-to-person communications
A. The intermediary’s responsibility to respect the integrity of a message
Millennials might notice that when texting with a new lover, a word or emoji is often misinterpreted. This often leads to an unnecessary quarrel, and we need to meet up physically to clear things up. "Oh, no! That's not what I meant... What I wanted to say is..." Conveying meaning is not simple, and we often require a new medium or set of symbols to explain and correct what went wrong.
Now imagine that someone could tamper with your messages, and you might not have that physical space to fix things... And that it’s not your lover you are communicating with, but the electorate or a group of protesters.
The internet facilitates engagement by bringing people closer together. The apparent collapse of the physical space between users is achieved by slashing the time between the moment a message is sent and the moment it is received, until it is close enough to real time. And for millions of years the only type of real-time communication we had as a species involved physical presence. This illusion often makes us forget that someone is managing the physical infrastructure through which the message is transported. It is fundamental that any and all parties who control these channels respect the integrity of the message being delivered.
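The demand that channel operators respect message integrity has a standard technical counterpart: authentication codes that let the receiver detect any alteration made in transit. The sketch below uses an HMAC with a pre-shared key; the key, the messages, and the tampering scenario are all hypothetical illustrations, not a description of how any particular platform works.

```python
import hmac
import hashlib

def tag(key: bytes, message: bytes) -> bytes:
    """Compute an authentication tag binding the message to a shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()

key = b"shared-secret"  # hypothetical key agreed upon out of band
message = b"meet at the square at noon"
sent_tag = tag(key, message)  # travels alongside the message

# An intermediary alters the message en route.
tampered = b"meet at the square at midnight"

# The receiver recomputes the tag and compares in constant time.
print(hmac.compare_digest(sent_tag, tag(key, message)))   # True: intact
print(hmac.compare_digest(sent_tag, tag(key, tampered)))  # False: altered
```

The point of the sketch is that integrity need not rest on trusting the channel's operator: if sender and receiver can verify messages end to end, tampering by an intermediary becomes detectable rather than invisible.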
Centralization that leaves communication channels under the control of a handful of actors could effectively limit parties from exchanging certain signifiers. If virtual and augmented reality are the future of communications, then we should bear in mind that not only spoken or written language will be sent over these channels. These communications will include a wide array of signals for which we still have poorly defined signifiers, including body gestures and, potentially, other senses such as smell and taste. To get a sense of the complexity of the task ahead of us, think about the gap between experiencing a movie through descriptive captioning and the standard hearing experience of the same content.
Screenshot: George Costanza, YouTube.

In the past, the debate focused on the legitimacy of the frames that traditional intermediaries such as newspapers applied to political events and discourse. With new intermediaries come new risks. By reducing (or eliminating) the availability of alternative mediums of communication between parties, centralization could limit the sender's ability to double-check with the receiver whether or not a message's signifiers were appropriately received.
Distributed archives, such as those currently being developed based on Bitcoin’s blockchain model, offer a glimpse of hope in this battle. It must be noted that the phase between the message’s production and its transcription onto the ledger is subject to some of the same risks of meddling present in our current model. Blockchain as such could nevertheless protect the message's integrity from ex-post tampering.
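The ex-post protection attributed here to blockchain-style ledgers comes from hash chaining: each entry commits to the one before it, so rewriting an archived message after the fact breaks every subsequent link. The toy chain below is a sketch of that principle only; as the text notes, it does nothing for the phase before a message is transcribed onto the ledger.

```python
import hashlib

def link(prev_hash: str, message: str) -> str:
    """Hash an entry together with the hash of the previous entry."""
    return hashlib.sha256((prev_hash + message).encode()).hexdigest()

def build_chain(messages):
    """Build the list of chained hashes for an archive of messages."""
    hashes, prev = [], "genesis"
    for m in messages:
        prev = link(prev, m)
        hashes.append(prev)
    return hashes

def verify(messages, hashes):
    """Recompute the chain and check it matches the stored ledger."""
    return hashes == build_chain(messages)

archive = ["first statement", "second statement", "third statement"]
ledger = build_chain(archive)

print(verify(archive, ledger))             # True: archive intact
archive[0] = "first statement (reworded)"  # ex-post tampering
print(verify(archive, ledger))             # False: tampering detected
```

Because every hash depends on all earlier entries, a single altered message invalidates the rest of the chain, which is what makes such archives tamper-evident after publication.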
B. The effect of centralization on the fluidity of the decoding process
A second issue affecting person-to-person communication is the process through which the relationship between signifier and signified comes to be (point B on the diagram).
The process of information consumption is not automatic or passive. The receiver plays a role in the decoding process. The word "cat" triggers a different set of connections for a cat owner than it does for a person who is allergic to cats.
The receiver constructs meaning by relying on her own experiences as well as recalling instances in which members of the community managed to coordinate a conversation by relying on what could be deemed the agreed-upon meaning of this concept. Through this process individuals and groups play an active part in the construction of reality.
This active interpretation enables language to be fluid, allowing relationships between signifier (a symbol, such as a word) and signified (its meaning) to shift over time. The system is open. The noisiness of the process through which we interpret and discuss our world provides the flexibility necessary for critical social changes to be possible. A reflective capacity comes embedded within language.
With "cat" the process is quite straightforward. Now shift from "cat" to more abstract concepts, like justice and war, or Muslim and Latinx, and things get much trickier. Since people do not necessarily deal with these concepts directly, third parties, such as the mass media and the state education system, exert greater control over their meaning.
Much like the elites in charge of writing definitions in a dictionary, mass media often takes over the process of rooting the signifiers onto a broader set of signifiers to construct meaning. The process of constructing meaning is deeply political.
Repeated associations of Muslim or Latinx with negative concepts can, over time, lead people to trigger the mental responses associated with the negative frame even when the frame itself is not present.
A centralized web of content, where the few define which frames should be applied and distributed, becomes a liability: the opposite of the open space the web was meant to create, and which many of us still believe can democratize the exercise of power by redistributing widely the power to construct meaning, and therefore the way we understand our identity, our relationships, and the world we live in.
Let's think how this might play out in 20 years. Many resources are currently devoted to the development of brain-computer interfaces, which imply building a bridge across the air gap that currently exists between people and their devices (as well as the intermediaries that manage traffic over the internet and onto those devices).
Eliminating such air gaps might imply limits to the receiver’s capacity to diverge in the way she processes the signifier: the computer would arguably take over the decoding role, and with it our subject’s ability to decode and reconstruct signifiers into – mistakenly or purposefully – novel and critically transformative meanings.
A family plays a game of rock-paper-scissors with "Alexa," their Amazon Echo wireless speaker and voice-command personal assistant. TNS/Press Association. All rights reserved.

Every step towards the large-scale roll-out of these technologies strengthens incentives for intermediaries to ensure that they can operate these systems unchecked.
Next week: Part Three – how we should coordinate to solve the mess we are getting into before it’s too late.