The glass is half full. Voices of resistance to power are everywhere. Students and young people are taking to the streets; women are infiltrating public discourse; and the unlikely activist is joining popular uprisings. Social movements are rejecting tokenism and ‘woke-washing’ from corporations, demanding that governments deal with the alarming state of the planet’s health, and making place for feminist sensibilities, one struggle at a time.
All the ingredients for a seismic shift are here – well, almost.
There is still the half empty portion – and filling this is singularly about the business of ideas. And this, today, is in more than a bit of crisis, besotted as we are with the heady toys of a data culture that seems to overpower us.
In 1962, marine biologist and author Rachel Carson catalyzed the modern environmental movement with her epoch-making book 'Silent Spring' — a painstakingly crafted exposé of the pesticide industry. She highlighted the fragmentation, commodification, and erasure of truth in an era when narrow silos blind specialists to the interconnected whole of the planet and its natural ecosystems, where market forces sacrifice truth at the altar of revenue.
Carson is relevant today not only because she called attention to the links between capitalism’s greed and the erasure of truth – but also for her caution that the pesticide industry’s actions be assessed for “consequences remote in time and place”.
The data and artificial intelligence (AI) debate may certainly have come far in acknowledging the place of ethics. But it has a longer way to go in addressing its whole-of-planet implications; about consequences remote in time and place, as Carson would put it.
From the mainframes of the 1960s to the 54-qubit Sycamore processor that is celebrated for achieving “quantum supremacy”, we are at a point where AI is the master signifier of our times. In Lacanian thought, a master signifier is an empty signifier – but one that lends traction to other signifiers. Marx’s conception of “commodity fetishism” demonstrates how money becomes a master signifier of value. Money refers to value as such, and all other commodities are thought of in terms of how much money one can get for them. That is, money as a commodity becomes self-referential and all other commodities are worth (signify) money.
AI, as the master signifier, provides a “quilting point”, an anchoring peg around which other signifiers can stabilize. AI generates ideal types of ‘winners’ and ‘losers’, based on many signifiers – scores, ranks, ratings, predictive models – all of which lend meaning to what is of value in the new economy. Airbnb's trait analyser AI, for instance, distinguishes suitable guests from unsuitable ones, categorising human beings for “conscientiousness and openness”, “neuroticism and narcissism, Machiavellianism and psychopathy”. These filters are the sorting apparatus that determines who is valued/valuable and who is not.
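The sorting logic at work here can be sketched in a few lines. This is a purely illustrative toy, not Airbnb’s actual model – the trait names, weights and cutoff are all invented – but it shows how a composite score and a threshold mechanically partition people into the valued and the discarded.

```python
# Illustrative sketch of score-based sorting. All trait names, weights
# and the cutoff are hypothetical, not any real platform's model.

def suitability_score(traits, weights):
    """Weighted sum of trait signals, each assumed to lie in [0, 1]."""
    return sum(weights[t] * v for t, v in traits.items())

def sort_applicants(applicants, weights, cutoff):
    """Rank applicants by score; those below the cutoff are filtered out."""
    scored = sorted(
        ((suitability_score(t, weights), name) for name, t in applicants.items()),
        reverse=True,
    )
    accepted = [name for score, name in scored if score >= cutoff]
    rejected = [name for score, name in scored if score < cutoff]
    return accepted, rejected

applicants = {
    "a": {"conscientiousness": 0.9, "openness": 0.8},
    "b": {"conscientiousness": 0.4, "openness": 0.5},
}
weights = {"conscientiousness": 0.6, "openness": 0.4}

# "a" scores 0.86, "b" scores 0.44; a cutoff of 0.6 makes one a winner
# and the other a loser, with nothing in between.
accepted, rejected = sort_applicants(applicants, weights, cutoff=0.6)
```

The point of the sketch is that the cutoff is doing the ideological work: the same people, under a different threshold or different weights, would be sorted entirely differently.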
The AI community is increasingly seized of the social outcomes of such AI-determined typologies. But a larger, political task remains. This is to prise open AI as the master signifier of the fantasies of neoliberal capitalism. The AI-led economy as we know it is not an accident. From the relatively innocent Internet of the 1990s through Snowden, and the rise and rise of the FAANG, to Cambridge Analytica, we have seen the unfolding of a data culture that is deeply intertwined with capitalism’s impulse to move, expand and swallow.
Data accumulation in the current political economic order mimics capital accumulation:
- It is accumulation for accumulation’s sake. Data culture is an obsessive pursuit of data that acquires the mythical force of a revolution servicing capital. The race to corner mammoth volumes of diverse data sets hinges on the promise of new unforeseen connections that can materialise into avenues for profit.
- It is produced socially, but owned privately. Wealth creation in data culture is predicated on a seamless sociality that knows no boundaries – countless individuals and collectives provide data, but it is still owned privately.
- It thrives on differentiation. Data provides capitalism with the material engine – in the form of AI – to perfect the art of sorting, segmenting, eliminating, targeting, and optimizing, in the service of capitalism’s relentless endeavor to optimize difference for the reproduction of social hierarchies.
- It leaves a trail of exploitation. Neoliberal data culture creates the material tools for an opportunism, a gold rush, that brings forth the worst forms of societal erosion. Cameras and chips in faraway places generate the real-time data to extract value. ‘Smart agriculture’ is already transferring the control of small farms to large corporations in Asia-Pacific. And the algorithmic models that make this possible are valorised as the new age innovations that break through IPO glass ceilings.
- It works to consolidate control. In a world that is unequal, relentless data accumulation and incessant intelligence production embolden capital to remain in tightly knit networks. Wealth concentration, we are told, is at an unprecedented high, with Big Tech overtaking oil, automobile and financial corporations in market capitalization, which exceeds the GDP of most countries. The 26 richest people on earth have the same net worth as the poorest half of the world’s population, some 3.8 billion people. The labour share of global economic value added is in free fall, and this decline has coincided with the rise of the AI economy.
In early January 2020, the Brookings Institution published a report prophesying that the country that leads in AI in 2030 will go on to rule the planet until at least 2100. McKinsey has cautioned that late mover countries in AI may never be able to bridge the development gap with leading AI economies.
A sense of immediacy about seizing the AI opportunity has percolated to governments everywhere. Just between 2016 and 2018, over 20 countries established committees/task forces for the creation of national level data and AI roadmaps.
The scramble for AI-led development demands that we pay closer attention to the ideas and ideal types that data culture generates – the discourses of development it entrenches. The raging protests across the world point to a collapse of the current system. This is as much about a “philosophical crisis” – a troubling loss of social autonomy in the AI-led world order – as it is about the failure of institutions. Maybe the most important fact about living in the 21st century, as Israeli historian Yuval Harari put it, is that humans are now “hackable animals”.
But mainstream AI debates – especially on AI and ethics – seem to sidestep this critical connection.
Debates in AI and ethics
Current debates on AI governance tend to be preoccupied with the prevention of human bias in the design of algorithmic parameters and safeguarding the representational accuracy of input/training data sets. Concerns around fairness have revolved around recidivism scoring and facial recognition systems that disproportionately penalise racial minorities and immigrant communities. Research has pointed to how predictive profiling techniques in welfare targeting discipline the poor. The real risk of gender and racialised discrimination in the algorithmised marketplace for credit, housing and services has been highlighted. Recent scholarship also throws light on the need to move beyond impartial decision rules or procedural fairness to asking, "How does my algorithm interact with society at large?" – advocating thus for substantive fairness as an antidote to structural inequalities.
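The gap between procedural and substantive fairness can be made concrete with a toy calculation. The numbers below are invented; the point is only that a single impartial threshold, applied identically to everyone, can still yield sharply unequal approval rates across groups whose historical scores encode structural inequality.

```python
# Toy illustration of procedural vs substantive fairness.
# All scores are invented; the decision rule is identical for everyone.

def approval_rate(scores, threshold):
    """Fraction of applicants in a group whose score clears the threshold."""
    approved = [s for s in scores if s >= threshold]
    return len(approved) / len(scores)

# Hypothetical credit scores for two groups; historical data behind such
# scores often reflects pre-existing structural inequality.
group_a = [620, 700, 710, 680, 650]
group_b = [540, 600, 690, 580, 560]

threshold = 640  # one rule, impartially applied to all

rate_a = approval_rate(group_a, threshold)  # 4 of 5 approved
rate_b = approval_rate(group_b, threshold)  # 1 of 5 approved
disparity = rate_a - rate_b  # the substantive-fairness question
```

Procedurally, nothing here discriminates: the rule never sees group membership. Substantively, one group is approved four times as often as the other – which is precisely why scholarship now asks how the algorithm interacts with society at large, not merely whether its decision rule is impartial.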
Despite these crucial contributions, scholarship on AI, fairness and ethics remains largely liberal in its framing – grappling with techno-design aspects that imagine rights and freedoms as individualised. This discounts the social relationships that make up our complex institutional frames.
But AI is moving the world, with a veritably fraught politics of a fast emerging platform economy. AI is not only about AI-enabled hospitals or AI-assisted retail stores – closed systems where data science delivers wonders. AI must also be thought of as a system disruptor, a social force that changes the way things work in general. The contestations in AI are therefore not just ‘endogenous’ to the techno-social parameters of a specific AI solution or closed AI system, but also ‘exogenous’ – reconstituting the algorithmically mediated platform marketplace and the rules of the AI economy.
Distortions to competition in the algorithmically governed platform marketplace have been discussed across the world in recent times. Regulators in the US, Europe, South Africa and India are all examining new ways by which anti-competitive practices in platform-controlled markets can be checked.
Algorithms on dominant platforms orchestrate market relations, subjecting small actors (traders, small producers, coops) to highly unfair terms. Research by IT for Change in 2019 (forthcoming) found that online travel aggregator platforms in the tourism sector are flexing their algorithmic muscles for client matching and hotel ranking. For small hoteliers, it is a Hobson’s choice: becoming part of the platform ecosystem means succumbing to deep discounting schemes that simply don’t work, while not joining the bandwagon means risking isolation. In the niche adventure tourism segment in the eco-sensitive Himalayas, previously independent hike operators find themselves reduced to a reserve workforce of small-time contractors at the disposal of the platform. Taking away interdependencies in the local economy, the platform creates winners and losers through its remote management.
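A stylised sketch can show how such ranking manufactures winners and losers. Everything below is hypothetical – the fields, weights and names are invented, not drawn from any actual aggregator’s system – but it captures the mechanism: a visibility boost for listings that enrol in the platform’s discounting scheme lets ranking, not quality, decide who is seen.

```python
# Hypothetical platform ranking: listings enrolled in the platform's
# discount scheme receive a visibility boost. Weights and fields invented.

def rank_listings(listings, discount_boost=0.3):
    """Order listings by quality score plus a boost for discount enrolment."""
    def key(listing):
        boost = discount_boost if listing["in_discount_scheme"] else 0.0
        return listing["quality"] + boost
    return sorted(listings, key=key, reverse=True)

listings = [
    {"name": "independent_lodge", "quality": 0.8, "in_discount_scheme": False},
    {"name": "chain_hotel", "quality": 0.6, "in_discount_scheme": True},
]

# The lower-quality listing that joined the scheme (0.6 + 0.3 = 0.9)
# now outranks the higher-quality independent one (0.8).
ranked = rank_listings(listings)
```

The independent operator’s only way back to the top of the list is to accept the discounting terms – the Hobson’s choice described above, encoded in a sort key.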
There is a trust paradox here. Consumers in perpetual search of new experiences seek out platforms as the modern day trust infrastructure, whereas relationships among local enterprises see a breakdown of trust. The AI that rates, ranks, visibilises, obscures, connects, and disconnects is clearly reconfiguring local tourism – with moral hazards that imperil the overall health of the economy and carry negative consequences for a highly sensitive ecological system.
Despite rhetoric to the contrary, small producers, artisans and micro-entrepreneurs in most developing countries struggle to find a foothold in the platform marketplace. The platform regime’s algorithmic base orders the production relations, leading to a highly uneven playing field.
The big platform relegates small actors to the fringes of the marketplace, eventually squelching or swallowing them. Platform regulation that calls for transparency, explainability and public audit of algorithms may only go so far. It could perhaps check predatory pricing or deep discounting practices. But behemoths are first-movers; they own mammoth volumes of data and have the algorithmic prowess to harness value in real time. As they say in my country, ‘Nothing grows under a banyan tree’! The very presence of data-wealthy platforms eviscerates the right to market participation of small actors.
What would algorithmic justice look like in these contexts? How can AI rules for big ecommerce companies privilege small actors?
The structural consequences of AI in ecommerce point to impacts in the AI-led economy that are remote in time and place. They are not technical bugs that can be fixed, but cascading impacts that create new faultlines of power in the economy. They are not about feedback loops or corrections internal to algorithmic systems. Rather, they arise in the suprasystemic logic of platform infrastructures as the latter intertwine with neoliberal capitalism – in the modalities by which platforms aggrandize social data and its value through an “intelligence premium” that is locked up with little or no accountability to local actors.
The data battleground
The multi-scalar AI force-field needs to be grasped for its developmental realpolitik. Given the inordinate clout that the US and China wield in the emerging geo-economic order, experts predict a bipolar global AI economy. The UNCTAD Digital Economy Report 2019 exemplifies this:
"It has been estimated that this general-purpose technology [AI] has the potential to generate additional global economic output of around $13 trillion by 2030, contributing an additional 1.2 per cent to annual GDP growth […] China and the United States are set to reap the largest economic gains from AI, while Africa and Latin America are likely to see the lowest gains. […] China and the United States account for 75 per cent of all patents related to blockchain technologies, 50 per cent of global spending on IoT, at least 75 per cent of the cloud computing market, and for 90 per cent of the market capitalization value of the world’s 70 largest digital platform companies."
Under the circumstances, nation-states – rich and not-so-rich – are becoming anxious about missing the AI bullet train. In the developing world, countries are hurrying to build national AI capabilities before the window of opportunity is permanently lost. AI scholar Kai-Fu Lee notes a rather tragic irony here.
“The countries that are not in good shape are the countries that have perhaps a large population, but no AI, no technologies, no Google, no Tencent, no Baidu, no Alibaba, no Facebook, no Amazon,” Lee says. “These people will basically be data points to countries whose software is dominant in their country. If a country in Africa uses largely Facebook and Google, they will be providing their data to help Facebook and Google make more money, but their jobs will still be replaced nevertheless.”
To gain a modicum of control over the AI economy, the African bloc and countries like India have opened new battlefronts in the WTO. They have asserted the need to retain policy space for AI-led digital industrialization that can help them climb to the higher value parts of the digital economy. Slaving away in image annotation and data labeling is not really going to change the geo-politics of development.
But the rules of battle are skewed.
A new era of trade deals – including the CPTPP and RCEP – promotes the status quo, with developed countries disallowing developing countries any right to access algorithms and source code. This means governments in developing countries must forgo the regulatory power to scrutinise Big Tech.
Neither can they demand access to AI technologies, even though the TRIPS Agreement very much recognises developing countries’ right to technology transfer.
The AI paradigm seems to fail the fairness test in the global development order – blatantly negating local oversight and willfully denying any claim to equality.
This crisis of economic democracy has brought to the fore assertions of sovereignty. Developing countries have pushed back against proposals by the US and its allies to maintain free cross-border data flows in global digital trade. Efforts are afoot in many developing countries to regulate data for the economy. Laws to establish the state’s eminent domain over anonymised personal data and non-personal data sets are being enacted.
The modus operandi of developing country governments to gain AI capacity is however not very clear. There is of course the problem of poor legacy data sets and lack of domestic data management capacity among most firms, but equally, years of deindustrialisation – for instance in Africa – render the data dreams of these countries a non-starter. When there is little local production capacity left, how can an intelligence economy be built? The aspirational road to competitive advantage for most developing countries seems to be paved with more questions than answers.
Pathways being adopted also reveal an uneasy contradiction – the desire to build local data infrastructures seems to go hand in hand with “AI partnerships”, a euphemism for easy access to citizen or public data by multinational firms with little or no overarching institutional norms. In 2017, regulators in the UK held that a partnership between Google DeepMind and the National Health Service broke the law for overly broad sharing of data. Tech partnerships for public services delivery in developing countries thus come with huge risks. While they may bring efficiencies, they may well lead to a data exodus – transferring citizen data, often with very few privacy safeguards, to corporate AI labs.
Calls to data nationalism thus seem to be accompanied by the legitimation of data extractivism. This is not surprising. The race for AI today is predicated on an extractivist capitalism, and data extractivism is its natural handmaiden.
However, no sense of urgency to contain the mindless speed machine of data extractivism is evident in the global geo-political horizon of the day. The AI dream is on a rabid path, like an autonomous driving application gone rogue.
The systematic commodification and rapacious colonization of new data frontiers point to a looming data wild west. Satellite data is used widely by Wall Street investment brokers speculating in food futures. The Earth Bank of Codes being set up by the World Economic Forum aims to create an open source database of the genetic codes of all living organisms on earth, in a bid to “unlock the potential of the planet’s biodiversity" and "boost the global marketplace for bio-inspired chemicals, materials, processes and innovations" by opening up biological and biomimetic assets to 4IR technologies. Given global pharma’s ambitions, these self-laudatory initiatives do not ring the right bells.
Privacy International has found that data brokers are subverting the GDPR in Europe – breaking the law to collect, process and trade personal information. As the law tightens its hold in the EU, adtech firms are looking to developing countries where privacy laws are less stringent, seeking new pastures for training data sets.
In the making of a new international political economy, one that is centred around AI, we are witness to deep contestations – states are locked in a global tussle; Big Tech is immune to territorial boundaries; global finance is dismantling the knowledge commons; and things of the biosphere once outside the market are being converted into commodities. All of this suggests a neocolonization of people and the planet.
Calls to ethics are necessary, but not sufficient, if the AI paradigm must work for development justice, societal autonomy and human rights. The task before us is to envision alternate institutional frameworks for an AI-enabled economy. Human-in-the-loop models are being advocated today to make algorithms more accurate and "confident". Smartifying an algorithm to address "edge cases" that the AI model is not familiar with will not fix the structural antecedents of inequality and injustice. On the contrary, in an AI order that furthers capitalism’s impulse for material accumulation as an end in itself, confident algorithms may have little regard for equity and social justice. Human-centric AI means rejecting technological absolutism and making place for human aspiration and planetary wellbeing, above and beyond market fundamentalism.