How to make AI work for people and planet
Whether AI will be a weapon of social injustice or an agent of positive change depends on the stories we choose to weave.
The human body is asserting itself on the streets as more than a performative spectacle. From Hong Kong to India, Catalonia, Lebanon, Chile and many more, the body is a signifier of bio-power and hope, defeating surveillance, courting arrest and deliberately seeking the system’s panoptic gaze.
The proliferation of protests suggests a tipping point – but systemic change is harder work. And in this regard, we know enough about what the building blocks for dismantling and recreating social structures in their entirety may look like. After all, democracy was born out of the churn that history put political and economic structures through.
The AI (artificial intelligence) moment as we know it presents a frightening challenge – it is born of, and continually feeds, a global crisis of gross injustice, driven by a deep state that is led on by a deeper corporation. Sonia Correa, the Brazilian feminist scholar, calls attention to the wider process of de-democratisation that confronts us. The collusion of neoliberals and neoconservatives is ravaging the rights of the majority everywhere, constantly creating the undeserving other.
Human rights activists in the digital domain broadly hold that the existing human rights regime remains relevant to the challenges of the 21st century. However, the complete failure of twentieth-century institutions to promote global democracy, together with the rapid evolution of the AI era, brings an unforeseen complexity. A normative crisis characterises the systems that run the techno-economic structures of the world. Carrying the DNA of the cyber-libertarian and tech-utopian ambitions of Silicon Valley entrepreneurs, these structures point to the crying need for norms-building for a new data epoch – one that acknowledges both the strengths and the limitations of the human rights framework.
The fourth industrial revolution demands that the creaking institutions of twentieth century democracy be dismantled. This is not a romantic call for revolution, but an assertion that is historically aware and grounded in the continuities of capitalism. We are compelled to look at time-space and scalar relationships afresh. We are forced to rearticulate institutional values and norms as if data is real; the data economy, the real economy. And we need to acknowledge that the virtual is institutional.
Incremental approaches will not work. In the long march of capitalism, this moment of discontinuity has produced hegemonic discourses of AI that serve neoliberal capitalism. The bi-polar geoeconomic order in which the US and China are carving up the rest of the world into their economic dominion must be countered by an ‘ideas revolution’ for data and AI. The planet’s very sustenance is at stake.
The intelligent corporation with its virtualized production and distribution networks can no longer be contained by the rulebook for place-bound operations. Its ceaseless data accumulation needs a new institutional framework for economic rights in data.
The datafication and algorithmic management of citizenship cannot be mended by making technical choices. Citizenship rights need to be reimagined as nested, multi-scalar and essentially political.
The future of work built on the present of AI-led labour substitution and worker surveillance is but a new era of bondage. It epitomises capital’s ‘final freedom’ from labour. Worker data rights need to be the cornerstone of a new social contract, not to be left to the benevolence of capital.
A virulent patriarchy is on the upsurge globally. The fourth industrial revolution is blatantly sexist and mainstream public spheres are inherently misogynistic. Women need a different world order, and they need the power to vision and create it.
A new AI order as if people and planet matter
How can we re-signify AI to recover alternative discourses of value beyond incessant capital accumulation as an end in and of itself? How can AI be decoupled from market fundamentalism and re-cast as a quilting point or master signifier of a fair, accountable and equal economic order? How can the evolution of AI technologies be guided by evaluative criteria that correspond to the crisis of de-democratisation?
Four criteria, or meta-parameters, are key to answering these questions:
1. Sustainable datafication
2. Economic democracy
3. Multiple value imaginaries, and
4. Nested sovereignty
Criterion 1. Sustainable datafication
We hear a lot about the post-truth emergency. The subversion of the public sphere in the attention economy is but a symptom of a larger problem – that of surveillance capitalism, for which nothing in human sociality is off limits. Privacy-by-design has emerged as a new criterion to govern data-based profiling and algorithmic targeting techniques. But the ad-tech paradigm itself is rarely questioned.
According to a 2018 study, door-to-door food delivery in China accounted for a nearly eight-fold jump in packaging waste in just two years, from 0.2 million tonnes (2015) to 1.5 million tonnes (2017). This has coincided with the exponential growth of the sector in the country, where the number of customers using food-delivery platforms went from zero in 2009 (when the first delivery app, Ele.me, appeared) to nearly half the population of internet users (406 million) by the end of 2018. The ecological footprint of e-commerce is by and large invisible today.
The digital paradigm has fuelled galloping hyper-consumption, with AI in a self-propelling and perennial search for data. The ecological destiny of a hyper-profiled planet clearly rests on the trajectory of the AI economy. The course we take will need to be political; the tech must follow. The AI-led economy needs to be evaluated against certain key questions:
What are the boundaries of data extraction and algorithmic profiling?
How is design and policy working to dismantle surveillance tech?
How is AI being reclaimed from self-propelling cycles of data accumulation for big capital?
How does AI in the marketplace promote sustainable consumption?
Criterion 2. Economic democracy
Sustainable value creation is possible only when there is room for disparate, relatively autonomous local spaces of capital accumulation. This not only presupposes free and fair market exchange and the absence of market consolidation and anti-competitive practices; it also demands consistent and continual attention to creating and maintaining a market mechanism for a vibrant and diverse economy – one that makes space for small actors and allows the local to co-exist with the global.
Evaluating AI for economic democracy therefore means building norms around several key aspects:
What norms guide the use of social or community data?
How is the digital knowledge heritage of humankind, its genetic and biodiversity resources, being governed for the public good? What institutional frameworks exist to prevent the abuse and exploitation of such resources?
How are IP regimes addressing the enclosure of the intelligence premium?
What are the social boundaries / policy conditions to harness the intelligence commons for the local economy? For example, is the AI for city transport commonsified?
How are the interests of small economic actors promoted in the algorithmified marketplace? For instance, are there rules calling for preferential ranking of women’s coops in product push and placement strategies on mainstream e-commerce platforms? Is there a dedicated publicly funded marketplace for local artisans, with zero commission rates?
Criterion 3. Multiple value imaginaries
In the capitalist imagination, the value that is accorded primacy is capital accumulation – and all social organization is subservient to the circuits of capital generation and commodity circulation. When AI is tied to neoliberal capitalism, data extractivism is an inevitability.
However, if other values – such as the common or public good, or reclaiming the potential for an unalienated life – acquire centre stage, the AI economy would assume a different shape. It would become possible to pursue data and AI projects that may not always generate profits, but have immense public value.
Radical options would also be in the realm of possibility. We could ensure that productivity gains from AI-induced labour substitution are shared equally and fairly, through worker cooperative ownership of robotic technologies. Workers would have more leisure time and also a share in profits – a dramatic change from the endless gig hustling of today.
Evaluating diversity in value imaginaries rests on several parameters:
What kinds of AI projects exist in open science, public health, sustainable farming, bio conservation and climate adaptation?
What alternatives are encouraged in the digital marketplace? For instance, are there publicly supported cloud, platform and AI infrastructures for small enterprises to flourish in the local economy?
What AI stewardship models exist for fair distribution of value to workers?
How is AI-enabled automation reducing the drudgery of tasks performed by women and other marginal workers?
Criterion 4. Nested sovereignty
In a liberalist AI paradigm, sovereignty tends to be pegged to the individual user. This has meant a default regime in which data resources (the raw material of the AI economy) are enclosed and benefit only big tech monopolies. Proposals for user control in the data marketplace, including the right to monetise data and seek a data dividend, are gaining traction. And the market, it is assumed, will eventually find the tech-fix for individual privacy.
But sovereignty may also be seen as the right of communities over their territories and resources. It also has a long history in the idea of national and sub-national jurisdictions.
Rights in data could hence mean the claim of a community, say, suffering a rare disease, to their collective health information. It could also mean the right of First Nations communities to a GIS (geographic information system) database of their natural resource commons. It could imply the right of a municipal body to its water use data. Sovereignty is also applicable to nations. Nations have a duty to their citizens for the progressive realisation of their development rights – and this includes the deployment of AI technologies for the public interest.
Sovereignty thus is a claim to self-determination – individual and collective, and one that is overlapping and nested. In the context of AI, it refers to the right to share the gains of the intelligence commons.
The geo-economics of the AI paradigm determines how this right is experienced today across scale and location. Its geo-political antecedents corner value for some, while exploiting, excluding and even expelling others. The autonomy of individuals and communities in the current epoch hinges on a new political economy of AI – one that can only be forged through a new data constitutionalism – an international covenant of sorts on data rights – that recognises all three generations of human rights.
So, what should a global-to-local regime of economic rights in data and the intelligence commons look like?
What models will take us beyond the tyranny of hierarchical multilateralism, a sweeping data nationalism and a coopted multistakeholderism?
What institutional ethics, norms, and architectures are needed to build overlapping, polycentric AI governance frameworks, with nested ideas of sovereignty?
The ethical turn in AI politics has generated some momentum to dislodge capitalist complacence. But that will not do. Capital’s wheelings and dealings to assuage social unrest are legendary.
We need to build democratic foundations bottom-up for the AI paradigm. And this is a socio-political task.
Today, in many quarters, we hear of data colonialism. Colony and empire are interconnected in history not only because the colonised were enslaved by the empire’s totalising hold; the story of empire also carries an indelible imprint of the power struggles that break that hold, one blow at a time.
Whether AI will be an autonomous weapon of social injustice or a powerful agent of autonomous societies depends on the stories we choose to weave, the parameters we decide to make worthy.
Data as master signifier is firmly located in social subjectivities carefully crafted and tirelessly nurtured by neoliberal capitalism. In dismantling the AI that fuels 21st century capitalism, democracy must find new master signifiers that inspire new subjectivities.
I wish to thank Nandini Chami, my colleague at IT for Change, for her research support.