Technology and Democracy: Opinion

The EU should refocus the AI Act on workers and people

Proposed EU legislation on AI is driven by a desire for growth, with few provisions for safeguarding the rights of individuals, particularly workers

Aida Ponce Del Castillo
17 December 2021, 12.00am
In April this year, the European Commission proposed the Artificial Intelligence Act, which aims to regulate the use of AI-driven products, services and systems within the EU. But the market-driven draft legislation, aimed at creating and developing a competitive European AI sector, failed to meet the expectations of civil society, which had been hoping that the act would prioritise the protection of people.

The EU Council presidency also criticised the text, pitching substantial changes to the proposal and suggesting, in particular, further restrictions on the possible use of a ‘social credit system’ and facial recognition technology. Critical negotiations are ongoing, but there is no guarantee that they will result in the draft law becoming more protective of individuals.

Employment and workers’ rights are particular areas of concern in the context of the AI Act, and measures must be taken to ensure that workers are protected.

AI in the workplace

When they work, AI systems do what we ask them to do: achieve an objective. But it is easy to give an AI system the wrong problem to solve, or for it to produce solutions that are useless, wrong or biased. What’s not so easy is identifying issues like this before something goes wrong. As AI systems are entrusted with increasing ‘authority’ in the workplace and in hiring processes – for example, screening resumes and estimating the ‘risk level’ of workers – we need to ask whether developers can build AI systems that protect workers’ rights when the system’s objectives have nothing to do with these rights.

How can we set limits on a system that may have a negative impact on workers and their rights? Given these questions and the particular risks that AI raises in the context of employment, the European Commission should produce an additional ad hoc legislative proposal dedicated to protecting workers’ rights when they are exposed to, interact with or work with AI systems.

With some amendments, the final AI Act could – and should – become a less permissive tool of market regulation than its current version: one that addresses the impact of AI on workers, who are particularly at risk given their subordinate position in the employment relationship.

As it stands, the proposed AI Act does not regulate AI itself. What it does is establish rules on placing “AI systems” on the market and putting them into service and use.

It is clear from the draft that the primary objective of the European Commission is to foster the development and uptake of AI for economic growth. Meanwhile, protecting the public interest – in particular the health, safety and fundamental rights and freedoms of individuals – is only a side objective, and the draft includes only a limited number of concrete provisions to achieve it.

Issues the draft must address

Updating and modernising the understanding of work-related risks to include data-driven or AI-related risks. This would involve conducting an extensive anticipatory exercise involving both employers and workers. The new way of thinking about workplace risks should be broad and go beyond occupational health and safety. It should include the risks to privacy, data protection and fundamental rights, and the possible abuses of managerial power stemming from the employment relationship.

The ‘transparency paradox’ at work must be addressed. Transparency does not work in the workplace as it is a unilateral requirement that does not provide workers with actionable rights. As AI applications involve processing personal data, including workers’ data, reaffirming the relevance of GDPR rights and making them explicit in the context of employment is essential. AI systems use workers’ data for predictive analysis, performance evaluation or task distribution, and workers must be able to fully exercise their rights under GDPR with their employers.

In line with the GDPR, workers must be able to exercise more easily their right to an explanation (GDPR Article 22) of how their data is being used. If workers don’t understand how their data is being used to manage and evaluate them, they can’t take action to protect their rights.

Equally important is that workers can exercise their right to be consulted. This can imply an obligation for employers to consult workers before AI systems are implemented and to provide a mechanism to monitor the outcomes.

Affirming the ‘human-in-command’ principle as a component of work organisation, and preserving worker autonomy in human-machine interactions. In the workplace, humans and machines act together. Managers, IT support teams or external experts cannot be the only humans actively interacting and intervening with AI systems. When joint (human-machine) problem solving takes place, employers should ensure that the AI systems they deploy always require worker interaction, for example by using feedback loops or validation systems, and by incorporating workers’ knowledge and understanding of their own roles and tasks.

A total ban on algorithmic worker surveillance. AI systems can take worker monitoring to a new level, which can be defined as ‘algorithmic worker surveillance’. Advanced analytics can be used to measure workers’ biology, behaviours, concentration and emotions. One can compare this to switching from radar, which scans the surface of the sea, to sonar, which builds a 3D image of everything under the surface. Such surveillance is extremely intrusive: it does not passively scan but ‘scrapes’ the personal lives of workers, actively building an image and then making judgements and decisions about individuals. It must be banned.

From a labour perspective, trying to fit protective provisions into the AI Act proposal may turn out to be like trying to push square pegs into round holes. The time is right to ask fundamental questions about how best to govern AI, including how to assess risks before AI systems are implemented and those risks materialise.

The AI Act was designed by the European Commission to ensure the development of a competitive and ‘deregulated’ European AI market. But a more balanced approach is needed, and the European Parliament and Council should listen to the many voices asking for a regulatory framework that also protects citizens’ and workers’ rights.
