Can Europe Make It?

Welfare surveillance on trial in the Netherlands

In late October, the District Court of The Hague heard a lawsuit brought by a coalition of groups challenging the draconian Dutch system; a decision is expected in January.

Amos Toh
8 November 2019, 11.45am
Rotterdam, March 2017. Utrecht Robin/PA. All rights reserved.

The Netherlands is consistently ranked as one of the world’s strongest democracies. You might be surprised to learn that it is also home to one of the world’s most intrusive surveillance systems, one that automates the tracking and profiling of the poor.

On 29 October, the District Court of The Hague held hearings on the legality of Systeem Risico Indicatie (SyRI), the Dutch government’s automated system for detecting welfare fraud. The lawsuit, filed by a coalition of civil society groups and activists, argues that the system violates data protection laws and human rights standards.

SyRI is a risk calculation model developed by the Ministry of Social Affairs and Employment to predict an individual’s likelihood of committing benefits or tax fraud, or violating labor laws. SyRI’s calculations tap into vast pools of personal and sensitive data collected by various government agencies, from employment records and benefits information to personal debt reports and education and housing history.

When the system profiles an individual as a fraud risk, it notifies the relevant government agency, which has up to two years to open an investigation.

Fraud risks

The selective rollout of SyRI in predominantly low-income neighborhoods has created a surveillance regime that disproportionately targets poorer citizens for more intrusive scrutiny. So far, the ministry has worked with municipal authorities to implement SyRI in Rotterdam, the Netherlands’ second-largest city, which has the highest poverty rate in the country, as well as Eindhoven and Haarlem. During the hearing, the government admitted that SyRI has been targeted at neighborhoods with higher numbers of residents on welfare, despite the lack of evidence that these neighborhoods are responsible for higher rates of benefits fraud.

But SyRI doesn’t just have discriminatory effects on the privacy of welfare beneficiaries. It could also facilitate violations of their right to social security. Because SyRI is shrouded in secrecy, welfare beneficiaries have no meaningful way of knowing when or how the system’s calculations are factored into decisions to cut them off from lifesaving benefits.

The government has refused to disclose how SyRI works, on the grounds that explaining its risk calculation algorithms would enable fraudsters to game the system. But it has disclosed that the system generates “false positives” – cases in which the system erroneously flags individuals as a fraud risk.

Without more transparent explanations, it is impossible to know whether these errors have led to improper investigations against welfare beneficiaries or the wrongful suspension of their benefits.

The government claims it uses these “false positives” to rectify flaws in its risk calculation model, but there is also no way to confirm this claim. In fact, it is anyone’s guess whether the system maintains a high enough accuracy rate to justify risk assessments that keep people under suspicion for up to two years.
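Why the accuracy rate matters so much can be seen with a simple back-of-the-envelope calculation. The government has disclosed no accuracy figures for SyRI, so all the numbers below are purely hypothetical, but they illustrate the base-rate problem with any fraud-detection model applied to a population where actual fraud is rare: even a seemingly accurate system can produce flags that are mostly errors.

```python
# Hypothetical illustration of the base-rate problem in fraud detection.
# None of these numbers come from SyRI; the government has published no
# accuracy figures. Assume a city of 100,000 welfare recipients.

population = 100_000
fraud_rate = 0.01           # assume 1% actually commit fraud
sensitivity = 0.95          # assume the model catches 95% of real fraud
false_positive_rate = 0.05  # assume it wrongly flags 5% of honest people

true_positives = population * fraud_rate * sensitivity                 # 950
false_positives = population * (1 - fraud_rate) * false_positive_rate  # 4,950
total_flagged = true_positives + false_positives                       # 5,900

# Precision: of everyone flagged, what fraction is actually fraudulent?
precision = true_positives / total_flagged

print(f"People flagged: {total_flagged:.0f}")
print(f"Flags that are correct: {precision:.1%}")
```

Under these assumed numbers, roughly 16% of flagged individuals are actually fraudulent: about five out of every six people placed under suspicion would be innocent. Without disclosed error rates, there is no way to know whether SyRI performs better or worse than this sketch.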

Global trend

SyRI is part of a broader global trend to integrate Artificial Intelligence and other data-driven technologies into the administration of welfare benefits and other essential services. But these technologies are frequently rolled out without meaningful consultation with welfare beneficiaries or the broader public.

In the case of SyRI, the system was authorized by Parliament as part of a package of welfare reforms enacted in 2014. However, the government had experimented with high-tech fraud detection initiatives for almost a decade before submitting to legislative scrutiny.

Local groups have also complained that the legislative process was inadequate. According to the lawsuit, Parliament failed to meaningfully address privacy and data protection concerns that were raised by its own legislative advisory council as well as the government’s data protection watchdog.

The court will issue its decision in January. We will be watching to see if it protects the rights of the poorest and most vulnerable people from the vagaries of automation.


