Migrant Futures: Opinion

Could artificial intelligence improve decision-making in refugee cases?

Despite the risks that it poses, predictive technology could be a force for good in refugee systems under the right legal conditions

Hilary Evans Cameron
13 April 2021, 9.50am
The Canadian government has been experimenting with the use of AI in immigration assessments | Kiyoshi Takahase Segundo / Alamy Stock Photo. All rights reserved

With COVID-19 disrupting travel, shutting borders, and redefining what is essential work, Pandemic Borders explores what international migration will look like after the pandemic, in this series titled #MigrantFutures

In theory, under the right legal conditions, predictive technologies powered by artificial intelligence (AI) could help ensure that fewer refugees are sent home to face persecution. However, the right conditions do not exist in Canada and, under the law as it stands, AI will only hurt claimants. To understand why, it is necessary to understand the role that uncertainty plays in refugee hearings in Canada.

Under Canadian law, claimants must prove each of the assertions in their application for refugee status. If the decision maker is unsure as to whether an assertion is proven, they will reject it. If the decision maker is paralysed by doubt – "I'm not convinced that I should accept this assertion, but I'm also not convinced that I should reject it" – the law says: "Forget your second set of uncertainties. Since the claimant bears the legal burden of proof, only your first set of uncertainties matters. If you are not convinced that the assertion is proven, you should reject it. Full stop."

To see this in action, imagine that the decision maker accepts that the claimant's ex-husband wants to kill her and that he will be able to find her if she returns home. The remaining question from a legal perspective is whether the police will protect her.

On the law as it stands, the claimant must prove that the police will not protect her. If the evidence on this point is scarce, partial or conflicting, and the decision maker is unconvinced, she loses.

The only doubts that matter are doubts about the assertion she is trying to prove: that the police will not protect her. If the decision maker has equivalent doubts about the opposite conclusion – that the police will protect her – those doubts are legally irrelevant.

In a system like this, any uncertainty hurts the claimant. This kind of decision-making model resolves doubt at the claimant's expense and errs on the side of rejection. But the drafters of the United Nations’ 1951 Refugee Convention were clear that doubt should resolve in the claimant's favour.

Like criminal law, which has long recognised ‘Blackstone’s ratio’ – "it is better that ten guilty persons escape than that one innocent suffer" – the Convention makes clear that it is, by orders of magnitude, a worse mistake to send a person home to persecution than to offer protection to someone who does not need it.

A different kind of decision-making model, one that would respect the Convention's error preference, would have the decision maker compare the assertion being put forward by the claimant (the police will not protect her) and the counter-theory (they will protect her) and decide which is more persuasive.


To give the claimant the benefit of the doubt, the decision maker would have to accept the claimant's assertion – unless the counter-theory was decidedly more persuasive. In such a model, the claimant still bears the legal onus: if, in the final analysis, the decision maker is not convinced that the claimant's allegation has met this standard of proof, the claimant loses.

But in this analysis all doubts are relevant, and their benefit goes to the claimant. If the decision maker is equally uncertain of either conclusion, then the claimant has met her burden of proof.
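
To make the contrast concrete, the two models can be sketched as decision rules, as in the Python sketch below. This is an illustration only: the numeric 'belief' scores, the 0.5 threshold and the 0.2 margin are invented stand-ins for a legal judgment, not anything found in Canadian law or the Convention.

```python
# Illustrative sketch only: the scores and thresholds are invented
# stand-ins for a legal judgment, not drawn from Canadian law.

def current_model(belief_assertion_proven: float) -> bool:
    """Reject unless the decision maker is convinced the assertion is
    proven. Every doubt counts against the claimant."""
    return belief_assertion_proven > 0.5  # hypothetical threshold

def benefit_of_the_doubt_model(belief_assertion: float,
                               belief_counter_theory: float) -> bool:
    """Compare the claimant's theory with the counter-theory and accept
    unless the counter-theory is decidedly more persuasive; a tie goes
    to the claimant."""
    margin = 0.2  # hypothetical margin for 'decidedly more persuasive'
    return belief_counter_theory - belief_assertion <= margin

# A decision maker who is equally uncertain of either conclusion:
print(current_model(0.4))                    # False: the claim is rejected
print(benefit_of_the_doubt_model(0.4, 0.4))  # True: the claimant wins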

Making uncertainty visible

An AI support tool could play a helpful role in a legal system that resolves doubt in the claimant's favour by helping decision makers recognise the uncertainty of their predictions.

For a host of legal and practical reasons, an AI system could not advise a decision maker directly about the likelihood that a claimant faces a serious risk. It could advise, however, on intermediate issues arising in a claim, if it assumes that the claimant's factual allegations are true.

For example, AI could predict how, in a given country, the police will respond to an appeal for help or how a government will react to a certain kind of activism. This would probably involve some combination of natural language processing and standard statistics.
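
As a toy illustration of what that combination might look like – my sketch, not a description of any system in use – the language-processing step could be as simple as tagging passages from country-of-origin reports before counting them. A real system would use a trained language model rather than the invented keyword lists below.

```python
from typing import Optional

# Toy sketch of the language-processing step: tag passages from
# country-of-origin reports by keyword. The keyword lists are invented
# for illustration only.
NON_PROTECTIVE = ("refused to act", "dismissed the complaint", "took no action")
PROTECTIVE = ("arrested the", "opened an investigation", "issued a protection order")

def tag_passage(text: str) -> Optional[str]:
    t = text.lower()
    if any(k in t for k in NON_PROTECTIVE):
        return "non_protective"
    if any(k in t for k in PROTECTIVE):
        return "protective"
    return None  # inconclusive: much real-world evidence lands here

passages = [
    "Police refused to act on her complaint.",
    "Officers arrested the attacker the same day.",
    "The report does not say what happened next.",
]
print([tag_passage(p) for p in passages])
# ['non_protective', 'protective', None]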

Assessing what the conditions are in a claimant's country of origin requires making sense of large quantities of incomplete and inconclusive information. Even when the actor is a known quantity – for example, a government, guerrilla group, crime syndicate or police force – relevant data may be partial or conflicting.

Under such a model, the AI's strength does not lie in the quality of its predictions. Where the available information is inadequate, the AI's predictions will be poor. But making that inaccuracy transparent is what gives AI systems an advantage over human-only systems.

Whereas human decision makers make poor predictions with confidence, a well-executed AI support system would explicitly acknowledge the uncertainty inherent in its predictions.

A well-executed AI tool would reveal high uncertainty for most but not all predictions. It could affirm that some predictions involve little uncertainty – in those circumstances in which human decision makers would be equally if not more confident. Provided the legal model gives claimants the benefit of the doubt, increasing doubt by decreasing overconfidence would reduce the risk of mistaken rejections.
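
What making uncertainty visible could mean in practice is sketched below, again with invented numbers: estimate a rate from however many documented cases exist, and report an interval whose width reflects how thin the evidence is.

```python
# A minimal sketch, with invented numbers: a Beta-Binomial estimate
# (uniform prior) of how often police failed to protect in documented
# cases like the claimant's. Fewer cases mean a wider interval.
from scipy.stats import beta

def rate_with_uncertainty(failures: int, total: int):
    """Return a point estimate and a 95% credible interval."""
    a, b = failures + 1, (total - failures) + 1
    point = a / (a + b)
    return point, (beta.ppf(0.025, a, b), beta.ppf(0.975, a, b))

# Eight failures in twelve documented cases: the point estimate is about
# 0.64, but the interval around it is wide, and the tool reports both
# rather than offering a single confident number.
print(rate_with_uncertainty(8, 12))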
