Further, the algorithms generate feedback loops: when an error leads to a penalty (for example, when an application wrongly records a person in a database as a criminal suspect), the mistake is copied into other datasets until it becomes impossible to identify where the first error was made.
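To make that mechanism concrete, here is a minimal sketch in Python. The databases, field names and sync routine are all hypothetical, not any real police system; the point is only that once a wrong flag is copied without provenance, every downstream dataset asserts it as its own fact.

```python
# Illustrative sketch only: hypothetical datasets and field names.

source_db = {"person_id": 1042, "suspect_flag": True}   # the original error

def sync(record, downstream):
    """Copy the flag into a downstream dataset, keeping no note of its source."""
    downstream[record["person_id"]] = {"suspect_flag": record["suspect_flag"]}

watchlist, vetting_db = {}, {}
sync(source_db, watchlist)
sync(source_db, vetting_db)

# Each copy now asserts the flag as its own fact. Even if source_db is later
# corrected, nothing here links the copies back to where the error began.
print(watchlist[1042], vetting_db[1042])
```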
In recent years, police and probation services have increasingly used automated tools to help identify suspects or predict reoffending rates. In a report published last month, the Justice and Home Affairs Committee of the British parliament’s House of Lords gives examples: Avon and Somerset Police has bought a tool called Qlik Sense to predict criminal trends; the Home Office uses an algorithm to review applications for marriage licences in order to predict which marriages are ‘sham’ and so to justify refusing visas; and Durham Constabulary has been using a Harm Assessment Risk Tool, which uses machine learning to predict who is likely to commit crimes and should therefore be more closely monitored.
The committee speaks of the “benefits” of these schemes in terms of increased efficiency. It warns, however, that technologies of punishment and control are developing faster than the legal safeguards needed to prevent injustice. For example, the committee says it is unusual for a criminal defendant to be told whether the investigation against them began with a recommendation from a computer application. It also speaks of a “vicious circle”, in which an application pre-emptively identifies groups of people living in particular areas as being at risk of becoming criminals, leading the police to monitor their lives closely, with that monitoring in turn generating data that justifies further investigation.
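The committee’s “vicious circle” can be illustrated with a toy simulation (all numbers are invented; this is not a model of any real force’s system). Two areas share the same underlying crime rate, but one starts with more recorded incidents; because patrols follow the records and records follow the patrols, the initial skew never corrects itself.

```python
import random

random.seed(0)

# Toy numbers, not real data: both areas have the SAME underlying crime
# rate, but area A starts with more recorded incidents than area B.
TRUE_RATE = 0.3
recorded = {"A": 10, "B": 5}

for year in range(10):
    total = sum(recorded.values())
    for area in recorded:
        # patrols are allocated in proportion to recorded crime...
        patrols = int(100 * recorded[area] / total)
        # ...and crime is only recorded where a patrol is present to see it
        recorded[area] += sum(random.random() < TRUE_RATE for _ in range(patrols))

# Area A's recorded total pulls ever further ahead of B's, even though the
# true rates are identical: the data justifies the patrols, and the patrols
# generate the data.
print(recorded)
```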
The authors note, but do not properly explain, that several of the algorithms recently purchased by police forces mimic others that have already proved controversial. One much-publicised application is PredPol, which is widely used in the US and was used in Kent between 2013 and 2018. It uses historical crime data to predict, on an hour-by-hour basis, where crime might be committed. Police forces are given maps with boxes marked in red to show where crime should be expected. Sites of past burglaries can be patrolled, and possible crimes pre-empted, until the incidence of that offence – and crime in general – has declined.
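PredPol’s actual model is proprietary (it is reported to draw on self-exciting point processes borrowed from earthquake modelling), so the Python sketch below shows only the general shape of the idea described above: score grid cells by recency-weighted historical incidents, then flag the top scorers as the red boxes on the patrol map. The grid cells, decay rate and sample data are all hypothetical.

```python
from collections import defaultdict
from math import exp

def hotspot_cells(incidents, now_hour, decay=0.01, top_k=3):
    """incidents: list of (grid_cell, hour_of_occurrence) pairs.
    Returns the top_k cells ranked by recency-weighted incident score."""
    score = defaultdict(float)
    for cell, hour in incidents:
        # recent incidents count for more; older ones decay exponentially
        score[cell] += exp(-decay * (now_hour - hour))
    return sorted(score, key=score.get, reverse=True)[:top_k]

# Hypothetical history: (grid cell, hour of occurrence) over two weeks
history = [((3, 7), 10), ((3, 7), 250), ((5, 2), 300),
           ((3, 7), 320), ((1, 9), 330), ((5, 2), 331)]
print(hotspot_cells(history, now_hour=336))   # the cells to box in red
```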
We can see the legacy of PredPol in some of the newly acquired algorithms. Qlik Sense, used in Avon and Somerset, mimics PredPol’s focus on the local geography of crime and on changing local trends.
There are good reasons, however, why PredPol has become controversial. First, its supporters have insisted that predictive policing will make ordinary citizens safer. That claim has been based, in turn, on the theory of ‘zero tolerance’: that if only you could stamp out petty crime and anti-social behaviour, serious offending would also decline, and that both could be achieved without social cost. “Misdemeanor crimes,” according to PredPol’s training materials, are “seen as the gateway to more serious crimes.”
In his international bestseller ‘Humankind’, Rutger Bregman has shown how poorly thought-through were the studies upon which zero-tolerance policing was based. It began with a series of experiments in which psychologists deliberately created conditions intended to maximise the incidence of crime, and then recorded (unsurprisingly) that it had risen. Meta-analyses of zero-tolerance policing show that there is in fact no correlation between its aggressive approach to petty crime and reduced violence. Increasingly arbitrary and authoritarian policing – inevitable with a zero-tolerance approach – produces not less offending but more discontented people.