Say the words “predictive policing” to most people and the response you frequently get is: “You mean like Minority Report?”. Britain’s police forces do not yet have access to psychics who can predict crimes before they happen, but they are becoming increasingly interested in various “predictive policing” systems. While these seem to promise reductions in crime, the possible downsides appear to have received little consideration. The perpetuation of policing biases, privacy invasions through the gathering of ‘big data’, further outsourcing and privatisation, pre-emptive interventions and the evasion of alternative approaches to crime all risk being overlooked in the optimistic promotion of this new technology.
Mapping future crime
Her Majesty’s Inspectorate of Constabulary (HMIC) has defined “predictive policing” as: “Methods used by police forces to use and analyse data to predict future patterns of crime and vulnerable areas.” It also offers a definition of “preventative policing”: “Methods used by police forces to pre-empt crime and to prevent it happening. These can range from working with other partners and agencies to using predictions about where crimes may occur to decide where to place visible resources.”
Predictive policing does not require particularly sophisticated technology. A RAND Corporation presentation on the issue says: “If you are doing crime mapping you are already doing a basic form of predictive policing,” and so far, it appears that the most significant use made by UK police forces of predictive policing techniques is through the use of crime mapping and limited data analysis.
Neither is it a new idea, although it has one particular novelty factor. As one American police chief has said:
It is a coalescing of interrelated police strategies and tactics that were already around, like intelligence-led policing and problem solving. This just brings them under the umbrella of predictive policing… What is new is the tremendous infusion of data.
Trials and testing
PredPol, software sold by a US company of the same name, seems particularly popular at the moment: Greater Manchester, Kent, West Midlands, West Yorkshire and the Metropolitan Police have all reportedly undertaken trials. Other systems include IBM’s ‘Crush’, which was tested by two unnamed forces in 2010, and a methodology known as ‘optimal forager’, used by Greater Manchester Police’s Trafford Division.
To use PredPol, police forces provide the company with several years’ worth of crime data along with daily updates of location, time, and type of crime committed. The software:
[C]reates prediction boxes of precise 500 sq ft zones which are listed in priority order as to where crimes are most likely to occur, which is then delivered to the smart phones, tablets and PCs of police officers who use it to make decisions on where to deploy.
It reportedly also takes into account other factors: “the people, the places, what kind of buildings there are. Taller buildings might mean there’s more crime, or less – it takes into account everything.” The company is keen to stress its advanced nature: “In contrast to technology that analyses and maps past crime for some hints at the future – a kind of ‘rear view mirror’ policing – PredPol tells law enforcement what is coming.”
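The general mechanics of “prediction boxes” can be illustrated with a toy example. The sketch below ranks small grid cells by historical incident counts; PredPol’s actual model is proprietary and far more elaborate, so the data, cell coordinates and scoring here are purely illustrative assumptions.

```python
from collections import Counter

# Toy crime records: each tuple is the (x, y) grid cell of a past incident.
# In a real system each cell would correspond to a small geographic "box".
incidents = [(0, 0), (0, 0), (0, 1), (2, 3), (2, 3), (2, 3), (1, 1)]

def rank_boxes(incidents, top_n=3):
    """Return the top_n grid cells ordered by historical incident count,
    i.e. a priority list of where to deploy patrols."""
    counts = Counter(incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

print(rank_boxes(incidents))  # → [(2, 3), (0, 0), (0, 1)]
```

A genuinely predictive system would weight recent incidents more heavily and model how risk spreads to neighbouring cells, but the output is the same in kind: a ranked list of boxes pushed to officers’ devices.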
Kent Police trialled the software and evaluated it against the force’s own analysts’ predictions:

An analyst used an intelligence-led policing process to identify on a daily basis 80 predicted crime locations divided equally among day- and night-time shifts at two districts (Sevenoaks and Maidstone). PredPol independently deployed 80 predicted crime locations in the same manner… The target of the evaluation was whether PredPol can predict more crime on a routine basis than the Kent boxes using existing best practice.
The force concluded that “the scientific basis of PredPol has been confirmed” after finding that “PredPol is 59% more likely to predict the location of a crime than the Kent boxes,” and that its use led to “a 4% reduction in all crime towards the end of the pilot.” Thus, “the Chief Constable made the decision to deploy PredPol live to the whole force on April 29, 2013.”
An evaluation of the software’s use over a longer period of time is yet to be published, but figures trumpeted by the firm no doubt look promising to forces in the UK – a 13% reduction in crime in one LA district, a 19% reduction in burglaries in Santa Cruz. However, concerns have been raised over these numbers and the fact that PredPol only deals with a limited set of crime types, such as burglary. Techdirt says that the company’s “habits of presenting selective, limited data as proof and utilising contracts that turn customers into unwilling shills [forces that sign up in the US have been obliged to undertake marketing and promotion activities] isn’t exactly a confidence booster.”
Beyond this, a more fundamental concern is that the use of such systems could reinforce discriminatory policing practices. If police crime records already over-represent particular neighbourhoods or groups, it seems predictive policing systems will simply perpetually amplify already-existing biases – what The Economist has called a “vicious circle”.
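The feedback dynamic behind that “vicious circle” is easy to demonstrate. In the hypothetical simulation below, two areas have identical underlying crime, but one starts out over-represented in police records; patrols follow the records, and (on the illustrative assumption that detection grows faster than linearly with patrol presence) the recorded gap widens until almost all attention lands on one area.

```python
# Two areas with identical true offending; area A starts over-recorded.
true_rate = [10.0, 10.0]   # actual offences per period in areas A and B
recorded = [6.0, 4.0]      # initial police records, skewed towards A

for period in range(5):
    total = sum(recorded)
    # patrols are deployed in proportion to recorded crime
    patrol_share = [r / total for r in recorded]
    # illustrative assumption: detections scale with the square of patrol
    # presence, so areas with more patrols generate disproportionate records
    recorded = [rate * share ** 2 for rate, share in zip(true_rate, patrol_share)]
    print(period, [round(r, 3) for r in recorded])
```

After a handful of iterations nearly all recorded crime sits in area A, even though both areas offend at the same rate: the records measure where the police looked, not where crime happened.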
Predictive policing might “get results” – lower crime rates and bigger tabloid headlines – but this has limits. As recent events have shown, crime statistics are not necessarily to be trusted. Moreover, it may be that the promise of predictive policing makes it less appealing to look at alternative, more fundamental questions about reducing crime. In this respect it is worth pointing out that the cause of crime is not the police’s inability to prevent it.
From the area to the individual
PredPol and other predictive policing systems currently in use give forces information on areas in which crime is likely to occur, and the company stresses that “personal information about victims, offenders, or law enforcement is NOT collected, ever.” But what if police forces decided to try to implement forms of predictive policing based on and around individuals, rather than geography? Then it seems that the ‘Minority Report’ label might be more justifiable.
In the United States – which often seems to be ahead of the game in dystopian technology research and development – the Department of Homeland Security’s FAST (Future Attribute Screening Technology) project investigates how to monitor “respiration, cardiovascular response, eye movement, thermal measures, and gross body movement,” in order to “remotely (and therefore, more safely) identify cues diagnostic of malintent (defined as the intent to cause harm). This will potentially lead to elimination of the need for face-to-face interaction.”
At a more practical level, police officers are hoping to “tap into the wealth of non-traditional data available locally, such as medical and code-compliance data,” in order to feed more information into predictive policing systems. Californian police chief Jim Bueermann has said: “Predictive policing has another level outside the walls of the police department… how do we integrate health and school and land-use data?” The most likely answer to that question, assuming the data can be obtained for policing purposes in the first place, is: “outsourcing”. Individualised crime risk assessments could become as commonplace as the ‘know-your-customer’ checks undertaken for banks and others by companies such as WorldCheck and Regulatory DataCorp, in the name of countering terrorist financing and money laundering.
Chicago Police Department has developed a “heat list”, which uses information held in police records to generate “an index of the roughly 400 people in the city of Chicago supposedly most likely to be involved in violent crime.” These are not necessarily people who have committed crimes in the past, but rather those who have associations or connections with those who have. Miles Wernick, an academic who helped develop the system, has said: “It has to do with the person’s relationships to other violent people.” Hanni Fakhoury of the Electronic Frontier Foundation has raised some concerns over the project: would people end up on the list “simply because they live in a crappy part of town and know people who have been troublemakers?”
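The logic Wernick describes – scoring individuals by their recorded links to violent people – can be sketched in a few lines. Everything below is invented for illustration (the names, the association records and the scoring rule); Chicago’s actual model has not been published.

```python
# People with violent-crime records in (hypothetical) police files
known_violent = {"B", "C"}

# Recorded associations between individuals (hypothetical data)
associations = {
    "A": ["B", "C", "D"],
    "D": ["A"],
    "E": ["F"],
}

def heat_score(person):
    """Fraction of a person's recorded associates flagged as violent --
    an illustrative stand-in for a network-based risk score."""
    contacts = associations.get(person, [])
    if not contacts:
        return 0.0
    return sum(c in known_violent for c in contacts) / len(contacts)

# "A" has committed no recorded crime, yet scores highly via their contacts
print(heat_score("A"))  # → 0.666...
print(heat_score("E"))  # → 0.0
```

The sketch makes Fakhoury’s concern concrete: person A lands at the top of the list purely because of who the police have recorded them knowing, not anything they have done.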
Back in the UK, continued experimentation with and expansion of predictive policing systems seems almost certain, and vast troves of information are out there just waiting to be harvested for law enforcement purposes. So far this appears to have focused on social media data.
Following the August 2011 riots, there was a sharp increase in police interest in social media, with forces’ inability to make use of information from Twitter, Facebook and elsewhere seen as one cause of the disorganised police response. “Police need a central information hub to help them anticipate disorder by drawing together all available information,” recommended HMIC, and the Metropolitan Police duly established one, following an invitation from HP, which “ingested feeds from 22 social media channels” and while not quite predictive, “provided accurate analysis on a near real-time basis versus 24 hours in arrears.”
A £195,000, 18-month research project based at Cardiff University is currently working out how to: “harvest, store, analyse and interpret [Twitter] data to interrogate the potential statistical link between [tweets] that relate to crime and disorder and official rates of crime as recorded by the police in six London boroughs.” The project will take data on “where someone tweeted from and where the crime was reported” in order to generate statistical models “to predict crime rates from social media data.” The Cardiff-based Universities’ Police Science Institute has already developed reactive policing tools based on mining data from social media, in the form of “the Sentinel platform”.
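The statistical core of such a project – linking counts of crime-related tweets to officially recorded crime – amounts, in its simplest form, to fitting a regression across areas. The sketch below uses invented per-borough figures and a plain least-squares fit; the Cardiff project will use far richer geotagged features, so this only illustrates the kind of statistical link being interrogated.

```python
# Hypothetical per-borough data: crime-related tweets vs recorded offences
tweets = [120, 80, 200, 150, 60, 90]
crimes = [300, 210, 480, 390, 160, 240]

# Ordinary least-squares fit of crimes against tweets
n = len(tweets)
mean_x = sum(tweets) / n
mean_y = sum(crimes) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(tweets, crimes))
         / sum((x - mean_x) ** 2 for x in tweets))
intercept = mean_y - slope * mean_x

def predict(tweet_count):
    """Predicted recorded offences for a borough with this many tweets."""
    return intercept + slope * tweet_count

print(round(slope, 4), round(predict(100), 1))
```

Whether such a correlation holds up – and whether tweets about crime track crime itself rather than fear of it – is precisely what the project sets out to test.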
David Omand, former GCHQ Director and Permanent Secretary at the Home Office, and currently a visiting professor at King’s College London’s war studies department, is a great fan of “social media intelligence”, having even developed a new acronym for it: SOCMINT. “SOCMINT must become a full member of the intelligence and law enforcement family,” declared a 2012 article Omand co-authored, arguing that if the right methodological and ethical frameworks are put in place then more invasive social media mining could be employed:
[T]he rise in the use of social media, together with the rapid development of analytics approaches, now provides a new opportunity for law enforcement to generate operational intelligence that could help identify criminal activity, indicate early warning of outbreaks of disorder, provide information and intelligence about groups and individuals, and help understand and respond to public concerns. Some of this access will come from ‘open source’ information… Some, however, will require legal authorisation to override privacy settings and encryption of communications.
The London Mayor’s Office for Policing and Crime (MOPAC) has endorsed predictive policing as part of “the model for modern policing by consent across the developed world.” How “policing by consent” – and, more specifically, individual consent for data processing – might work in a possible future where all manner of personal data is harvested and analysed to assess potential crime risks is unclear. Nevertheless, the ‘National Policing Vision 2016’ – which sets out “what policing will look like in 2016 and beyond” – says that:
Predictive analysis and real-time access to intelligence and tasking in the field will be available on modern mobile devices. Officers and staff will be provided with intelligence that is easy to use and relevant to their role, location and local tasking.
Physical processes and misdirected energy
Some of the work that has gone into developing predictive policing systems highlights the need to ask more radical, fundamental questions about crime and policing. Jeff Brantingham, an anthropologist at the University of California, Los Angeles who supervised the university’s predictive policing project undertaken in partnership with the Los Angeles Police Department, told the LA Times in 2010 that:
The naysayers want you to believe that humans are too complex and too random — that this sort of math can't be done… But humans are not nearly as random as we think… In a sense, crime is just a physical process, and if you can explain how offenders move and how they mix with their victims, you can understand an incredible amount.
If, as Brantingham says, “humans are not nearly as random as we think,” perhaps it makes more sense to strive for social systems in which people’s “physical processes” are not so easily led towards crime, rather than encouraging the development of technological systems that may lead to individualised suspicion or even situations in which individuals have to try to prove their future innocence.
Brantingham’s point has an interesting – if perhaps unintended – parallel with Emma Goldman’s argument:
Crime is naught but misdirected energy. So long as every institution of today, economic, political, social, and moral, conspires to misdirect human energy into wrong channels; so long as most people are out of place doing the things they hate to do, living a life they loathe to live, crime will be inevitable, and all the laws on the statutes can only increase, but never do away with, crime. 
Equipping the police with ever-more advanced technology might reduce crime rates, but it will never address the root causes of crime. Some of that technology – as with predictive policing – increases the potential for violations of privacy, the amplification of biases, and perhaps, in the future, even situations of ‘guilty-by-algorithm’. Other new and favoured technologies – such as Tasers – are blunter and increase the likelihood of police violence. Questioning and scrutinising the justifications for and policy frameworks surrounding these technologies remains as important as ever, but more fundamental questions about crime and its causes also need to be asked.
Emma Goldman, ‘Anarchism: what it really stands for’, in Anarchism and Other Essays.