The benefits of data in criminal justice: Improving policing

Handcuffs in a jail booking room. (Photo credit: my_southborough/Flickr)

Our OpenData series has been exploring the benefits and challenges of working with individual-level data in the criminal justice context. In previous posts, we've considered the legal, ethical, organizational and technical challenges of providing access to that data, including the difficulty of simply sharing it across agencies of the same government, as we saw in the case of Connecticut.

And yet the benefits we know we can achieve by improving access to criminal justice data are significant. In this section of our series, we will survey meaningful current uses of criminal justice data, critical national sources of that data, and the steps practitioners believe are most important for improving how data is used in the criminal justice system to support better public outcomes.

We begin by looking at how data has been used, and is currently being used, in the context of policing.

For many people, their primary interaction with the criminal justice system is with law enforcement. As with most institutions, we expect 21st-century police forces to use data, and to extract the maximum value from it, in order to do their jobs. Indeed, law enforcement departments have been trying to do what they can to use data toward the goal of reducing crime. While the results so far have been mixed, it is quite clear that the past few decades have seen a remarkably broad national trend of police increasingly using data to reduce crime.

Using data for crime reduction

One of the most high-profile uses of data in this area has been in the law enforcement approach known as “predictive policing,” the use of statistical models to anticipate increased risks of crime, followed by interventions to prevent those crimes from being realized. It’s an area that is getting a lot of attention: Last month, the Miami Herald reported that the Miami Police Department is working to incorporate a predictive mapping tool called HunchLab, and the Bureau of Justice Assistance provided a $600,000 grant to implement the system. While early experiences of this approach led to high expectations for transformed national crime levels, a recent study conducted with a Louisiana police department (discussed below) found that crime didn’t decrease in the test districts that used predictive policing methods.

Modern predictive policing originated with "CompStat," an early data-driven policing model developed by the New York City Police Department. CompStat used data to identify geographic "hot spots" where crime was most likely to occur. The approach drove significant new data collection and organization within implementing police departments, because it relies on regular, real-time access to data on complaints, arrests, calls for service, and crime and disorder. The collected data must include good geographic information, which is used to map incidents; using those maps and additional data, CompStat centers decide how to deploy resources to reduce crime most effectively.
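To make the mechanics concrete, here is a minimal sketch of the kind of hot-spot mapping this approach relies on: geocoded incident records are binned into grid cells, and the cells with the most recent incidents are flagged for attention. The coordinates, field layout and grid size below are illustrative assumptions for this post, not details of any department's actual system.

```python
from collections import Counter

# Hypothetical geocoded incident records: (latitude, longitude, offense).
# In a CompStat-style process these would come from complaints, arrests
# and calls-for-service data that include good geographic information.
incidents = [
    (32.5101, -93.7502, "residential burglary"),
    (32.5108, -93.7499, "vehicle theft"),
    (32.5230, -93.7611, "theft from vehicle"),
    (32.5104, -93.7505, "business burglary"),
]

CELL_SIZE = 0.005  # grid cell size in degrees, roughly a few city blocks


def grid_cell(lat, lon, cell_size=CELL_SIZE):
    """Return the index of the grid cell containing a point."""
    return (int(lat / cell_size), int(lon / cell_size))


# Count recent incidents per cell and flag the busiest cells as hot spots.
counts = Counter(grid_cell(lat, lon) for lat, lon, _ in incidents)
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} recent incidents")
```

Note that this is purely descriptive: the map shows where crime has already concentrated, which is the kind of traditional hot-spot product the Shreveport study discussed below used as its comparison point.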

The broader theory of predictive policing is that physical evidence of "community disorder," combined with other data, provides indicators of likely future crimes in an area. In this sense, predictive policing builds on the narrative underlying the "broken windows" theory established in 1982 by social scientists James Q. Wilson and George L. Kelling. The broken windows theory asserts that if a vandalized building is not repaired, more crime is likely to take place in that neighborhood. By fixing smaller issues, like a building's smashed windows, a sense of order is created in the community, and that helps keep more significant crime at bay. If police departments focus their resources on areas where crimes are predicted to happen at higher rates, the assumption is that the additional police presence will allow them to "deter and preempt crimes."
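The predictive step goes one stage further: rather than only counting past crimes, each grid cell is given a forward-looking risk score that also weighs indicators of disorder. The sketch below is a simplified illustration of that idea only; the features, weights and linear scoring are assumptions made for this post and do not describe HunchLab's model or the one used in the Shreveport trial discussed below.

```python
# Illustrative per-cell features (assumed for this example, not drawn
# from any real system): recent property crimes, disorder reports
# (e.g., vandalism or broken-windows complaints) and calls for service.
cells = {
    "cell_A": {"crimes_30d": 5, "disorder_reports": 9, "calls_for_service": 14},
    "cell_B": {"crimes_30d": 1, "disorder_reports": 2, "calls_for_service": 3},
    "cell_C": {"crimes_30d": 3, "disorder_reports": 7, "calls_for_service": 8},
}

# Hand-picked weights, purely for illustration; a real model would be
# fit to historical data and validated before informing deployments.
WEIGHTS = {"crimes_30d": 0.6, "disorder_reports": 0.3, "calls_for_service": 0.1}


def risk_score(features):
    """Linear risk score: higher means the cell is predicted to be riskier."""
    return sum(WEIGHTS[name] * value for name, value in features.items())


# Rank cells by predicted risk; top-ranked cells would be patrol candidates.
for cell, features in sorted(cells.items(), key=lambda kv: risk_score(kv[1]), reverse=True):
    print(cell, round(risk_score(features), 2))
```

Whether such forward-looking scores add anything beyond a traditional hot-spot map is precisely the question the Shreveport study examined.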

The predictive policing approach has received both enormous praise and substantial criticism. CompStat achieved such strong early support because it appeared to have led to very large reductions in crime in New York City in its earliest implementation. However, later implementations have had less demonstrable success, an outcome that some analysts have attributed to a mismatch between the data-focused process and the complex organizational aspects of intervention.

In addition, the data-focused approach has produced some unintended consequences that create their own problems. One issue stems from the fact that police data has become such an important internal accountability tool that it encourages a degree of cheating, and police officers have been documented inappropriately manipulating data in order to produce more flattering results.

The rise of controversial stop-and-frisk practices is also associated with the CompStat model, a topic we explored in an earlier post. In response to widespread observations of unconstitutional use and overuse of stop-and-frisk by the New York Police Department, a series of class action lawsuits were filed against the city.

Predictive policing study

Predictive policing is the most recent incarnation of the data-based policing approach, and while it has been the subject of both anecdotal support and incident-based criticism, it had lacked impartial analysis. For that reason, in 2012, the National Institute of Justice funded a seven-month trial with the Shreveport Police Department that used predictive policing methods. The study was conducted by the RAND Corporation as part of its Safety and Justice Program.

The Shreveport PD had been using a more traditional policing strategy, where officers were deployed to certain areas in response to “hot spots” of crime. But the department wanted to predict the development of these hot spots; it wanted to be able to prevent crime rather than simply react to it.

The department worked off different sets of maps – the control districts used maps with information about recent crime, while the test districts used maps that reflected the statistical predictions. The goal was to reduce property crime (which the department defined as residential burglaries, business burglaries, residential thefts, business thefts, thefts from vehicles and vehicle thefts) and “increase quality arrests” (which was defined as arrests of people who committed one of the six types of property crime or who had a serious criminal history).

In discussing the trial, officers noted that these directives were different from other operations, which usually placed a premium on increasing the overall number of arrests rather than the number of quality arrests.

The officers in the test districts emphasized developing relationships within the neighborhood to aid in intelligence gathering: The objective was to “get actionable information for crime prevention and clearing crimes,” with an “emphasis on intelligence gathering through leveraging low-level offenders and offenses.”

The officers in the Shreveport Police Department also noted that the data on the predictive maps allowed them to strategize exactly where and when they should focus resources. “Since the maps consisted of fairly small grid squares, officers were able to develop strategies for highly focused, specific areas, which allowed units to be more effective,” the study’s authors wrote.

Overall, however, the RAND study found that “for the Shreveport predictive policing experiment, there is no statistical evidence that special operations to target property crime informed by predictive maps resulted in greater crime reductions than special operations informed by conventional crime maps.”

The study’s authors questioned whether or not “a map identifying hot spots based on a predictive model, rather than a traditional map that identifies hot spots based on prior crime locations, is necessary.” In fact, the commander in one of the test groups told researchers that “unless predictive products could provide very high levels of accuracy, literally specifying where and when specific crimes were highly likely to occur, the predictions were just providing traditional hot spots, and interventions could only be typical hot spots interventions.”

That’s an important distinction to make, since the report described the trial as “very time-consuming.” The team reported that creating the predictions was “an arduous task” and “the amount of effort was unsustainable in the long term (i.e., more than six months).”

The Shreveport PD reported that it was difficult to recruit officers for the trial, noting that the predictive policing work required extra time because officers had to collect more information than normal (they said they spent at most 30 minutes talking with each “suspicious” individual). The trial also required monthly meetings, where the department’s leadership team would receive the predictions and then coordinate action based on that data. This was described as a “key element” of the trial, but the meetings didn’t happen. The study’s authors wrote:

It is not clear whether the development of a collective group to decide on intelligence-led activities was not feasible, realistic, or ideal. There are a number of reasons why developing a group to build consensus on activities in two commands for half a year was difficult: commander management styles, ideologies on policing, neighborhood infrastructure design, and so on.

Improved community relations

Although the report found that predictive policing is not more effective than traditional policing tools like hot spot maps, officers did report some successes during the trial, the most significant of which was improved relations with community members.

A key piece of the trial was increased communication between officers and both suspicious individuals and known offenders: officers asked more follow-up questions after crimes happened. The officers said that after seeing these interactions, people were "more willing" to share information and tips with the police department. Officers said that people in the neighborhood waved "hello to patrol cars," which didn't happen before the experiment. The officers told the researchers, "The core value of PILOT was in their command's focused patrols that were collecting better information on criminal activity and building better relations with the community, not in the predictive maps themselves."

Thus, although the predictive policing trial did not achieve its intended effect of making the department more effective than earlier traditional methods, it did appear to have another positive effect: improved police-community relations. In our next post, we will explore the use of police data in the context of building a better relationship with the community.