Predictive policing: Preventing crime or reinforcing biases?

Algorithms are now being used to predict where crime might happen next, but can the data behind them be trusted to remain objective?
    • Author: Quantumrun Foresight
    • May 25, 2023

    Using artificial intelligence (AI) systems to identify crime patterns and suggest interventions that prevent future criminal activity is a promising new methodology for law enforcement agencies. By analyzing data such as crime reports, police records, and other relevant information, algorithms can identify patterns and trends that may be difficult for humans to detect. However, the application of AI in crime prevention raises important ethical and practical questions. 

    Predictive policing context

    Predictive policing uses local crime statistics and algorithms to forecast where crimes are most likely to occur next. Some predictive policing providers have adapted algorithms originally built to predict earthquake aftershocks, using them to pinpoint areas where police should patrol frequently to deter crime. Aside from these “hotspots,” the tech uses local arrest data to identify the types of individuals likely to commit crimes. 
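
    To make the mechanics concrete, here is a minimal sketch of a self-exciting hotspot model in the spirit of those aftershock algorithms. The grid size, decay rate, and sample reports are illustrative assumptions for this sketch, not any vendor’s actual parameters.

        import math
        from collections import defaultdict

        # Toy self-exciting "hotspot" score: each past incident raises the
        # predicted risk of its grid cell, and that boost decays over time.
        # Cell size, decay rate, and sample reports are assumptions.
        CELL_SIZE = 0.01      # grid resolution in degrees (roughly 1 km)
        DECAY_PER_DAY = 0.2   # exponential decay of an incident's influence

        def cell(lat, lon):
            # Snap a coordinate to its grid cell.
            return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

        def hotspot_scores(incidents, today):
            # incidents: (lat, lon, day) tuples drawn from historical reports.
            scores = defaultdict(float)
            for lat, lon, day in incidents:
                if day <= today:
                    scores[cell(lat, lon)] += math.exp(-DECAY_PER_DAY * (today - day))
            return scores

        # Two recent reports in one cell outrank a single older report elsewhere.
        reports = [(34.05, -118.25, 8), (34.05, -118.25, 9), (34.10, -118.30, 2)]
        ranked = sorted(hotspot_scores(reports, today=10).items(),
                        key=lambda kv: kv[1], reverse=True)
        print(ranked)  # cells to patrol first under this toy model

    The property to notice is that the scores are driven entirely by where past reports came from; the model has no independent knowledge of where crime actually occurs, which matters for the bias critiques discussed below.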

    US-based predictive policing software provider Geolitica (formerly known as PredPol), whose tech is currently used by several law enforcement entities, claims to have removed the race component from its datasets to eliminate the over-policing of people of color. However, independent investigations by the tech website Gizmodo and the research organization The Citizen Lab found that the algorithms actually reinforced biases against vulnerable communities.

    For example, a police program that used an algorithm to predict who was at risk of becoming involved in violent gun-related crime faced criticism after it was revealed that 85 percent of those identified as having the highest risk scores were African American men, some with no previous violent criminal record. The program, called the Strategic Subject List, came under scrutiny in 2017 when the Chicago Sun-Times obtained and published a database of the list. This incident highlights the potential for bias in using AI in law enforcement and the importance of carefully considering the potential risks and consequences before implementing these systems.

    Disruptive impact

    There are some benefits to predictive policing if done right. Crime prevention is a major advantage, as confirmed by the Los Angeles Police Department, which reported that its algorithms produced a 19 percent reduction in burglaries within the indicated hotspots. Another claimed benefit is data-driven decision-making, with patterns dictated by the numbers rather than by human biases. 

    However, critics emphasize that because these datasets come from local police departments, which have a history of arresting disproportionate numbers of people of color (particularly African Americans and Latin Americans), the patterns merely reproduce existing biases against these communities. According to Gizmodo’s analysis of data from Geolitica and several law enforcement agencies, Geolitica’s predictions mimicked real-life patterns of overpolicing in Black and Latino communities, flagging even individuals within these groups who had no arrest records. 
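
    This feedback loop is easy to reproduce in a toy simulation; the sketch below is illustrative only, under stated assumptions, and models no real deployment. Two districts are given identical true crime rates, but one starts with more recorded arrests, and patrols are always sent wherever the records point.

        import random
        random.seed(42)

        # Assumptions: equal true crime everywhere; patrolled districts record
        # a larger share of their crimes; patrols follow the arrest records.
        TRUE_CRIMES_PER_ROUND = 10
        DETECT_PATROLLED = 0.9
        DETECT_UNPATROLLED = 0.3

        arrests = {"A": 30, "B": 10}  # district A starts over-policed
        for _ in range(20):
            target = max(arrests, key=arrests.get)  # the "predicted hotspot"
            for district in arrests:
                rate = DETECT_PATROLLED if district == target else DETECT_UNPATROLLED
                arrests[district] += sum(
                    random.random() < rate for _ in range(TRUE_CRIMES_PER_ROUND))

        print(arrests)  # A's recorded lead keeps growing despite equal true rates

    Because both districts are identical by construction, the widening gap in recorded arrests is produced entirely by where the patrols were sent, which is precisely the “dirty data” dynamic critics describe.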

    Civil rights organizations have expressed concerns over the increasing use of predictive policing without proper governance and regulatory policies. Some have argued that “dirty data” (figures obtained through corrupt and illegal practices) is being fed into these algorithms, and that the agencies using them hide these biases behind “tech-washing” (claiming the technology is objective simply because no humans intervene).

    Another criticism of predictive policing is that it is often difficult for the public to understand how these algorithms work. This lack of transparency makes it hard to hold law enforcement agencies accountable for decisions based on these systems’ predictions. Accordingly, many human rights organizations are calling for a ban on predictive policing technologies, particularly facial recognition. 

    Implications of predictive policing

    Wider implications of predictive policing may include:

    • Civil rights and marginalized groups lobbying and pushing back against the widespread use of predictive policing, especially within communities of color.
    • Pressure on governments to impose oversight policies or departments that limit how predictive policing is used. Future legislation may force police agencies to use bias-free citizen profiling data from government-approved third parties to train their respective predictive policing algorithms.
    • More law enforcement agencies worldwide relying on some form of predictive policing to complement their patrolling strategies.
    • Authoritarian governments using modified versions of these algorithms to predict and prevent citizen protests and other public disturbances.
    • More countries banning facial recognition technologies in their law enforcement agencies under increasing pressure from the public.
    • Increased lawsuits against police agencies for misusing algorithms that led to unlawful or erroneous arrests.

    Questions to consider

    • Do you think predictive policing should be used?
    • How do you think predictive policing algorithms will change how justice is administered?

    Insight references

    The following popular and institutional links were referenced for this insight:

    Brennan Center for Justice: Predictive Policing Explained