All across the U.S., police are increasingly using non-transparent and unaccountable automated decision systems (ADS) that exacerbate and often legitimize a long history of police surveillance and violence. Automated decision systems, technologies that make or help make decisions about people, can have life-or-death impacts, especially when used in the context of policing. These systems also often function as surveillance tools, expanding police surveillance powers and chilling people’s civil rights and civil liberties. People often do not know whether or how these systems are used because law enforcement agencies frequently do not disclose them publicly.
In 2009, a 47-year-old Black woman named Denise Green was pulled over by multiple San Francisco police cars, handcuffed at gunpoint, forced to her knees, and searched, all because a license plate reader error misidentified her car as stolen. In 2015, an 18-year-old named Connor Deleire was sitting alone in a friend’s car in Manchester, New Hampshire when police officers confronted him, bashed his head into a police cruiser, zapped him with a stun gun, and blasted his face with pepper spray, all because the car Deleire was sitting in was parked in an area that an algorithm used by the Manchester Police Department had flagged as the likely scene of a crime. In 2019, a 26-year-old Black man named Michael Oliver was wrongfully arrested and lost his job and car while being held in a Detroit jail for three days on a felony larceny charge, all because police misidentified him using racially biased facial recognition technology.
These stories illustrate some of the harms caused by error-prone automated decision systems. However, even when these systems operate accurately, they can still be inherently problematic.
Below are some examples of ADS that are frequently used by police.
Facial Recognition Technology
Facial recognition technology (FRT) is a type of surveillance tool and automated decision system built for the purpose of analyzing and identifying human faces, often by matching them against a database of faces. Unlike many other biometric systems, facial recognition can be used for general surveillance in combination with public video cameras, and it can be used in a passive way that does not require the knowledge, consent, or participation of the subject. FRT poses serious threats to everyone’s privacy and civil liberties and would exacerbate police violence even if it were error-free, but it is important to emphasize that this technology is rife with racial and gender biases that cause disparate harm. Multiple studies have found the technology is up to 100 times more likely to misidentify Black or Asian faces compared with white faces. It is particularly inaccurate at identifying Black women and transgender individuals.
Since 2019, we have learned of at least three Black men who were wrongfully arrested and jailed because facial recognition software matched them to crimes they did not commit. In addition to Michael Oliver, who was mentioned above, Robert Julian-Borchak Williams went to jail for 30 hours after Detroit police arrested him in front of his wife and children, and Nijeer Parks was jailed in New Jersey for 10 days and spent more than $5,000 in legal fees to defend himself.
Alarmingly, law enforcement agencies are increasingly using facial recognition in routine policing. Local, state, and federal police maintain facial recognition databases filled with mugshots of arrestees. For example, the Kent Police Department and other law enforcement agencies participate in the Regional Booking Photo Comparison System. Officers use photos from security cameras, Ring videos, social media accounts, CCTV footage, traffic cameras, and even photos they have taken themselves in the field to search for matches among the system’s booking photos. It is concerning that these mugshots are taken upon arrest, before a judge has had a chance to determine guilt or innocence, and are often never removed from these databases, even if the arrestee is found innocent.
Additionally, law enforcement has used FRT to target people engaging in protests. During the protests after the death of Freddie Gray, the Baltimore Police Department used facial recognition on social media photos to identify and arrest protesters. Similarly, the FBI used facial recognition to surveil Black Lives Matter protesters after the murder of George Floyd.
Predictive Policing
Predictive policing technology uses algorithms that analyze large sets of data to forecast where crime will happen in the future, decide where to deploy police, and identify which individuals are purportedly more likely to commit a crime. Because predictive policing uses data that reflect the historical over-policing of neighborhoods of color, these technologies are more effective at reinforcing existing biased policing patterns than at predicting crime. Police departments in Los Angeles, New York, and Chicago adopted predictive policing technology but have since shut those programs down due to a lack of transparency, accountability, and efficacy. For example, the Chicago Police Department ran one of the biggest predictive policing programs in the United States. This program created a list of people it considered most likely to commit gun violence or to be victims of it. However, an analysis of the program by the RAND Corporation concluded that it was ineffective at preventing homicides and found that the watch list was overbroad and unnecessarily targeted people for police attention.
Recent research has shown that predictive policing disparately impacts communities of color, exacerbating existing biases in policing. Unfortunately, due to a lack of public oversight, people often do not know whether predictive policing systems are in use or how they work. For example, in 2012 the New Orleans Police Department contracted with Palantir to build a state-of-the-art predictive policing system. However, it wasn’t until 2018 that even members of the New Orleans City Council were made aware of what their police department was doing.
In Washington, as a result of an ACLU-WA public records request in 2020, we know that the Spokane Police Department uses software from Bair Analytics, a company known for developing tools that identify and predict criminal patterns. We also know that the City of Tacoma has used PredPol, a predictive policing tool known to disproportionately target Black, Brown, and poor neighborhoods.
Social Media Monitoring Software
Social media monitoring software (SMMS) can be used to geographically track, analyze, and organize information about our communications, relationships, networks, and associations. It can monitor protests, identify leaders of political and social movements, and measure their influence. Because social media can reveal one’s ethnicity, political views, religious practices, gender identity, sexual orientation, personality traits, relationships, and health status, law enforcement use of SMMS to track, analyze, and record people is a serious invasion of privacy. By its very nature, SMMS enables powerful dragnet surveillance. Law enforcement use of SMMS silences discourse, targets innocent speech, invades people’s privacy, and can wrongly implicate an individual or group in criminal behavior.
For example, a Brennan Center lawsuit against the State Department and Department of Homeland Security (DHS) shows how the collection of social media identifiers on visa forms, which are stored indefinitely and shared across the U.S. government, led many international filmmakers to self-censor by ceasing to talk about politics or promote their work online. Similarly, through public records requests, ACLU-WA is aware that the Tacoma Police Department has conducted extensive social media monitoring of individuals and groups involved in Black Lives Matter protests after the police killing of Manuel Ellis.
Fresno Police Department has used MediaSonar, an SMMS tool that boasted the capacity to identify so-called “threats to public safety” by monitoring hashtags such as #BlackLivesMatter, #DontShoot, #ImUnarmed, #PoliceBrutality, and #ItsTimeforChange.
Concerns about law enforcement use of SMMS have only grown post-Dobbs. Abortion activists often rely on social media, and in the wake of Roe’s reversal, many people went online to share details about how to find and use abortion pills. With abortion now criminalized in many states, SMMS could be used to track and arrest individuals seeking abortions or helping others obtain them.
Automated License Plate Readers
Automated license plate readers (ALPRs) are high-speed camera systems, often mounted on police cars or on objects like road signs and bridges, that photograph thousands of license plates per minute. ALPRs capture information including the license plate number, date, time, and location of each scan. This information is often entered into enormous databases, retained for years if not indefinitely, and may be shared with police networks across the country. This surveillance technology can reveal precise geolocation information about innocent people, such as where they live, where they practice their religion, what bars they go to, and what political meetings they attend.
Law enforcement agencies have used this powerful technology to surveil and violate the rights of individuals and entire communities. For example, the NYPD used ALPRs to engage in religious profiling and suspicionless surveillance of the Muslim community for over a decade. ALPR data in Oakland, California, showed that police disproportionately deploy ALPR technology in low-income communities and communities of color. Police officers have also used ALPR databases to search for romantic interests in Florida and to look up the plates of vehicles parked near a gay bar in order to blackmail the vehicles’ owners. The University of Washington Center for Human Rights recently released a report showing the threats ALPRs pose to immigrant and reproductive rights in our state because of local law enforcement’s unrestricted data sharing with federal and out-of-state law enforcement agencies.
Gunfire Detection Technologies
Gunfire detection technologies are automated decision systems that detect and convey the location of gunfire or other weapon fire using sensors. Law enforcement agencies use these systems to decide where to send police officers or investigate for gun violence. One of the most popular gunfire detection systems, ShotSpotter, is highly inaccurate and ineffective, increases the risk of police violence, and creates significant surveillance concerns.
After hearing these concerns, Seattle recently rejected a $1 million funding proposal for gunfire detection technologies.
Multiple peer-reviewed studies[1] have demonstrated that ShotSpotter is neither accurate nor effective, and its false alarms send police into communities for no reason, increasing the risk of deadly police violence. ShotSpotter uses police officers’ reports as “ground truth” when training its algorithm to avoid errors, which is concerning because a ShotSpotter representative admitted in a 2016 trial that the company often reclassifies non-gunfire sounds as gunshots at the request of its police department customers. Such manipulation further undermines the reliability of ShotSpotter data as evidence in court.
Additionally, ShotSpotter poses serious surveillance concerns. Its microphones can record human voices, and in at least two criminal trials, prosecutors sought to introduce as evidence audio of voices recorded on acoustic gunshot detection systems.[2] People should not have to fear having private conversations surreptitiously recorded in their neighborhoods.
What Washington Can Do
Law enforcement agencies are increasingly using automated decision systems in policing without transparency or accountability. Police use of these systems can have life-or-death impacts. Many of these systems are error-prone, but even when they operate accurately, they can exacerbate police surveillance and violence. The examples of ADS above are just a few of the many systems used in the policing context. We are working to pass a first-of-its-kind algorithmic accountability bill (SB 5116) in Washington that would, at a minimum, prohibit discrimination via algorithm and require government agencies, including law enforcement, to be transparent about the ADS they use.
1. A 2021 peer-reviewed study of ShotSpotter use in 68 U.S. counties showed no reduction in firearm homicides, murder arrests, or weapons arrests. The authors concluded that there is a lack of evidence to support a return on investment (monetary or otherwise) from implementing gunfire detection technologies.
2. A 2020 peer-reviewed study of acoustic gunshot detection systems showed no reduction in serious violent crime, despite a major increase in the number of police deployments responding to supposed gunshots.