Automated Decision Making Systems Are Making Some of the Most Important Life Decisions For You, but You Might Not Even Know It

Published: Wednesday, September 22, 2021
This post is part of a series covering how government agencies in Washington are using automated decision-making systems that affect people’s lives in harmful ways.  
 
Every day, people are denied healthcare, overpoliced, kept in jail, and passed over for jobs because of decisions made or aided by computers. These automated decision-making systems are increasingly affecting our lives, even though we don’t always know it. Although some function simply as calculators, helping government officials apply complex rules to individual circumstances, these systems often reinforce and perpetuate biases, leading to discriminatory impacts that are difficult, if not impossible, to explain and challenge. And more and more often, government agencies are using secret formulas, or algorithms, that the agencies themselves can’t even examine, let alone understand.
 
Here in Washington, government agencies across the state are using algorithms to help make some of the most important decisions in people’s lives. In some courts, the judge’s decision to release or detain defendants pretrial is informed by a score that is calculated by software. In Kennewick, the city sifts through job applicants using software from Caliper, a company that “measures personality traits” using secret formulas to assess candidates’ fitness for certain roles. In Tacoma, police have used PredPol software, which uses a secret algorithm to predict the exact blocks where future crimes will occur, and recommends that police spend extra time patrolling those blocks. And throughout the state, Medicaid patients’ claims are assessed by software from Optum Health Solutions, which uses artificial intelligence to score claims for likelihood of fraud.  
 
Our government agencies have long used automated decision systems to process information according to certain rules. In fact, some of the earliest computers were created specifically to help the federal government process millions of census records. But increasingly, automated decision-making systems take this processing a step further, with computers not just following rules but creating them.
 
Traditionally, government agencies have used automated systems that follow published rules devised by humans. For instance, when sentencing defendants to prison for state offenses, a Washington judge may use a spreadsheet to help calculate the sentencing range according to the complicated “sentencing grid” rules that apply to that case. The system weighs the charges, prior convictions, and other factors to determine the state-mandated sentencing range for the case. State commissions and councils continually review and update the sentencing grid, publish information about how the grid works, and report publicly on whether the grid results in racially disproportionate sentencing. Although the sentencing grid itself is a type of automated system that may be unjust, can discriminate, and should be assessed for bias, the public is able to see and understand how it works and may lobby their elected representatives for changes. 
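For readers who want to see the difference concretely, the sketch below shows what a transparent, rule-following system looks like. The seriousness levels, offender scores, and month ranges are hypothetical placeholders, not Washington’s actual sentencing grid; the point is that every rule sits in a table anyone can read, audit, and lobby to change.

```python
# A minimal sketch of a transparent, rule-following calculation, loosely modeled
# on a sentencing-grid lookup. The levels, scores, and month ranges below are
# hypothetical placeholders, not Washington's actual grid.

# Rows: offense seriousness level; columns: offender score (criminal-history points).
HYPOTHETICAL_GRID = {
    # seriousness_level: {offender_score: (low_months, high_months)}
    1: {0: (0, 2), 1: (0, 3), 2: (2, 5)},
    2: {0: (1, 3), 1: (3, 8), 2: (4, 12)},
    3: {0: (3, 9), 1: (6, 12), 2: (12, 14)},
}

def sentencing_range(seriousness_level: int, offender_score: int) -> tuple:
    """Look up the standard range (in months) for a given offense and history."""
    row = HYPOTHETICAL_GRID[seriousness_level]
    # Illustrative rule: scores above the highest column fall into the last column.
    return row[min(offender_score, max(row))]

if __name__ == "__main__":
    low, high = sentencing_range(seriousness_level=2, offender_score=1)
    print(f"Standard range: {low} to {high} months")
```

Because the whole table is published, anyone can trace exactly why a given range was produced, which is what makes public oversight possible.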
 
It is important to note that even transparent algorithms and automated systems can cause harm by perpetuating and exacerbating biases. However, agencies are increasingly turning to automated systems that follow rules and formulas the public has no ability to see or understand. These high-tech systems are often created and licensed by private companies, using proprietary formulas that are shielded even from the agencies themselves as trade secrets. The system vendors may claim to provide better results at lower costs than human decision-makers can, essentially replacing traditional labor costs with technology. However, when agencies delegate their decision-making processes to these third-party black boxes, they lose the ability to assess whether the decisions being suggested are just, and the public loses the ability to identify and challenge unfairness or to account for the costs of lost public goods and benefits.
 
Many of these automated decision-making systems are created using mathematical formulas based on statistical regression analysis or machine learning (a type of artificial intelligence). In a nutshell, some of these formulas, or algorithms, are created by using computers to look for commonalities in large datasets. For instance, an HR software company might create hiring formulas based on a computation of what successful, or unsuccessful, job candidates have in common, given all their data from the job application process (e.g., employment history, college major, school attended, zip code). Once the computer determines the secret formula for success based on all the data available, that formula can then be applied to future job applicants to score their “fitness” for a role.
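As an illustration of how such a formula might be built, here is a simplified sketch using a standard statistical model (logistic regression). The feature names, example records, and the choice of scikit-learn are our own assumptions for illustration, not any vendor’s actual method.

```python
# A simplified sketch of building a hiring "fitness" score from historical
# application data. Everything here (features, records, model choice) is an
# illustrative assumption, not any vendor's actual system.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical historical data: applicant attributes plus whether they were hired.
past_applicants = pd.DataFrame({
    "college_major":  ["CS", "History", "CS", "Nursing", "CS", "History"],
    "zip_code":       ["98101", "98118", "98101", "98118", "98052", "98118"],
    "employment_gap": [0, 1, 0, 1, 0, 1],
    "hired":          [1, 0, 1, 1, 1, 0],
})

features = ["college_major", "zip_code", "employment_gap"]
model = Pipeline([
    ("encode", ColumnTransformer(
        [("onehot", OneHotEncoder(handle_unknown="ignore"),
          ["college_major", "zip_code"])],
        remainder="passthrough")),
    ("classify", LogisticRegression()),
])

# The model learns whatever the past hires have in common -- including any bias
# baked into those past decisions.
model.fit(past_applicants[features], past_applicants["hired"])

# A new applicant is then reduced to a single "fitness" score.
new_applicant = pd.DataFrame(
    [{"college_major": "History", "zip_code": "98118", "employment_gap": 1}])
print(model.predict_proba(new_applicant[features])[0, 1])
```

Nothing in this process checks whether the patterns the model picks up are fair or even relevant to job performance; it simply reproduces what past decisions had in common.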
 
However, because these algorithms by their nature favor people who are similar to those who have previously been hired, they reinforce any bias that shaped past hiring decisions, except without any visibility or opportunity for correction. In a blatant example, Amazon devised a hiring formula that examined 50,000 terms used in resumes, and found that applicants whose resumes included the word “women’s” or who had graduated from certain all-women’s colleges were rated less likely to be successful than other candidates. That bias was obvious, but not all are: sometimes characteristics that aren’t directly associated with sex, age, race, and other protected classes may still indirectly reflect them, without any real underlying justification. For instance, a machine-learning formula may discriminate against candidates who attended community college, which could indirectly disadvantage individuals from less affluent households; those who have gaps in their resumes, which could disadvantage the formerly incarcerated, pregnant individuals, and people with disabilities; or those who live in certain zip codes, which could disadvantage Black and Brown candidates. When a secret formula is used to screen job applications, the hiring manager may never know why certain applicants are being suggested or rejected, or whether those reasons have anything to do with their ability to do the job.
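The sketch below shows how this kind of proxy discrimination can happen even when the protected characteristic is never given to the model. The zip codes, group labels, and hiring outcomes are invented for illustration.

```python
# A minimal sketch of proxy discrimination: the protected class is never a model
# input, yet a correlated feature (a hypothetical zip code) lets the model
# reproduce the bias in past decisions. All data are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Invented history in which past hiring disfavored group B, and group membership
# happens to track zip code.
history = pd.DataFrame({
    "zip_code": ["98101", "98101", "98101", "98118", "98118", "98118"],
    "group":    ["A", "A", "A", "B", "B", "B"],  # protected class, NOT a model input
    "hired":    [1, 1, 0, 0, 0, 1],
})

encoder = OneHotEncoder(handle_unknown="ignore")
X = encoder.fit_transform(history[["zip_code"]])  # only zip code is used
model = LogisticRegression().fit(X, history["hired"])

# Score one applicant from each zip code: the gap mirrors the historical bias
# against group B, even though "group" was never a feature.
applicants = pd.DataFrame({"zip_code": ["98101", "98118"]})
scores = model.predict_proba(encoder.transform(applicants))[:, 1]
for zip_code, score in zip(applicants["zip_code"], scores):
    print(f"zip {zip_code}: predicted 'fitness' {score:.2f}")
```

Dropping the protected attribute from the inputs does not remove the bias, because the proxy carries it; only auditing the outputs by group would reveal it.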
 
Even when the automated decision-making systems are transparent, they can still perpetuate bias because they are based on flawed inputs. For instance, when Washington courts decide whether to release a defendant pretrial, they sometimes use openly published risk assessment tools. However, because these tools use criminal history as a major input in the decision process, the results reflect the racism inherent in our criminal legal system, from policing to prosecution to incarceration. 
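As a rough illustration of why the inputs matter so much, here is a hypothetical points-based risk score in the spirit of openly published assessment tools; the factors and weights are invented and do not correspond to any actual Washington tool.

```python
# A hypothetical, transparent points-based pretrial risk score. The factors and
# weights are invented for illustration and are not any real tool's formula.
def pretrial_risk_score(prior_convictions: int,
                        prior_failures_to_appear: int,
                        pending_charge: bool) -> int:
    """Return a hypothetical risk score; higher is treated as 'riskier'."""
    score = 0
    # Criminal history dominates the score, so any bias in past policing,
    # charging, and conviction flows straight through to the output.
    score += 2 * min(prior_convictions, 3)
    score += 2 * min(prior_failures_to_appear, 2)
    score += 1 if pending_charge else 0
    return score

if __name__ == "__main__":
    print(pretrial_risk_score(prior_convictions=2,
                              prior_failures_to_appear=0,
                              pending_charge=True))
```

Even though every line of this formula is public, its output is only as fair as the criminal-history records it consumes.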
 
At Washington government agencies, these automated decision-making systems are being used not just for hiring in places like the city of Kennewick, but also for policing, pretrial release decisions, distribution of benefits, and determining whether Medicaid claims are fraudulent. In future blog posts, we will explore how these systems are being used in Washington, and how the ACLU of Washington suggests bringing more transparency and accountability to their use.
 
Although this issue is largely hidden from public view, it is causing real harm to Washingtonians every day. We invite you to join us as we shine a light on these secretive practices in our state government.