
Recommendations for Advancing Equity and Civil Rights in AI Policy

Published: Friday, December 1, 2023


WA State Environment, Energy, and Technology Committee Working Session on AI

The American Civil Liberties Union of Washington (ACLU-WA) has a long history of working on civil rights issues in the realm of technology, including protecting people’s privacy and advancing digital equity.

Artificial Intelligence (AI) is poised to transform a range of industries and is already having a large-scale impact on society. AI risks and harms intersect with civil liberties in every aspect of our lives, including employment, healthcare, education, and housing. We see this as a critical moment to anticipate the potential harms of AI and to develop meaningful safeguards through regulation. Rather than running counter to innovation, addressing these harms will allow Washington state to lead in AI by developing cutting-edge technologies that benefit society while protecting civil rights, preventing unlawful discrimination, and advancing equal opportunity for all Washingtonians.
 
AI Risks and Harms: A Few Intersections with Civil Liberties

AI can cause a range of privacy harms and violations.
AI is being used to collect and analyze vast amounts of data about people, often without their knowledge or consent. Further, AI-based surveillance systems can violate people’s privacy by extensively tracking them and making inferences about their opinions and activities, potentially chilling free speech.

AI is also creating new forms of privacy violations. For example, AI has been used to generate fake nude images of teenagers across the country, including at an Issaquah high school, and existing child pornography and revenge pornography laws do not apply to these images. AI can also be used to infer private information that people may not choose to disclose. For example, AI software in a healthcare setting accurately predicted patients’ races even though race-specific data was not collected, potentially amplifying existing inequities in medical care.

The lack of transparency around AI prevents people from making meaningful choices about whether and how to use it.
A majority of Americans are concerned about the increased use of AI in daily life, according to a 2023 Pew Research Center survey. This concern extends to many domains; for example, most Americans oppose the use of AI in making major employment decisions or in tracking workers in the workplace, and 60% of Americans say they would feel uncomfortable with their healthcare provider relying on AI for their care.

At the same time, many Americans do not know when a system is using AI. This lack of transparency prevents people from understanding the implications of using AI-powered systems and services and from making meaningful choices about their use. It also hampers external assessment of these systems’ risks and functionality.

AI can replicate existing societal biases and produce biased content and discriminatory outcomes.
AI technologies can make discriminatory decisions based on inaccurate or biased information, with serious impacts on people’s well-being and livelihoods across several domains, including the following:
  • Employment: AI is being used in several areas of employment, including hiring workers, tracking their activities, and evaluating their performance, with concerning impacts. For example, 70% of companies use AI-based tools in their hiring processes, and these algorithms can disadvantage or reject job seekers based on race, age, sex, disability, or other protected characteristics.
  • Healthcare: Clinical AI tools are being used in many healthcare settings but can be subject to bias, such as failing to accurately predict the need for follow-up care for Black patients.
  • Education: Exposing students to biased or harmful content is a key risk of AI in educational settings, according to Teach for America. For example, generative AI has produced sexist output that could impact students, such as ChatGPT’s statement that “only White or Asian males make good scientists.”
  • Housing: AI tools have discriminated against applicants of color, overcharging them on loans and rejecting creditworthy applicants.
  • Criminal Justice: Facial recognition software is less accurate for people of color, and its use by police departments has resulted in numerous false arrests, particularly of Black men.

Key Policy Recommendations:

The following recommendations aim to improve transparency, accountability, and enforcement around AI while protecting people’s civil rights.

• Take a multi-stakeholder approach to identifying and understanding AI risks and harms to inform regulation.

To anticipate and avoid the AI harms and risks outlined above, AI policies and processes must be informed by input from diverse stakeholders, including communities that may be disproportionately harmed by these technologies. We recommend establishing a task force to explore these issues before making far-reaching policy decisions on AI. The task force should include experts and stakeholders from government agencies, private industry, academia, civil rights organizations, and community-based organizations representing the communities most likely to be negatively impacted by AI.

• Government agencies and private sector entities must be transparent about their use of AI and must obtain opt-in consent to collect, store, or use people’s data.

Governmental and private entities must make clear to individuals when they are interacting with an AI system or AI-generated output, or when a decision that affects them has been made by AI. This will allow people to make meaningful choices about whether to use these systems and to seek redress when they face negative outcomes. Further, companies and state agencies must be prohibited from collecting, using, or selling people’s data without opt-in consent. People should have the right to withdraw consent and/or ask for their data to be deleted.

• Government agencies and private sector entities that use AI should be required to evaluate their automated decision systems’ impacts on fairness, justice, bias, and other community concerns, and to consult with impacted communities.

Third-party data audits should be implemented to determine whether the underlying data is free of bias and to ensure that algorithms are transparent and fair and that their decisions are explainable. Algorithmic assessments must be interdisciplinary to account for gaps in individual knowledge and perspectives, and evaluations should take place throughout the product lifecycle.
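
To make these audits concrete, the following is a minimal sketch, in Python, of one statistical check an independent auditor might run: the “four-fifths rule” used in employment law to flag disparate impact in an automated decision system’s selection rates across demographic groups. The decision records, group labels, and function names here are hypothetical; a real audit would apply many such tests across the product lifecycle.

    from collections import defaultdict

    def selection_rates(records):
        # records: iterable of (group, selected) pairs, where selected is
        # True when the system produced a favorable decision for the person.
        counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
        for group, selected in records:
            counts[group][0] += int(selected)
            counts[group][1] += 1
        return {g: fav / total for g, (fav, total) in counts.items()}

    def four_fifths_flags(rates, threshold=0.8):
        # Flag any group whose selection rate falls below `threshold` times
        # the highest group's rate (the EEOC "four-fifths" rule).
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()
                if rate / best < threshold}

    # Hypothetical decision records from an automated hiring system.
    decisions = ([("Group A", True)] * 60 + [("Group A", False)] * 40
                 + [("Group B", True)] * 35 + [("Group B", False)] * 65)

    rates = selection_rates(decisions)  # {'Group A': 0.6, 'Group B': 0.35}
    flags = four_fifths_flags(rates)    # {'Group B': 0.583...}
    print(rates, flags)

A selection-rate ratio below 0.8 does not by itself establish discrimination, but it is the kind of red flag that should trigger the deeper, interdisciplinary review recommended above.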

• Establish minimum standards of fairness and accountability for any government agency buying or using automated decision systems. Prohibit any government agency from developing or using automated decision systems that discriminate against people.

State agencies should be required to obtain independent review and approval before any system is implemented. This should include algorithmic impact assessments and data audits for automated decision systems used in employment, education, insurance, banking, healthcare, housing, and public agencies. Agencies should ensure there are processes in place for community input and engagement.

• Empower people to enforce their privacy rights via civil action.

As with the enacted ESHB 1155 (My Health, My Data), violations of the law should also be made violations of the Consumer Protection Act, providing people with a private right of action and giving the Attorney General’s Office enforcement authority when people’s privacy or other civil rights are violated.

• Increase funding for accountability mechanisms and processes.

Increase budgets at state agencies for technical capacity-building and hiring so that regulators, attorneys general, and other enforcement agencies can meaningfully investigate, document, and audit algorithms for bias and discrimination. Effective transparency and accountability require data regulators who are empowered to conduct algorithmic bias assessments, have the technical capacity to analyze and audit decision-making algorithms, and can penalize companies for unfair and illegal practices.