Microsoft agrees face surveillance is dangerous, but offers no credible solutions

Published: Tuesday, December 11, 2018
The best way to prevent a surveillance state is not to build one

Microsoft President Brad Smith, in a blog post published on December 6th, acknowledged that widespread adoption of facial recognition technology (or face surveillance) carries significant risks. These include potentially moving us towards a 1984-like society where government eyes and ears are everywhere, bias and discrimination are exacerbated, and privacy is virtually non-existent. Smith even affirmed that government use of face surveillance poses a serious threat to our democratic freedoms.
 
Smith is absolutely right to anticipate those dangers, and perhaps it is a sign of progress that a senior figure in one of tech’s biggest companies has recognized the scale of face surveillance’s threat to civil liberties—a threat we’ve long highlighted.
 
Still, something doesn’t compute. Having outlined the threat to millions of people, including vulnerable communities such as people of color, religious minorities, people with disabilities, and people with limited financial resources, Smith proposes relying on safeguards that have already proven inadequate for technologies far less dangerous than face surveillance. The long history of surveillance technology in this country demonstrates that this approach will fail.
 
And something else is amiss. Underlying Smith’s proposals is the assumption that widespread adoption of face surveillance technology is simply inevitable, and that society must find a way to deal with the consequences. This assumption is both dangerous and false. The United States is a democracy: people in this country get to decide collectively what is fair and acceptable, and that decision should include the communities most likely to be impacted. The question to be decided now is not how to make face surveillance less bad, but whether we want it at all.
 
Impact on Vulnerable Communities
 
Throughout American history, whenever surveillance tools have been developed, the government has used them against people of color, immigrants, and religious and political minorities, the individuals and communities most vulnerable to tyranny. Japanese Americans, for example, were unconstitutionally incarcerated in large numbers after Pearl Harbor. More recently, Muslims have been subjected to suspicionless, warrantless targeting, including a years-long program in New York City, aided by technology such as license plate readers, that failed to find a single terrorist. Today, the federal administration has made no secret of its desire to exploit surveillance systems to target immigrants. Even here in Washington, face surveillance systems within agencies such as the Department of Licensing (DOL) have helped the administration target immigrants for deportation. It is not hard to imagine a game-changing technology like face surveillance putting these vulnerable communities in harm’s way and making them even more fearful.
 
To address the proven problems of bias and discrimination in face surveillance technology, Smith says that what is needed, in part, is new legislation requiring companies that make face surveillance tools to provide consumers with more information, along with laws enabling third-party testing and comparison of these tools.
 
But bias in the tool and the potential for intentional discrimination with it, while real problems, are not the heart of what makes face surveillance so dangerous for people of color. The deeper issue is that face surveillance amplifies systemic bias that is already widespread. Face surveillance will be directed and fueled by skewed datasets from our disproportionate criminal justice system, worsening the historic over-policing and over-surveillance of communities of color and immigrants by both government and private actors. When bias and discrimination are baked into existing systems, it won’t take individual, biased actors to cause widespread harm with face surveillance.
 
Privacy Intrusions
 
As for the grave threats face surveillance tools pose to individual privacy, Smith says the law should require entities using face surveillance to notify customers, and that consumers should have to consent to face surveillance before entering a premises or using an online service.
 
Unfortunately, “notice and consent” is a model that has demonstrably failed to protect individual consumers. Have you ever tried to find out what data the apps on your phone collect? Or to evaluate the safety and security of a software update from, say, Microsoft? Even when people receive notice that their personal data is being collected, they typically click through the agreement anyway, because when every entity is collecting personal data, opting out becomes impossible and consent becomes meaningless.
 
Putting the onus entirely on individual consumers to navigate the high stakes of face surveillance creates an unfair, unreasonable burden, one based on the flawed assumption that consumers have real choices in the marketplace, when many, in fact, don’t. People with disabilities, lower-income people, and those with limited mobility have no meaningful opportunity to “opt out” if the only grocery store in their neighborhood uses face surveillance. Notice and consent could become an illusion of privacy protection that fuels the broad spread of face surveillance in both the public and private sectors.
 
The Coming Police State
 
Moving from privacy to the big picture, Smith is entirely right when he asserts that our faces deserve the same constitutional protection as our phones, and that “we must ensure that the year 2024 doesn’t look like a page from the novel ‘1984.’” But what follows is not reassuring. Smith believes the answer lies in new legislation to permit law enforcement agencies to track specific individuals using face surveillance only when there is a court order or “where there is an emergency involving imminent danger or risk of death or serious physical injury to a person.”
 
Despite sounding protective, this provision is actually very permissive. Law enforcement agencies often ignored similar emergency provisions for surveilling specific individuals enacted after 9/11; in targeting particular communities, they used the technologies to gather evidence and later denied doing so. It is unlikely that face surveillance will turn out differently.
 
Even more dangerous is Smith’s focus on individual tracking to the exclusion of general surveillance. That focus leaves the door open for the government to adopt widespread, suspicionless face surveillance, something police agencies in cities such as Orlando already appear to be contemplating. That kind of infrastructure would subject anyone’s actions in public spaces to government surveillance virtually all the time, much like China’s current reality, where ruling authorities rely on surveillance to maintain social control. And face surveillance can do much more than identify you: some forms of the technology already purport to tell whether someone is angry or happy, or even to assess a person’s propensity for terrorism, simply by analyzing their face. Face surveillance is dangerous whether it is individually targeted or not.
 
The grim reality is that, if a face surveillance infrastructure is ever fully realized in the U.S., it will be so powerful that the average individual will have little chance of defending against it, and that is especially true for those most likely to be targeted. Here in Washington state, a recent statewide public records request by the ACLU of Washington found that dozens of police agencies have access to at least some form of face surveillance technology.
 
Microsoft says “it’s time for action” to prevent these dangers. We agree. But we cannot have a debate about whether we want widespread face surveillance if the technology is already spreading rapidly. Microsoft should stop selling face surveillance to the government and join the ACLU’s call for a federal moratorium on government use of this technology.