Assessment: Troubling
King County, home to Seattle, has become the first county in the nation to ban the use of facial recognition technology. It’s not the first political jurisdiction to do this: thirteen cities, including San Francisco and Boston, already have their own facial recognition bans on the books. So does the entire state of Vermont. But this move by King County remains a notable first at the county level, as well as an opportunity to revisit the arguments against letting this particular technology run wild.
The new King County ban on facial recognition also offers a relatively rare example of elected officials in the United States putting a certain innovation on ice because of worries about its potential harms. Which raises the question: Should we be doing more of this?
The concern about governments using facial recognition is well explained by the case of Robert Julian-Borchak Williams, who was wrongly arrested in front of his family and booked for larceny after a facial recognition program incorrectly identified him as having stolen some high-end watches. Williams is black, as are other men who’ve been wrongly accused of crimes in recent years because of faulty facial recognition matches.
Though horrifying, these wrongful arrests should not be surprising given that studies have repeatedly shown facial recognition technology is too often inaccurate when it comes to black faces (particularly black female faces). The technology also has accuracy problems with Asian faces and with identifying gender.
So there was plenty of evidence behind the unanimous June 1 vote by the King County Council to ban the use of facial recognition by the county’s employees and departments, including the King County Sheriff’s office. The unanimity of the vote meant Republicans and Democrats were for once on the same side of a hot issue, yet another reminder that alarm about government surveillance and technological overreach now crosses party lines.
Afterward, King County Councilmember Jeanne Kohl-Welles, who sponsored the bill, noted that some people believe this is all going to end up a passing concern because facial recognition’s flaws will soon be erased by technological improvements. But, she asked in an interview with NPR, “Even if this technology were to become perfect, is that something we want?”
This seems to be the question right now: Do we really want what powerful new technologies are promising us?
There are no easy answers. But in light of what just happened in King County, it’s worth noting how rarely the question is even asked by those with power. It’s unusual for new technologies to be treated by public officials with the skepticism and regulatory speediness that’s now being applied to facial recognition.
Think about the ad hoc patchwork of facial recognition bans now developing around the country. Think about Amazon’s ongoing ban on police use of its facial recognition products. And now consider, because it’s at the center of everything these days, social media.
Despite long-building evidence of social media’s harms to individuals and democracy, we’ve essentially decided, as a society, that we’re going to let our digital platforms show us all the harms they can possibly cause before we decide to do anything about them in a systematic manner.
We are in a perpetually reactive posture when it comes to these platforms, whereas when it comes to facial recognition a growing number of people with clout have decided we need to be proactive and get ahead of its problems.
Others, like Kohl-Welles, are suggesting we go a step further and use the time bought by various facial recognition bans to ask deeper questions, like whether it’s a good idea for humans to have this particular tool to use on each other. We can build a lot of amazing things. Do we really want this thing?
Such cautious approaches inevitably lead people to ask: Where will it end? With academics and other wizened wonks sitting around a table pondering a technology’s pitfalls in advance of its widespread adoption? Perhaps these same experts will also suggest governmental limits or regulations that might lessen the technology’s social downsides while encouraging its upsides? That might sound like an absurd scene, unlikely to ever occur at a national level in this country. But in fact the opposite is true. It’s happened before in America, and there are some who fervently hope it will begin happening again.
In a great lecture I happened to catch in May, the political theorist and technology scholar Langdon Winner resurrected the history of the “technology assessment” movement. That movement, in his telling, emerged in the 1960s among experts who observed how machines and computers were already transforming society. These experts foresaw that the pace of technological change was only going to accelerate, with potentially destabilizing and destructive results coming along with new benefits. So they proposed that groups of experts be deployed to guide us away from easily foreseeable tech-fueled disasters and toward better tech-powered outcomes.
In 1972, as an outgrowth of this movement, the US House of Representatives created its own Office of Technology Assessment. This advisory group’s task was to help members of Congress think about science-based technologies and how to regulate them. Professor Winner was part of the group’s work, which included sponsoring hundreds of studies and, naturally, holding lots of meetings in which experts in various fields would come together and talk, symposium style. In his recent lecture, Professor Winner described something he came to see as significant about those meetings:
Physically speaking, you would sit in a room, in a rectangle of chairs, around which sat experts on a topic. For example, data privacy in computing. And they would share their views on what the problems were and what recommendations made sense. But there was also in these rooms a larger outside rectangle of observers in chairs who would listen in and occasionally join the conversation.
These were usually people from business firms or other prominent organizations with a financial or particular policy interest in the topics under discussion. The attitude of people in the outer rectangle was consistently that yes, technology assessment is all well and good, but that Congress should definitely not seek to contradict or limit the emerging plans and projects of moneymaking enterprises.
The Office of Technology Assessment survived until the mid-1990s, when Newt Gingrich took power as the Republican Speaker of the House and eliminated all of its funding. Not too many years afterward, the Internet era began. One can wonder whether the office might have had some useful input as the era we now live in got rolling.
Maybe some of the summoned experts would have shouted out early warnings. Maybe they would have prevented us from arriving at the current trend in technological critique, which Professor Winner describes as mostly involving “elaborate lamentations” after the societal damage has already been done (and, worse, “after these maladies become entrenched patterns likely beyond any conceivable remedy”).
There are presently “some rumblings” in DC, he said, about reinstating the Office of Technology Assessment. In the current climate, it’s easy to imagine the idea being derided for creating yet another elitist body filled with government-appointed eggheads who gather for what are basically Technology Death Panels. But Professor Winner said there’s a still-unrealized vision for the Office of Technology Assessment that’s exceedingly democratic. It involves the creation of “participatory citizens councils” in every Congressional district that would also weigh in on new technologies, bringing popular input to what would otherwise be a cloistered process. He hopes that vision could be realized as part of any revived Office of Technology Assessment.
To prove how readily foreseen some of our current technological maladies were, Professor Winner recalled being in an Office of Technology Assessment meeting decades ago when the now-burning issue of data privacy arose. A prominent legal scholar at the time weighed in, saying, in Professor Winner’s recounting: “Well, the crucial point is that new computing systems ought to recognize the right of the individual to control the data being stored on them.”
Professor Winner continued: “And in a room full of people who had been thinking about such matters, they all agreed: Yes, that would be a wonderful principle to guide systems. In other words, that one would be formally asked what would happen to the data being kept on you.”
Again, this was decades ago. Yet before too long the office and its meetings were shut down, and in the decades that followed, the digital rush toward “surveillance capitalism” took off.
Some of the things I’ve been reading this week:
• Having a YouTube channel does not automatically make you a member of the media — So ruled the Washington State Supreme Court recently, raising questions about exactly where the definition of journalist begins and ends.
• Being an elected official no longer gives you the freedom to say harmful things on Facebook — Political leaders will now be treated like everyone else, rather than being given a “newsworthiness exemption” that in the past allowed people like Donald Trump to break Facebook’s speech rules with impunity.
• Amazon’s routing algorithm sends drivers sprinting across highways on foot — The drivers shared their concerns with Vice, for a story that looks at the burdens they face in the name of “stop consolidation.”
• Facebook has clarified and extended its Trump ban — It will last at least two years and, at the moment, is set to expire in 2023 (just before the next presidential election).
Questions? Tips? Comments? wildwestnewsletter@gmail.com