"A Decision Best Made by People"
Last week something unusual happened on Twitter, but not, at first, in the main “feed” that users see when they log on. It happened instead in a company blog post that ended up being passed around by tech journalists and others who were impressed by the post’s unusual openness and self-critical tone.
The focus of this notable post was the manner in which Twitter uses algorithms. Given the costly education we’ve all been receiving in the ways social media algorithms shape our politics and consciousness, this particular blog post was destined to be closely examined. The fact that it concerned allegations of racial bias in Twitter’s algorithm for cropping photos made it even more certain to garner attention.
But what actually ended up receiving the most attention, at least in my Twitter feed, was the way Twitter ultimately addressed the problem. Rather than follow Facebook’s recent strategy of reminding people “it takes two to tango” when it comes to social media algorithms, Twitter listened and looked inward. Over the course of six months, it examined whether the concerns about its photo-cropping algorithm were valid. When the company’s own experiments showed that they were, it announced last week that it agrees with its critics in this case.
The algorithm in question had long decided where the center of any uploaded Twitter photo should be located. The center-point then determined how the photo would be cropped in order to standardize the photo’s size with every other user-generated Twitter photo scrolling through people’s feeds. The algorithm was known as a “saliency algorithm,” the company explained in its blog post, because it “works by estimating what a person might want to see first within a picture.”
To become an expert on visual saliency, Twitter’s algorithm had gone through intensive training on “how the human eye looks at a picture.” It then used that knowledge “as a method of prioritizing what’s likely to be most important to the most people.”
In practice, Twitter said, this meant that “the algorithm, trained on human eye-tracking data, predicts a saliency score on all regions in the image and chooses the point with the highest score as the center of the crop.”
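For the technically curious, here’s a minimal sketch of what “predict a score everywhere, then crop around the highest-scoring point” can look like in code. The predict_saliency function below is a stand-in (Twitter’s real model was trained on human eye-tracking data), and the names and crop logic are my own illustration, not the company’s implementation.

```python
# Illustrative sketch of saliency-based cropping as Twitter describes it,
# not the company's actual code. The "model" here is a dummy stand-in.
import numpy as np

def predict_saliency(image: np.ndarray) -> np.ndarray:
    """Stand-in for a trained saliency model: returns one score per pixel.
    A random map is used here purely so the sketch runs end to end."""
    rng = np.random.default_rng(0)
    return rng.random(image.shape[:2])

def crop_around_most_salient(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Choose the highest-scoring point as the crop center, then clamp the
    crop window so it stays inside the image bounds."""
    saliency = predict_saliency(image)
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = int(np.clip(cy - crop_h // 2, 0, image.shape[0] - crop_h))
    left = int(np.clip(cx - crop_w // 2, 0, image.shape[1] - crop_w))
    return image[top:top + crop_h, left:left + crop_w]

# Example: crop a placeholder 600x800 RGB image down to a 300x400 window.
photo = np.zeros((600, 800, 3), dtype=np.uint8)
print(crop_around_most_salient(photo, 300, 400).shape)  # (300, 400, 3)
```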
What Twitter’s own research found was that its “saliency algorithm” indeed tended to center white people rather than black people (a rough sketch of what these parity figures mean follows the list):
• In comparisons of black and white individuals, there was a 4% difference from demographic parity in favor of white individuals.
• In comparisons of black and white women, there was a 7% difference from demographic parity in favor of white women.
• In comparisons of black and white men, there was a 2% difference from demographic parity in favor of white men.
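If you’re wondering what a “difference from demographic parity” means in practice: under parity, paired comparisons would favor each group equally often, and the percentages above measure how far the algorithm strayed from that 50/50 split. Below is a rough sketch of one plausible way to compute such a gap; the paired-trial setup and the numbers are my own example, not Twitter’s evaluation code.

```python
# Illustrative only: one plausible way to measure a gap from demographic
# parity in paired comparisons, not Twitter's actual methodology. Assumes
# each trial pairs a black and a white individual in one image and records
# whether the algorithm centered the crop on the white individual.
from typing import Sequence

def parity_gap(favored_white: Sequence[bool]) -> float:
    """Percentage-point difference from demographic parity (a 50/50 split)."""
    share_white = sum(favored_white) / len(favored_white)
    return (share_white - 0.5) * 100

# Hypothetical example: if 54 of 100 paired trials favored the white
# individual, the gap from parity is 4 percentage points.
trials = [True] * 54 + [False] * 46
print(f"{parity_gap(trials):.0f}% difference from demographic parity")
```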
What to do about this?
In general, when confronted with proof that their technologies are producing harmful results, the response of American social media giants has been to propose technological solutions. The algorithm’s doing something bad? Well, then it just needs more training! No one’s perfect! Give us a moment and we’ll come back with a new and improved algorithm. And so on.
So it’s remarkable that Twitter’s major take-away from its racially biased photo-cropping algorithm was that the company needed to scrap the entire idea of algorithmic photo-cropping. Instead of reaching for a new technological solution, Twitter reached for an old human one.
“One of our conclusions,” the company said in the blog post, “is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people.”
If there is a common thread to all the criticism of America’s powerful social media giants in recent years, it’s that their success rests on their willingness to jettison the idea that human judgment should decide what’s broadcast to the masses. Instead, these companies rely primarily on machines, machine-learning, and, of course, algorithms to determine how they should sort, filter, deliver, and sometimes block the countless pieces of human- and advertiser-generated content they’re displaying to audiences around the globe every instant.
A major reason the companies deploy algorithms is speed. As Twitter noted when it weighed the trade-offs of algorithmic photo-cropping, one point in the algorithm’s favor was “the speed and consistency of automated cropping.” On the other hand, the company noted, there were “potential risks” of algorithmic cropping, including “insensitivities to cultural nuances” and “the representational harm of the automated algorithm when people aren’t allowed to represent themselves as they wish on the platform.”
In this (rare) case, Twitter decided that the potential harms linked to the algorithm outweighed its benefits.
Going forward, Twitter users will now be given “more control over how their images appear,” including being able to see “a true preview of the image in the Tweet composer field, so Tweet authors know how their Tweets will look before they publish.”
Firing a photo-cropping algorithm and transferring its job into the hands of users is a refreshing response, at least in the world of social media, but it won’t work for all of the industry’s challenges. The moderation of written content, for example, won’t be done better if it’s left in the hands of users. But the mere admission by an American social media giant that “not everything… is a good candidate for an algorithm” feels hopeful. So does Twitter’s expression of gratitude to its users for “sharing your open feedback and criticism of this algorithm with us.”
The company says it’s continuing to investigate “the potential harms that result from the use of algorithmic decision systems,” and it promises “more updates and published work like this in the future.” Yes, please.
Some of the stories I’ve been reading this week:
• “The Anxiety of Influencers” — A great longread from Harper’s.
• Parler is (selectively) cracking down on hate speech — Why? Because it decided it needed to get back into Apple’s App Store.
• The wide world of actions taken by governments against misinformation — Poynter has a good round-up.
• Biden cancels Trump executive order that would have weakened Section 230 protections — The move won’t end Congress’s long (and so-far-unproductive) fixation on the subject. It just ends a particularly Trumpy approach to the problem.
• Meet Citizen, the “subscription law enforcement service” — Vice reports that the app “recently offered a $30,000 bounty against a person it falsely accused of starting a wildfire,” and it’s presently doing something mysterious in LA with a vehicle that looks like a police cruiser. “The app's original name, before being removed from the Apple App Store, was Vigilante,” Vice notes.
• Facebook is still recommending anti-vaccine groups — Yet another disturbing find from The Markup’s Citizen Browser Project.
• Another great Twitter thread by Kate Starbird, whose earlier thread on “Participatory Disinformation” was the subject of last week’s newsletter.
Questions? Tips? Comments? wildwestnewsletter@gmail.com