A Costly Education on Algorithms
Last Tuesday, more than a decade after Facebook began using algorithms to pick what users see, a Senate subcommittee held a hearing on “Algorithms and Amplification.” It was the beginning of an educational process for key members of Congress, admitted the committee’s chair, Democrat Chris Coons of Delaware. “As quaint as some might think it,” Sen. Coons said at the outset, “I plan to use this hearing as an opportunity to learn about how these companies’ algorithms work.”
The committee’s ranking member, Republican Ben Sasse of Nebraska, shared that objective. Both men made clear they weren’t even showing up to the hearing with any “specific regulatory or legislative agenda.” Not surprisingly, a common take afterward was that Congress is “way behind” on dealing with the threat posed to our democracy by the algorithmic amplification of speech on social media—speech that too often spreads rumors, lies, disinformation, hateful incitements, and other types of attention-generating dreck.
As Sen. Coons explained in his opening remarks, every single day algorithms “impact what literally billions of people read and watch—and impact what they think.” On YouTube alone, he noted, 70 percent of views happen because the platform’s recommendation algorithm suggests videos it has decided viewers are likely to click on (and those clicks drive the ad sales that are YouTube’s primary source of revenue). That’s a lot of power for a non-human decision-maker to have. It’s the power to shape trends, politics, and people’s sense of reality.
High-level executives from Twitter, Facebook, and YouTube were on hand to offer corporate insight into how this all works, and to share what’s being done to keep social media’s algorithms from trying to maximize user engagement using socially unhelpful strategies, such as sending people down addictive rabbit holes of misinformation or showing them ever-more incendiary content to keep their eyes glued to the site.
Monika Bickert, Vice President for Content Policy at Facebook, explained how Facebook’s algorithm takes in signals from many directions, including people’s “likes” and comments, to create the “news feed” each individual sees. “The process results in a news feed that is unique to each person,” Bickert explained, perhaps inadvertently offering a primer on the creation of information silos. She also outlined the ways Facebook algorithms are now being used to limit the reach of content the company deems worrisome, harmful, or illegal.
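To make that mechanism a little more concrete, here is a minimal, purely illustrative sketch of signal-weighted feed ranking. It is not Facebook’s system; the signal names, weights, and scoring rule are assumptions invented only to show how per-user engagement signals can yield a feed that is unique to each person.

```python
# Illustrative sketch only: a toy, signal-weighted feed ranker.
# The signals, weights, and scoring rule are hypothetical, not Facebook's.

from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    author: str
    topic: str

@dataclass
class UserHistory:
    # Hypothetical per-user engagement signals.
    liked_topics: dict = field(default_factory=dict)       # topic -> number of likes
    commented_authors: dict = field(default_factory=dict)  # author -> number of comments

def score(post: Post, history: UserHistory) -> float:
    """Combine a user's past signals into a single relevance score for one post."""
    like_signal = history.liked_topics.get(post.topic, 0)
    comment_signal = history.commented_authors.get(post.author, 0)
    # Assumed weights: comments are treated as a stronger signal than likes.
    return 1.0 * like_signal + 2.0 * comment_signal

def build_feed(posts: list, history: UserHistory) -> list:
    """Rank the same candidate posts differently depending on the user's history."""
    return sorted(posts, key=lambda p: score(p, history), reverse=True)

if __name__ == "__main__":
    candidates = [
        Post("1", "alice", "gardening"),
        Post("2", "bob", "politics"),
        Post("3", "carol", "politics"),
    ]
    user = UserHistory(liked_topics={"politics": 5}, commented_authors={"bob": 3})
    for post in build_feed(candidates, user):
        print(post.post_id, post.topic, round(score(post, user), 1))
```

Because each person’s history differs, the same candidate pool produces a different ordering for everyone—which is where the information-silo dynamic begins.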
Alexandra Veitch, Director of Government Affairs and Public Policy for YouTube, emphasized the control users are increasingly being given over what information algorithms can draw on when recommending videos. Like all the other representatives, Veitch assured lawmakers: “We will continue to do more.” Lauren Culbertson, Head of U.S. Public Policy at Twitter, contended that as “the go-to place to see what’s happening in the world,” Twitter has no incentive to algorithmically encourage toxic discourse.
Both Coons and Sasse seemed skeptical, especially when it came to this last idea. So did other Democrats and Republicans who asked questions at the hearing. Given the direction of American politics during the decade-plus that social media and its algorithms have been ascendant, the skepticism seemed reasonable. It is not mere coincidence, many senators noted, that this period has brought rising polarization, a growing “infodemic,” and an attempted coup during the certification of the 2020 Electoral College results.
“The business model is addiction, right?” Sasse asked the social media company executives. “I mean, money is directly correlated to the amount of time people spend on the site.”
In response, the executives stuck to their lines about having no incentive to create digital cesspools and wanting to work with Congress on solutions. But the first of two outside experts called by the panel, Tristan Harris of the Center for Humane Technology, said the creation of corrosive information environments is “intrinsic” to the design of contemporary social media, a product of “the fundamentals of how it works.” A former design ethicist at Google, Harris served as an expert voice on the Netflix documentary “The Social Dilemma” and was unsparing in his assessment of the social ills connected to Facebook, YouTube, Twitter, and similar platforms:
At the end of the day, a business model that preys on human attention means that we are worth more as human beings, and as citizens of this country, when we are addicted, outraged, polarized, narcissistic, and disinformed, because that means that the business model was successful at steering our attention using automation.
And we are now sitting through the results of ten years of this psychologically deranging process that [has] warped our national communications and fragmented the Overton window and the shared reality that we need as a nation to coordinate to deal with our real problems, which are existential threats like climate change, the rise of China, pandemic, education, and infrastructure.
The second expert, Dr. Joan Donovan, teaches at Harvard’s John F. Kennedy School of Government and is research director for the Shorenstein Center on Media, Politics, and Public Policy. She focused on what she termed “misinformation at scale,” and how it’s uniquely enabled by social media platforms.
“When I say misinformation at scale, I’m not complaining that someone is wrong on the internet,” Dr. Donovan said. “What I’m pointing to is the way that social media products amplify novel and outrageous statements to millions of people faster than timely, local, relevant, and accurate information can reach them. Post-2020, our society must assess the true cost of misinformation at scale and its deadly consequences.”
To underline this point, Dr. Donovan warned: “The cost of doing nothing is nothing short of democracy’s end.”
“We have these hearings, and I appreciate them,” said Republican John Kennedy of Louisiana when, toward the end of things, it was his turn to ask a few questions. “But we never get down to it. We all talk. I’m as guilty as anyone else. But at some point you gotta get down to it… What are we going to do about it?”
The hearing offered no clear answers, at least from lawmakers. A number of the committee’s members, including Sen. Kennedy, floated their favored plans for limiting the legal immunity given to digital platforms by Section 230. Those plans generally involved revoking the immunity when companies are found to be using algorithms in certain harmful ways. But Sen. Sasse scolded both Republicans and Democrats for this detour, suggesting it was “well off point to the actual topic at hand.” He said that reaching for Section 230 reforms in response to problems generated by algorithms was politicians just picking up “the most ready tool” to hammer away at an issue they still don’t fully get.
Sen. Coons, offering his perspective at the end of the hearing, seemed to be eyeing a scale that had yet to find its balance, and called for more conversation.
“None of us wants to live in a society that, as a price of remaining open and free, is hopelessly politically divided—or where our kids are hooked on their phones and being delivered a torrent of reprehensible material,” Sen. Coons said. “But I also am conscious of the fact that we don’t want to needlessly constrain some of the most innovative, fastest-growing businesses in the West. Striking that balance is going to require more conversation.”
The outside experts, however, said the need to act is urgent, with technology advancing and society’s challenges cascading faster than Congress can keep up. Unlike the committee’s leadership, they came with action plans.
“We should begin by creating public interest obligations for social media timelines and news feeds, requiring companies to curate timely, local, relevant, and accurate information,” Dr. Donovan said.
Harris, of the Center for Humane Technology, said social media platforms are using algorithms to conduct massive human behavior experiments and should therefore be subject to Institutional Review Board-style oversight, just like researchers at universities who conduct experiments using human subjects.
He also called for a type of regulation that’s already been used in the market for basic utilities. America’s private providers of electricity and water could rake in far more money if they encouraged their customers to keep faucets open all the time and lights on constantly, Harris noted. But because of the social and environmental harm that would cause, regulators tax excessive use, making that sort of consumption financially painful for consumers and removing any incentive for utilities to encourage it. The revenue generated from those excess-use charges is then put into a fund for useful things like modernizing the electric grid.
A similar system could be set up for social media companies, Harris suggested, with taxes on some of the profit made from algorithmically generated attention used to fund fact-checkers, journalists, and people designing technologies with the public interest foremost in mind.
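For a sense of the arithmetic behind that utility analogy, here is a small, purely hypothetical sketch of an excess-use tax: consumption above a threshold is taxed at a flat rate, and the proceeds are pooled into a public-interest fund. The threshold, rate, and figures are invented for illustration and were not part of any proposal discussed at the hearing.

```python
# Illustrative only: a toy excess-use tax of the kind Harris analogized to
# utility regulation. The threshold, rate, and figures are invented.

def excess_use_tax(usage: float, threshold: float, rate: float) -> float:
    """Tax only the portion of usage above the threshold."""
    return max(0.0, usage - threshold) * rate

def fund_from_users(usages: list, threshold: float, rate: float) -> float:
    """Pool the tax revenue into a public-interest fund (grid upgrades for a
    utility; fact-checkers and public-interest design in the social media analogy)."""
    return sum(excess_use_tax(u, threshold, rate) for u in usages)

if __name__ == "__main__":
    # Hypothetical example: tax attention-hours above 30 hours per user per month
    # at $0.50 per excess hour.
    monthly_hours = [10, 45, 80, 25]
    print(fund_from_users(monthly_hours, threshold=30.0, rate=0.50))  # 32.5
```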
“Either we figure it out,” Harris warned, “or the American experiment may be in question.”
Some of the stories I’ve been reading this week:
• Dark patterns — They’re a new frontier in online manipulation, and they need to be regulated, writes Greg Bensinger.
• Amazon is promoting extremist information — That’s according to a report by the Institute for Strategic Dialogue, which blames the company’s recommendation algorithms.
• Facebook’s Oversight Board orders criticism of Indian Prime Minister restored — “The decision comes as India's government is putting pressure on social media companies to remove critical posts, particularly related to the rising numbers of COVID-19 deaths in the country,” Politico reported. “The same group also will soon rule whether Donald Trump can have his account reinstated on Facebook — a decision now expected in early May.”
• The feud between Apple and Facebook — The New York Times digs into “how Mark Zuckerberg and Tim Cook became foes.”
Questions? Tips? Comments? wildwestnewsletter@gmail.com