There Will Be Consequences
Long before the November election, it was clear a Biden presidency would bring new scrutiny to America’s digital platforms. Now, after last week’s deadly assault on the US Capitol, there’s intense urgency and no small degree of fury propelling calls for a major tech reckoning. The attack on the seat of American democracy was conducted by thousands of digitally summoned insurrectionists who’d plotted their violence on social media, were riled up on Twitter by President Donald Trump, and then, once inside the offices and chambers of Congress, shamelessly used their destruction as a backdrop for money-generating livestreams and MAGA-pleasing selfies.
Naturally, with a new reckoning on the horizon we’re now seeing the usual cycle of tech giants trying to get out in front of the problem by rushing to further regulate themselves—this time in dramatic, head-spinning fashion.
After years of mostly looking the other way, Twitter permanently suspended President Trump, locking him out of a communications platform that was integral to his rise. Apple and Google booted Parler, the permissive social media platform favored by right-wing agitators, from their app stores. Amazon yanked Parler’s access to its web servers. YouTube began a further crackdown on videos spreading the lie that voter fraud cost Trump the election. Shopify locked the doors on two online Trump merchandise stores. Stripe stopped processing payments for the Trump campaign website. And platforms such as Instagram, Reddit, Twitch, Snapchat, TikTok, Discord, and Pinterest have all taken actions to either block Trump himself or silence his propagandists.
All of these companies make moral, legal, and/or policy arguments to support their moves, but it’s hard to imagine they’re not also motivated by the new reality they face after the January 6 attack on American democracy by a mob aiming to overturn the result of a free and fair election. The offline target of these insurrectionists was both houses of Congress at the very moment lawmakers were meeting to certify President-elect Joe Biden’s victory, an incursion that left every single member of the US House and Senate with a visceral, frightening experience of what it’s like when speech is allowed to go so far off the rails.
The rioters set up a gallows in front of the Capitol building. They chanted about wanting to “Hang Mike Pence!”, who, as required by the Constitution, was presiding over the Electoral College vote count at the time. Members of Congress were made to hide under seats and rush through underground tunnels to escape. It was the kind of thing no one forgets, including lawmakers who will be in a position to take up tech policy reforms after Biden’s inauguration next week.
The aftermath of this deeply disturbing attack has also reinvigorated the long-churning public discussion about online speech rules and rights. Revision of Section 230 is once again front-and-center, with liberals still calling for major changes and conservatives still wanting to repeal the law altogether. Content moderation is also back in the spotlight, and the next Congress will no doubt discuss whether leaving the country’s tech giants to make up online speech rules on the fly constitutes a threat not just to our political discourse, but to democracy itself.
Meanwhile, the silencing of Trump across multiple digital platforms offers an unprecedented display of digital power. Plenty of think pieces have been published as a result, along with a lot of funny Twitter takes.
This upsetting moment has also clarified how fragmented and poorly understood America’s existing free speech rules are, especially when it comes to political speech.
The confusion is exacerbated by the ever-increasing complexity of the media landscape, which is genuinely hard to get a comprehensive read on, even for professional communicators. America’s myriad digital platforms are appropriately under the microscope at the moment, but they’re hardly the only avenues available to a person wanting to use their speech to influence our politics. A very short list of those other avenues includes cable and broadcast TV, talk radio, newspapers, magazines, direct mail, e-mail, billboards, old-fashioned posters, and the even more old-fashioned shouting of speech in an actual, not virtual, public square. Digital platforms are also not alone in having been used by propagandists to push the lie that this election was rigged and stolen from Trump.
So it’s easy to have sympathy for those confused about who’s allowed to say what—and where, and when—especially when one looks at all the different ways speech is presently regulated across different American spaces and contexts. Consider:
• Many Americans have heard they’re not allowed to falsely shout “Fire!” in a crowded theater. But they can also see that the president is allowed—or was allowed, until Friday—to use a Twitter megaphone to shout falsehoods and incitements to a virtual crowd of more than 88 million followers.
• People understand they’re unlikely to be offered a chance to organize a riot using the pages of The New York Times or the broadcast frequency of CNN. But recent years have shown that if they want to try their hand at scheduling and planning a riot using social media, all they need to do is sign up for a free account.
• On American radio stations, one’s words to the masses are likely to be broadcast after a standard “tape delay,” so that those words can be cut off before reaching anyone’s ears if they cross a line. Over on Facebook, Parler, Twitch, or any similar online service, one’s words will be broadcast instantly, no questions asked.
• If the words one decides to broadcast over social media wind up slandering or defaming someone, the company that provided the free megaphone isn’t legally responsible. The person who spoke or typed the words is. But, if those same words were typed for The New York Times or spoken on CNN and, through their publication, ended up slandering or defaming someone, both the speaker and the company that provided the platform could be sued for damages.
• In a courtroom setting, if lawyers tell a judge an election was fraudulent but possess zero evidence to back up their claim, they can be sanctioned and potentially lose their law licenses. But if those same lawyers want to use their Twitter accounts to make baseless claims of election fraud, they’ll be allowed to do so and Twitter, Inc. will have zero liability. But, if those same lawyers go on to shout in public speeches and through more traditional media channels about an election-rigging conspiracy involving Dominion voting machines, they can be personally sued for more than $1.3 billion in defamation damages by Dominion. (And, the more traditional media outlets that broadcast their claims can expect they might soon be sued for defamation by Dominion, too.)
• Cross over into paid political speech and it gets even more convoluted. Under federal law, television and radio stations must keep and disclose records about federal election ads so that the voting public can have some insight into each ad’s origins and cost. There is no federal requirement for digital platforms to do the same.
• This leaves tech giants to make up their own, ever-shifting rules about political ad transparency—as well as rules about who can and cannot buy political ads, and rules about what’s allowed in those ads. For example, if you’re a politician you’re allowed to run false ads on Facebook. But if you’re a Political Action Committee, you’re not allowed to run false ads on Facebook.
• On the other hand, if you’re a television station in Georgia (or anywhere else in America) then you’re required by federal law to air that same false ad if it’s being purchased by a politician running for election. (And if that politician ends up libeling, slandering, or defaming someone in the false ad? In that instance, television stations are granted immunity.)
• Back over in the unpaid content realm, the maze of confusing rules continues when it comes to stopping violent speech. As Casey Newton has pointed out, Twitter’s interpretation of its ever-evolving restrictions has meant that Trump could menacingly tweet about starting a nuclear war years before he was banned for potential “incitement of violence.”
• And then there’s the global view, where things get even more complex. As Jillian C. York recently wrote, Facebook and Twitter have policies in the United States that leave speech from political leaders subject to fewer restrictions than speech from average users. Not only is this idea backwards (because, as York writes, it is “at odds with plenty of evidence that hateful speech from public figures has a greater impact than similar speech from ordinary users”), it also isn’t embraced by the companies as embodying any sort of universal truth. Political leaders are easily banned in other countries—sometimes before they can even speak.
Following reports of genocide in Myanmar, Facebook banned the country’s top general and other military leaders who were using the platform to foment hate. The company also bans Hezbollah from its platform because of its status as a US-designated foreign terror organization, despite the fact that the party holds seats in Lebanon’s parliament. And it bans leaders in countries under US sanctions.
At the same time, a different survey of the wide world of Twitter and Facebook rules, this one by Evelyn Douek, notes:
The Taliban’s official spokesperson still has a Twitter account. As does India’s Prime Minister Narendra Modi, even as his government cracks down on dissent and oversees nationalistic violence. The Philippines’ President Rodrigo Duterte’s Facebook account is alive and well, despite his having weaponized the platform against journalists and in his “war on drugs.” The list could go on.
• Finally, there’s a very good point from CNN’s Oliver Darcy about the non-angelic nature of more traditional platforms.
Not surprisingly, Facebook spokesperson Andy Stone has been particularly interested in Darcy’s line of inquiry.
There are historical, legal, and political reasons for the wildly different speech rules now governing the many different forums for expression in America. When it comes to online speech, there’s also the challenge of trying to keep up with all the novel ways people can now use and abuse our fast-multiplying communication platforms. It’s not easy to come up with simple—or even complex—solutions to our current predicament.
But if last week’s attack on this country’s government was inextricably linked to online speech—as America’s tech platforms seem to be acknowledging by their recent actions to quiet Trump—then one part of the problem may trace to this country’s elected leaders. They’ve waited a long time to get serious about the hard work of crafting meaningful rules for the internet. In the wake of the Jan. 6 attack on the US Capitol, it seems unlikely they’ll remain in wait-and-see mode any longer.
A few must-reads from these disorienting first few days of 2021:
• Online planning for the next siege is underway — “Calls for widespread protests on the days leading up to the inauguration of President-elect Joe Biden have been rampant online for weeks.”
• “Run!” — A harrowing reconstruction of how aides and elected officials inside the stormed Capitol building tried to find safety and summon reinforcements.
• “Racist-ass terrorists” — Black Capitol Police officers on what it was like to try to stop them.
• “It all left me with so many questions” — A Twitter thread from Michelle Obama, here and here.
• “American abyss” — Brought to you, in part, by “the gamers and the breakers.”
Questions? Tips? Comments? wildwestnewsletter@gmail.com