An Alarm Too Far?
Like many people, I began paying attention to the collision of technology and democracy during the 2016 election. The news that America’s digital platforms had been hijacked by Russians looking to help Donald Trump win the presidency was alarming, and it led to an obvious next question: Did the Russians succeed?
As we’ve seen over the last four years, that’s a very difficult question to answer. In part, that’s because any answer involves presuming to know the true aims and honest definitions of success for those doing the interfering.
But there’s another tricky aspect. How would one even begin to accurately measure, much less prove, the success or failure of any online attempt at manipulating large groups of people?
The belief that online manipulation campaigns are both easy to pull off and swiftly poisoning our democracy is widespread. But can we draw a scientifically valid line from something that briefly appears on a smartphone screen to opinions later developed or votes later cast?
Professor Shannon McGregor studies the role of social media in our political process and co-authored a pre-election article that cut against the grain of conventional thinking about the impact of online misinformation. Her argument that we’re “too worried” about digital misinformation affecting voter choices has stayed with me in the weeks since Election Day, so I reached out to Professor McGregor with some questions about her take, and she graciously agreed to answer them.
ELI SANDERS: In your very interesting October 30 piece in Salon, you caution Americans against worrying that misinformation is swaying large numbers of voters, because when it comes to serious studies, there’s actually “little evidence” that misinformation is changing votes.
When I read your piece, I found myself wondering: Given how complex people are, and how many information streams the average American interacts with every day, could the absence of evidence here be connected to the extraordinary difficulty in proving a direct line between any one piece of information a person consumes and their later behaviors?
PROFESSOR SHANNON MCGREGOR: That is part of it, absolutely! Especially because the way we, as social scientists, determine the effect of media (in this case, misinformation) on a person’s beliefs is through experiments. Necessarily, experiments allow you to control for lots of variability that you might find in the real world in order to be able to determine causal effects. But because of that, most experiments aren’t able to capture the multiplicative and multiplatform nature of media exposure that you note.
It’s not that I don’t think misinformation is harmful or that people don’t believe misinformation—both of those things are obviously true.
Rather, what I question is the idea that people take in information (whether it is true or not) and make reasoned political decisions based on it. We have even less evidence to support that. Most people are not very interested in politics—so rather than gather and consider information to make their voting decisions, they rely on partisanship. This is what drives most of people’s political behavior.
From a research perspective, what would be required to demonstrate a clear cause-and-effect chain between the mis- or disinformation a person takes in and a vote they later cast?
It depends on what you might be looking to find out. For example, are we concerned that mis- or disinformation makes someone more or less likely to vote? We could examine that through media exposure data and voting records, or even through an experiment that exposed people to mis- or disinformation and then asked them about their voting intentions.
But much of the post-2016 academic work and journalistic coverage focused on, or implied, the idea that 2016 Trump voters were duped into voting for him through misinformation—whether from him, his campaign, or foreign interference. The notion that someone would have voted for Hillary Clinton were it not for the misinformation they encountered that drove them to vote for Trump would be difficult to prove. On the other hand, we do have considerable evidence that cleavages along the lines of political (and racial) identity drove voters to make their 2016 choice for Trump.
There was a Stanford study that got some notice the week after the election (including in The Wall Street Journal) that claimed to show a relationship between Trump’s tweets and people’s attitudes about the legitimacy of the election. What do you make of this study, which has been portrayed as showing a cause-and-effect dynamic?
To clarify one thing, this study does not find broad effects of Trump’s tweets attacking the legitimacy of the election on people’s belief in democracy or support for political violence. What it does find is that for Trump supporters, exposure to these tweets diminished trust in our electoral system.
On the one hand, this is not surprising—that supporters of a politician would be impacted by what that politician said. Elite rhetoric is powerful! On the other hand, this particular rhetoric is illiberal—and so if it causes people to lose faith in elections, then that is concerning.
The authors also note—to your point about the busy information streams to which people are exposed—that many of the study participants (especially those Trump supporters among whom the study finds effects) were likely already exposed to those tweets in other, real-world settings.
Another point I think is important, which the study’s authors point to as well, is that the impact of these tweets is likely different depending on how one encounters them. In this study, participants saw images of the tweets as part of an online study. But my own work finds that journalists often embed presidential tweets directly in news stories—even in news stories whose content disputes the claims in the tweet. All of that framing and contestation, potentially undermined by embedding the tweet, matters for how people may be impacted by claims in said tweets.
One of the challenges in discussing these topics is using the right terms for different online phenomena. (Never mind getting everyone to agree on what each term truly describes.) I imagine “online radicalization” is one of these contested phrases that can mean different things to different people. Still, what do you make of last week’s news that a report by New Zealand’s Royal Commission found the shooter in the 2019 Christchurch mosque attacks was radicalized on YouTube? Is proving a link between online radicalization and violence easier or more difficult than proving a link between online misinformation and voter behavior?
Similar to the points earlier, it is of course extremely difficult to prove a direct line between someone’s media exposure and their behavior. But we can make logical conclusions—like those in the report.
This is important because it links, as others have, mainstream sites (like YouTube and Facebook) with radicalization. The thing about “proving” a link here is that there is no counterfactual. Would the shooter not have killed people if YouTube didn’t exist? We can’t know that—because extremism does exist on YouTube, and the terrorist did commit heinous acts of violence. And—as [New York Times technology columnist] Kevin Roose pointed out—the live-streamed mass murder was so “horribly online.”
I know you’ve supported increased transparency in online political ads. Even though you don’t see much evidence of American votes being swayed by manipulative messages online, why do you think it’s important for the public to know who’s behind paid political messages that appear on digital platforms?
Transparency behind political ads is important for a number of reasons. First, despite platforms’ efforts (uneven as they are), malicious ads designed to mislead or dissuade people from voting do appear online. Greater transparency around online political ads means advertisers who violate platform guidelines or act in undemocratic ways can be held accountable by journalists, researchers, and the public.
Greater transparency—especially around targeting—means that campaigns or groups could target that same (or a similar) audience with counter-speech, mitigating the risk of any potential deleterious effects. Also, transparency is pretty popular—even a bipartisan group of political professionals supports greater transparency as one aspect of ethical political digital advertising.
Finally, I believe the public has a right to know by whom, and how, they are being targeted online. We all have an idea of who we think we are—and I think it is incredibly revealing to understand who the people or organizations that target us think we are. This is especially true because my research shows that political campaigns rely on data from online political ads as one way to understand public opinion.
That last point Professor McGregor made—“the public has a right to know by whom, and how, they are being targeted online”—reminded me of an argument buried deep in the filings for a court case Wild West is tracking, Washington State v. Facebook.
One of the things Washington state now requires political ad-sellers like Facebook to reveal is the targeting information for any ads that attempt to sway votes in Washington’s elections. The theory is that it would be valuable for the public to know, for example, if a campaign were targeting African-American voters in certain zip codes with messages meant to decrease their participation in an upcoming election. (A strategy that has been used before.)
But in endeavoring to have Washington state’s disclosure law struck down as a violation of the First Amendment’s free speech protections, Facebook, in its legal filings, cites the Supreme Court’s Citizens United decision from 2010 and argues that the forced disclosure of targeting information for online political ads is “wholly unrelated” to “the only governmental interest the Supreme Court has recognized as sufficiently substantial or compelling to justify burdening political speech.”
That is, “exposing quid pro quo corruption or preventing the appearance thereof.”
Facebook’s lawyers add:
It is the source of money behind political speech, not the identity of the audience (let alone the method of payment), that helps constituents understand to whom their officials may feel beholden.
In his October 28 appearance before the US Senate, Facebook CEO Mark Zuckerberg identified the debate over targeting disclosure as a “challenging” area for those trying to come up with a potential federal law mandating transparency in online political ads.
Professor McGregor’s work has left her convinced that people have a right to know how they’re being targeted online. But Facebook’s lawyers, for their part, appear convinced that Supreme Court precedent bars state and federal officials from turning this into an actual, enforceable legal right in the United States.
Some of the stories I’ve been reading this week:
• “AI won’t save us” — So says a Facebook “civic integrity” product designer who recently resigned, according to BuzzFeed News. The former Facebook employee reportedly wrote:
The implicit vision guiding most of our integrity work today is one where all human discourse is overseen by perfect, fair, omniscient robots owned by [CEO] Mark Zuckerberg. This is clearly a dystopia, but one so deeply ingrained we hardly notice it any more.
• “Complicit” — Allegations from a worker rights group that Apple stood by while its suppliers violated Chinese labor laws.
• Elon Musk is moving to Texas — Part of what appears to be a tech elite mini-trend.
• The new Facebook antitrust case — Explained.
• The year in memes — Fifth in a series.
• A massive hack — It was allegedly conducted by Russian government hackers using malware dubbed “Sunburst.” The intrusion has reportedly “compromised” a number of US agencies, including the Treasury and Commerce departments.
Questions? Tips? Comments? wildwestnewsletter@gmail.com