Crisis Discipline
Social media is changing the way human beings make sense of one another. It’s also altering our ability to take action together. But how, exactly, are these changes occurring? What mechanisms are driving the most profound changes and how might those mechanisms be altered to humanity’s benefit? An international group of academics who study human collective behavior say we urgently need to answer these questions in order to begin designing digital communication platforms that will support healthy societies rather than tear them apart.
In a paper published last month in the Proceedings of the National Academy of Sciences, these academics warn that humanity’s ability to work collectively relies on fragile and complex systems, and that such systems can only take so much rattling. “When perturbed, complex systems tend to exhibit finite resilience followed by catastrophic, sudden, and often irreversible changes,” the scholars warn.
Their paper calls for the study of digital systems and how they’re altering human behavior to become a “crisis discipline.” I recently got a chance to talk with one of the paper’s co-authors, University of Washington Biology Professor Carl T. Bergstrom, and asked him what it all means. This interview has been condensed.
ELI SANDERS: You argue in this paper that the rise of global, interconnected, digital media, and the rise of social media platforms in particular, means that academics need to start treating the study of human collective behavior in the digital age as a crisis discipline. Why do you and your co-authors believe we’ve reached a crisis point?
PROFESSOR CARL T. BERGSTROM: In order to make the decisions that are necessary for managing threats that come our way—everything from pandemics to war to climate change—we need people to have access to good information. We need groups of people to be able to come to sensible collective decisions. That’s what democracy is predicated on.
And the way our world has become interconnected through social media, combined with some of the novel features of social media that we discuss in the paper, means that the kind of information, and very likely the quality of information, that people are receiving has changed a lot. There are new vulnerabilities in the way that information is distributed, both because of the rise and spread of accidental misinformation and because of targeted malicious disinformation. Because of this, we’ve lost the ability to handle some of the events that we could have handled more smoothly 20 or 30 years ago.
To take just one example, we began working on this paper several years ago, and in the original version of the paper we warned: “You could have a pandemic and people wouldn’t believe public health officials, and then it would be a disaster.” And of course we ended up taking that sentence out because it became real life.
So the crisis is that we don’t understand how information flows through our current social networks. And we don’t understand the vulnerabilities to misinformation and disinformation, which seem to be substantial. These vulnerabilities seem to be causing large-scale failures like what we saw on January 6th, or the widespread disbelief of the existence of the pandemic for its first year, or the current strong anti-vaccine sentiments that are prolonging the pandemic.
The idea is that if we want to be able to continue to make good collective decisions, which is essential to any form of government that I would want to live under, then we need to understand how information flows today and we need to possibly take some kind of proactive role in the stewardship of how that process operates. That doesn’t mean we need to have people monitoring content and censoring stuff that we don’t like. It means we need to be thinking about the structure of these systems. We need to think about whether a system that has been designed essentially for the sole purpose of keeping people engaged to sell ads—whether that’s likely to be conducive to the flow of accurate information and to furthering human wellbeing.
SANDERS: In order to do this type of thinking in a serious way, you would need a lot more access than researchers currently have to information that’s controlled by private companies that run these digital platforms.
PROFESSOR BERGSTROM: For sure. Right now, the situation we’re in with tech companies is like if we were trying to fight climate change, but ExxonMobil had all the thermometers and they kept telling us, “Yeah, sure, it seemed hot last week, but trust us, it was nothing out of the ordinary.”
One of the most valuable things that we could have right now would be better information about what’s actually happening online, both in terms of inputs, like for example what algorithms are being used, and in terms of outputs. Like, when pieces of disinformation are posted, how broadly are they seen, what are the dynamics of their spread?
SANDERS: What does the phrase “crisis discipline” mean? What’s this different way of studying that you’re proposing in response to this problem?
PROFESSOR BERGSTROM: It’s very explicitly an homage to previous uses of the term. Conservation biology, for example, is a sort of crisis discipline in the era of climate change. It’s a situation where you have a complex system that we need to act to manage and take care of now, but we lack an adequate description of the system to understand fully how it works. As a result, we find ourselves in this position where we don’t have time to wait 30 years for people to do all of the ecology or climate modeling or, in the case of digital media, all the work to completely understand how information flows on social networks, and only then take action.
Rather, we need to start acting now at the same time as we’re trying to diagnose and understand the problem.
In order to do this effectively, we need a highly transdisciplinary, all-hands-on-board situation. We need people from a very wide range of disciplines, many of whom have already been thinking about different aspects of this problem. We need to start figuring out: What do we know? What do we not know? How can we act based on what we know now? How do we monitor to see whether those actions are effective? So we’re building the theory at the same time as we’re taking the active response. That’s a crisis discipline.
SANDERS: And the complex system that you’re talking about here is the way information flows on global digital platforms, plus the way those information flows interact with mass human behavior. In the paper, I noticed that you and your colleagues write about how there’s an ancient debate over whether large-scale behavioral dynamics are essentially stable and self-correcting. I was not aware of this debate, and I would imagine most people aren’t. So can you just describe the contours of this debate and why, with global digital platforms, you feel we’re past the point of self-correction?
PROFESSOR BERGSTROM: There’s this notion that a lot of complex systems reside at some kind of stable equilibrium. For example, the way a mature ecosystem responds to perturbations and fluctuations. Like, when you have a really dry year and the ecosystem rebounds rather than spiraling into devastation. Or when you have some new species come in, but then things kind of go back to a new equilibrium with that new species present. So we have this sense of the stability of complex systems in the natural world. Another place we very much have this notion is in our sense of the way market economies work. There’s this idea that there’s some sort of invisible hand that will lead a market economy back to efficient function in response to perturbations from new technologies and so forth.
So the question is: Well, is something like that the case for information flows along these new channels? And there is not any reason that I am aware of to believe there’s anything like an invisible hand for information that would sort of take over if people started receiving bad information, or return things to a stable point where people once again had access to reliable, accurate information without vulnerability to external manipulation, and disinformation, and so forth.
A system designed essentially to keep people online, and to sell ads, is not necessarily going to be stable in terms of providing people with accurate information.
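To make that idea of stability, and its limits, concrete, here is a toy numerical sketch of my own. It is not a model from the paper, and the equation and thresholds are arbitrary illustrative choices. It shows what “finite resilience” looks like in the simplest possible system: a small, temporary perturbation decays and the system returns to its original equilibrium, while a somewhat larger push of the same duration tips it into a different state that persists even after the push ends.

```python
# A toy illustration (mine, not from the PNAS paper) of finite resilience:
# a system with a stable equilibrium absorbs small perturbations but, past a
# tipping point, jumps to a different state and does not come back when the
# perturbation is removed. Dynamics: dx/dt = x - x**3 + forcing(t).

def simulate(forcing, x0=-1.0, dt=0.01, steps=20000):
    """Integrate dx/dt = x - x**3 + forcing(t) with forward Euler."""
    x = x0
    for t in range(steps):
        x += dt * (x - x**3 + forcing(t * dt))
    return x

# Small, temporary push: the system relaxes back to its original state (~ -1).
small = simulate(lambda t: 0.2 if t < 10 else 0.0)

# Larger push of the same duration: the system crosses the unstable threshold
# and settles at a different equilibrium (~ +1), even after the push ends.
large = simulate(lambda t: 0.6 if t < 10 else 0.0)

print(f"after small perturbation: x = {small:.2f}")   # about -1.0 (recovered)
print(f"after large perturbation: x = {large:.2f}")   # about +1.0 (flipped)
```

The analogy to Bergstrom’s point: nothing guarantees that an information ecosystem sits on the “recovers” side of that kind of threshold, and there is no invisible hand that pushes it back once it has flipped.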
SANDERS: One of the things you point out in this paper is that for most of human history, people had interactions with a very limited number of other people during their lifetimes. I wonder if you’re suggesting that we’ve basically invented a way for humans to have more contacts and more connections than we’re really capable of having, or than is conducive to a healthy mass society.
PROFESSOR BERGSTROM: That would be the most pessimistic interpretation, that human societies just don’t function when you connect them up at the scales that we’ve connected them up at. I’m not that much of a pessimist. If these connections are designed properly, I don’t see any inherent reason why they should be incompatible with healthy societies.
At the same time, what we can do now is just crazy. I’ll get on the internet and write a thought I had the night before and in 12 hours, a million people have read it. And I’m not anybody special, it’s just the way social media operates. If the thought is of interest to people, it just takes off. That’s just completely different than anything that’s ever existed. And so what are the consequences of that? Well, they’re going to be dramatic and we need to think about how that kind of communication can be organized in ways that aren’t destabilizing and aren’t disruptive, and that provide people with good information and support human well-being instead of creating bubbles of disinformation and waves of hostility.
SANDERS: You write in the paper about previous instances of people looking at complex systems in the natural world. The ways schools of fish move together and respond collectively to threats, or the fascinating ways that flocks of birds behave. I found that part of the paper so pleasing, because it was such a nice break from thinking directly about technology and it was also a nice analogy to the challenge of understanding human behavior in the online world. Can you offer another story from the natural world about a complex system that science, whether in “crisis” mode or not, eventually cracked?
PROFESSOR BERGSTROM: You know, I think the bird story you’ve already alluded to is pretty good. When you watch a murmuration of starlings, or something like that, you have this sense of this extraordinary degree of coordination. We don’t see this in the U.S. very much at the same scale you do in Europe, but, you know, if you see 50,000 starlings in a giant murmuration, in these sweeping waves, cutting back and forth against each other, and separating and splitting as a predator comes through, and then folding back together, it’s as if there’s some greater power controlling all of this. And so, what is controlling all of this? There’s not some mega brain that’s out there telling them all what to do. So how do they do it? How are they communicating so that they could possibly do this?
Early theories were about, well, maybe they have this form of telepathic communication, because how the hell else could they be so coordinated? But it turns out with starlings there’s a very, very simple set of rules about how they stay aware of their near neighbors and maintain a flight trajectory with respect to their near neighbors. When you put those rules together, that is sufficient to explain the behavior that we see.
And then you can go and do simulations of agents that are following those same rules and see that they look, to the eye, very much like a murmuration.
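For readers who want a sense of how little machinery that takes, here is a minimal sketch of that kind of agent-based simulation. It is my own illustration, not code from the paper, and the neighborhood sizes and rule weights are arbitrary: each agent steers only by separation, alignment, and cohesion with nearby neighbors, and coordinated, flock-like motion emerges with no central controller.

```python
# A minimal boids-style sketch (mine, not from the paper): each "starling"
# follows simple local rules with respect to nearby neighbors, and the
# headings of the group become correlated with no leader or global signal.
import numpy as np

rng = np.random.default_rng(0)

N = 200            # number of agents
STEPS = 500        # simulation steps
RADIUS = 2.0       # neighborhood radius
SEP_DIST = 0.5     # distance below which agents push apart
MAX_SPEED = 0.5

pos = rng.uniform(0, 20, size=(N, 2))
vel = rng.normal(0, 0.1, size=(N, 2))

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbors = (dists < RADIUS) & (dists > 0)
        if not neighbors.any():
            continue
        # Cohesion: steer toward the local center of mass.
        cohesion = offsets[neighbors].mean(axis=0)
        # Alignment: match the average heading of neighbors.
        alignment = vel[neighbors].mean(axis=0) - vel[i]
        # Separation: steer away from neighbors that are too close.
        close = neighbors & (dists < SEP_DIST)
        separation = -offsets[close].sum(axis=0) if close.any() else 0.0
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.1 * separation
        # Cap the speed so agents move like birds, not projectiles.
        speed = np.linalg.norm(new_vel[i])
        if speed > MAX_SPEED:
            new_vel[i] *= MAX_SPEED / speed
    return pos + new_vel, new_vel

for _ in range(STEPS):
    pos, vel = step(pos, vel)

# After a few hundred steps, nearby agents share headings and the group
# moves in coherent waves, the qualitative signature of a murmuration.
print("mean speed:", np.linalg.norm(vel, axis=1).mean())
```

Running a sketch like this and watching the positions over time is exactly the kind of check Bergstrom describes: the simulated agents, to the eye, move very much like a murmuration.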
SANDERS: And to tie this back into the digital world, if we’re all the starlings and a Trump tweet is like a perturbation at the edge of a group of starlings, you could, if you understood how starlings like us behave, predict what our whole flock would do in response to some Trump tweet.
PROFESSOR BERGSTROM: That’s right. And then you can also think about the ways that changes to the network structure could cause some of the negative features of digital communications, whether they’re tweets or posts or anything else, to be self-limiting.
SANDERS: People like an action item. And as I understand it, the point of any crisis study is to eventually come up with proposed actions for policymakers and lawmakers. For now, the one action item from your paper is the suggestion that people working in the tech sector begin taking a kind of Hippocratic oath. Like, Mark Zuckerberg raises his hand and promises, “First, do no harm.”
PROFESSOR BERGSTROM: Yeah, my coauthors won’t necessarily be happy to hear this, but I’m not enormously enamored of that suggestion. I think we need to understand the ethical components of what we do. Whether a Hippocratic oath is the right way to structure that, or even appropriate for thinking about our ethical responsibilities in a very distributed system, as opposed to the one-on-one interaction of a physician looking out for the individual wellbeing of patients—I’m less certain there.
I do think there are things we can do. Careful regulation, increased transparency of digital systems, giving people more control over their online information environments. These are the sorts of things that I think we need to be thinking about and exploring.
But this really is one of those things where we’ve just discovered that the wing of the aircraft is on fire. We don’t know where it’s on fire. We don’t know why it’s on fire. But we’ve just discovered it’s on fire and we gotta figure out what to do. And we don’t have time to land the plane. So we’ve got to be acting and figuring out at the same time. And I think this paper is saying, “This is on fire, and we’re going to need to do something about it fast, and we don’t quite have that answer yet, but we need to get everybody up here figuring it out, trying to understand what’s happening and understand what kinds of interventions might be helpful.”
Some of the things I’ve been reading:
• Not as complicated as advertised? “Facebook and YouTube’s vaccine misinformation problem is simpler than it seems,” writes Will Oremus.
• Exactly how many Americans are viewing vaccine misinformation on social media? Facebook, for one, won’t say.
• Disinformation for hire: “Private firms, straddling traditional marketing and the shadow world of geopolitical influence operations, are selling services once conducted principally by intelligence agencies,” reports The New York Times. “They sow discord, meddle in elections, seed false narratives and push viral conspiracies, mostly on social media. And they offer clients something precious: deniability.”
• Next, the metaverse: “As June came to an end, Facebook CEO Mark Zuckerberg told his employees about an ambitious new initiative,” Casey Newton reports. “The future of the company would go far beyond its current project of building a set of connected social apps and some hardware to support them. Instead, he said, Facebook would strive to build a maximalist, interconnected set of experiences straight out of sci-fi — a world known as the metaverse.”
• Want more from Professor Bergstrom? Here’s a widely shared Twitter thread of his, which took apart Facebook’s response to President Biden’s criticisms of Facebook’s role in spreading vaccine misinformation:
Questions? Tips? Comments? wildwestnewsletter@gmail.com