Social Media and Knowledge Production with Yoel Roth

Ceejay Hayes:

This is CounterPol. Today we're talking with Yoel Roth about social media platforms. If you've spent any time online, then you know that any opinion that can be shared has been shared on the internet. And some of the most divisive subjects, vaccines, politics, rap beef, take on a life of their own on social media. Polarization feels the most visible on socials, in my opinion. So I'm curious about what insights we can gain on the polarization of democracies by looking at social media communication. Yoel goes into the forces that influence social media infrastructures, how those infrastructures facilitate the types of communication that happen on these platforms, and what can be done to make these digital spaces more egalitarian. With that, I hope you enjoy.

Yoel Roth:

I'm Yoel Roth. I am a Knight Visiting Scholar at the University of Pennsylvania in the Department of Communication. And before that, I spent about eight years at Twitter working in the trust and safety department, including as the company's head of trust and safety. I've been researching and studying online safety and security issues for going on 15 years now, since before trust and safety was really a thing that we talked about. My work these days is really focused on what the future of trustworthy tech and tech governance looks like. How should platforms be governing themselves? And how can we think about building a more trustworthy public sphere in the context of the platforms we all use every day?

Ceejay Hayes:

So let's just start today with a really big picture question. At their best, what do social media platforms like Twitter, now known as X, Facebook, and Instagram facilitate in terms of communication and engagement?

Yoel Roth:

When I look back at the history of social media, I think the big shift we've seen with modern social platforms is that they've made the attention and information economies that we exist in more egalitarian. If you go back to the days of radio being the primary source of news, our communication ecosystem has always been built around specific elite gatekeepers of knowledge. This idea that you listen to the news and there is one person who will tell you what the news of the day is and you trust them and that's the facts. And people then consume the news and get to learn from these elite gatekeepers of knowledge, and they engage with their friends and their family about it. It's not totally a top-down thing, but it's very structured. The media ecosystem has always been very structured around some version of top-down knowledge production. And I think Twitter and other social media platforms transformed that. They didn't dismantle elite communication and influence, but they did make two really critical things possible. The first one is grassroots knowledge production. So there's now a greater ability for you or me or anybody else to more easily attain reach on social media when we are sharing knowledge and information about things that happen. Think about one of the defining moments in the history of Twitter as a platform, the so-called Miracle on the Hudson. This is a moment when a U.S. Airways plane went down in the Hudson River and everybody survived, which is kind of crazy in the history of aeronautical disasters. And in that moment, we saw people sharing photos in real time of what was happening. And it was this amazing moment of grassroots knowledge production. And that's new. That's something we hadn't seen before. And it was a really radical transformation. The second big shift that we saw is the ability of audiences to talk back to elites in real time, and for there to be more dialogue and accountability in new ways between some of the elites producing knowledge and information and the publics and audiences consuming them. Now, again, this isn't totally new. Newsrooms throughout the 20th century had public editors who were responsible for being an interface between newspapers and readers. But this happens a lot more and a lot more quickly on social media. And it's not just about news. People can tweet at celebrities and musicians and elected officials, and those public figures listen and they engage. That's really fundamentally transformed what communication looks like in the 21st century.

Ceejay Hayes:

You jumped right in there with an interesting point about one of the impacts of this new digital media space, which is grassroots knowledge production: making all kinds of information, all kinds of worldviews, and actual knowledge accessible and even just visible. I wonder what the danger of that sort of grassroots knowledge production is, because I think, prior to, say, 2016, with Donald Trump entering the presidential race, and prior to now, in the pre-Musk-acquisition era of X, we can link this to changes in how content is moderated. This grassroots knowledge production is both a blessing and a curse. So what are some of the benefits of grassroots knowledge production, and what are some of the risks?

Yoel Roth:

The benefits are access. There are more voices, and particularly more diverse, global, and inclusive voices by every metric, producing knowledge. There is an ability for people to contribute to public conversations and public understanding in a way that previously there wouldn't have been. And I can't stress enough what a profound transformation in the media environment that is. We're no longer even thinking about gatekeepers like TV networks or satellite operators. This is now a media environment where, theoretically, anybody can participate. And that doesn't mean that inequality is instantly solved. That doesn't mean that anybody will choose to participate. But there is at least the possibility of that. And that's a really positive shift, in my view. The perils are that knowledge production, in the sense of verified, authoritative information, has been undermined. And that's a really fraught question historically. I tend to think about social media issues in historical context because none of these problems are truly new. I just think we're seeing them in a new technical context. Go back to something like the production of the recognition that the Earth is not at the center of the universe, but that actually the Sun is at the center of our solar system and the planets all revolve around it. The production of that scientific knowledge was complicated. It was a process that took many years. The knowledge that the Sun is at the center of the solar system was heretical when it first came onto the scene. And it was only over time, through a public contestation of scientific fact, that we came to understand that actually this thing is true. Thomas Kuhn has written really persuasively about the history of scientific knowledge production. And I think social media fits right into that narrative. You think about social media as a space for debate of what's happening in the world, and you see that people bring all sorts of different facts to the table, and we hash them out. And there's the possibility there that somebody, some anonymous Twitter account, is offering the next world-breaking insight, like the Sun being at the center of the solar system. But there's also the possibility that it's somebody who's totally making it up. Or worse yet, that it's somebody with malign intent who's intentionally trying to manipulate the information environment. And so I think the egalitarianism that can lead to knowledge production also creates enormous risks when it comes to the production of fact and scientific knowledge. And those tensions are not new or unique to social media, but they certainly exist in this current moment, and I would argue have been accelerated by some of the technical developments we've seen.

Alan Jagolinzer:

As a business professor, I'm curious your thoughts about the ownership structures of these institutions and the degree to which they are profit-oriented businesses. We're talking about egalitarian sharing, but we're also talking about a very defined ownership structure that is by no means egalitarian, that is also doing quite a bit of wealth accumulation. Can you speak a little bit to the influence that might have throughout the system and the implications?

Yoel Roth:

I don't think you can understand an institution like a social media platform without understanding all of the influences on it. And economics is front and center as an influence. During the acquisition of Twitter, we had a lot of conversations within the company about the so-called fiduciary obligations of Twitter's board of directors and C-suite. Twitter's employees, there were about 8,000 of us, were in many cases upset about the company potentially being sold to a new owner. And there was this feeling that Twitter is important in the world, the work that we're doing is important in the world, and it seemed weird that all of a sudden one person with this incredible accumulation of capital could just come in and buy the platform. And executives kept saying, yeah, that's all true, and we still have a fiduciary obligation to the shareholders of this corporation to pursue their financial interests, and an offer of $54.20 per share is maximizing shareholder interests. It might not be maximizing the public's interest, but it is maximizing shareholder value. And that was a really bitter pill for folks to swallow when you think about a platform like Twitter. And so I'll zoom out a little bit from Twitter and say, I think this is true of so-called user-generated content platforms more generally. Look at some of the recent backlash to policy changes on Reddit. Reddit is a platform that is extraordinarily community-driven, even more so than Twitter. Users produce the content, but they also organize the whole community. They create subreddits, they moderate their own subreddits. And then all of a sudden, Reddit Incorporated, the company, made a series of central decisions that reduced community agency and ownership and changed the ability of people to access and work with their data in the way that they wanted, and the community felt betrayed by those decisions. And you have to see it from both sides. On one hand, Reddit could not exist but for the contributions of the community. But on the other hand, Reddit Incorporated is a company that operates according to the rules of Wall Street, and they have a P&L line, and they need to think about how they're going to make money consistent with running the servers and paying a dividend to their shareholders and all the stuff that they're expected to do. Those are always going to be factors. And I think it very much does influence the ways that companies operate. That's not the only influence on corporate behavior, right? Economics is a big one, but I'll throw out a couple of other things that I think are major influences on how social media platforms work. The first is some notion of their own values and perspectives. Companies have ideas and companies are built of humans. And especially in Silicon Valley, there's this myth of the founder. People have values and goals and desires, and they bring those goals and desires to the products that they're building. It matters that some of Twitter's founders and executives had very strong views about free speech. It matters that the company's new owner has a very strong view about free speech. And so those values are one influence. A second influence is regulation. We talk about this one a lot, but governments have opinions about the right ways for social media to operate. And so you have the Online Safety Act in the UK and the Digital Services Act in the EU. In the US, you have Section 230 and a big set of question marks about whether we'll see other laws.
But regulation is certainly an influence on how companies operate. A third factor is advertisers. They certainly are driving the economic considerations we were talking about, but advertisers have opinions. This notion of brand safety has emerged as a really dominant discourse in platform governance. And it's really rooted in advertisers saying, if we're going to pay money to platforms, we are going to have certain expectations of what shows up and doesn't show up next to our advertisements, or the money will stop. And platforms are shaped by the desires of those advertisers. Fourth, users. I think we sometimes forget that user-generated content platforms actually can't exist but for the contributions of users. And so it's worth noting that there have been successful user advocacy campaigns. And I'll turn again to Reddit, right? Users have objected to some of the company's recent decisions. And in some cases, Reddit didn't back off. But in other cases, they did. And so there is a power that users and communities have to drive change in platform decision making and governance. And then the final one I'll throw out here, and this one is I think the least studied and scrutinized, but in my opinion, the most important. App stores are an incredibly significant influence on how platforms make decisions. These days, the vast majority of folks accessing social media are doing so on their phones. And nearly everybody is doing so through apps distributed by either the Apple App Store or Google Play, which have their own rules, their own logic, and their own expectations of what is acceptable and what isn't. And all of that influences how social media platforms operate. So this is all a very long way around of saying economics is one influence, but there's a whole bunch of other factors that are also concurrently shaping the decisions that social media platforms make.

Ceejay Hayes:

I mean, there is a lot in what you just talked about in terms of how social media executives make their decisions. You wrote an article about this in the New York Times, where you talked about the experience you had after your departure from Twitter and how that is emblematic of the ways that governments and particular interests are reshaping how communication happens on social media platforms to protect a certain brand, a certain sphere of narratives. Could you give a quick summary of that article: how your experience around your departure from Twitter, that interaction with Elon Musk, and other examples of what kind of speech can happen on social media platforms exemplify how this impacts the kinds of decisions that social media companies make?

Yoel Roth:

I think a lot about the transformation of the world, really, that's happened when we layered global social media platforms atop the traditional geopolitical order of nation-states. And for a while, it seemed fine, because social media platforms were where we would post pictures of what we had for lunch and where you would talk to grandma. But then all of a sudden, platforms became really politically significant, and the turning point really was the 2016 election in the United States. That was a moment when many people in the West woke up and realized that the political stakes for social media were considerable, and that, one, there were nation-states already involved in the politics of platforms, and two, we needed to think really seriously about how to appropriately govern and regulate and manage some of these spaces. In the years since then, I think the defining trend has been that content moderation has become this incredibly politicized question. But interestingly, the politics pull in wildly different directions in the US, the UK, Europe, and elsewhere around the world. There's a great article out from the Knight First Amendment Institute at Columbia by Anu Bradford that talks about the fall of the American digital empire. The idea behind this article is that for many years, the dominant Silicon Valley platforms, Facebook, Twitter, Google, YouTube, have been structured by American regulatory supremacy: the idea that Section 230, by providing an immunity shield to platforms, gives them an enormous amount of space to be as hands-off as they want to be about governance. It empowers companies to make these decisions for themselves. What Anu Bradford argues is that there are actually two other emerging dominant regulatory regimes these days that have supplanted the American model. They are the European model, and especially the influence of the Digital Services Act and GDPR, and then the Chinese model and the influence of the Belt and Road Initiative and Chinese governance and top-down control of the internet. And her argument is that if we think about internet regulation just in an American context, we're kind of missing the point. I think it's worth positioning these questions about government's relationship with social media in a global context that says it's really complicated and it pulls in opposing directions. In the United States, the dominant narrative is censorship, and this is really what I've been writing about since I left Twitter. The argument, particularly coming from the American right, is that there is an over-moderation of free speech, and there's a focus on the political bias of moderation processes. I should say the alleged political bias of moderation processes, because it turns out that if you look at every piece of peer-reviewed academic research that has ever been published about content moderation, you find that there is zero empirical support for the claim that conservatives are systematically censored by Silicon Valley platforms. But whether or not the claim is true, it still plays out again and again and again in congressional hearings, in lawsuits, and I would argue in attacks on some of the people who are working on and studying these issues. I experienced some of those attacks myself after I left Twitter, but there have also been a number of researchers working in this space studying disinformation and election security who are, I would say, courageously doing this work under constant fire.
They're doing research while responding to subpoenas and Freedom of Information Act requests and threats. And all of that creates a chilling effect on research, and also on moderation more generally. Platforms, in my view, are shying away from making moderation decisions that they think could provoke some type of political outrage. And you can see the effects of this in programs like Crosscheck at Meta, which is a program for giving some extra scrutiny to moderation decisions that impact prominent users and politicians. Those users get an extra layer of review that ordinary folks like you and me do not. I also wrote in the New York Times about some of Twitter's decisions about politicians, where I observed the company not disregarding the rules, but at least bending them a little bit to try to avoid angering politicians. And so I think all of that really matters. And in the American context, it's pulling platforms towards a more laissez-faire perspective on speech. But in Europe and in the UK, things are pretty different. If you look at the Digital Services Act, and especially at some of the recent statements from Thierry Breton, the expectations are that platforms will aggressively moderate. If you look at some of what Ofcom has been saying since the Online Safety Act received royal assent, Ofcom are adopting a perspective that platforms will need to effectively moderate against egregious harms. The U.S. perspective increasingly seems outside of the global consensus on how to govern platforms. And that's a really interesting and new development.

Ceejay Hayes:

I kind of see a tension, at least in the American context, between the way that content moderation gets attacked, gets labeled as anti-conservative or as biased against conservatives, and this very grassroots knowledge production that social media facilitates. Because these people are now hip to the way that social media validates them as a source of knowledge, they can go into these congressional hearings and say, I actually am living proof that this content moderation inhibits conservative speech on the internet. And so I think that tension kind of always exists. I wonder how much social media encourages active listening versus just being heard, because as a user, I see that being heard, being listened to, is rewarded with engagement, whereas listening doesn't produce the same reward. So there isn't that essential norm of, if you have a conflicting opinion, perhaps engage in a way that's more of a dialogue than a debate. Is listening built into social media platforms? Can it be built in? How do we deprioritize just being listened to across social media?

Yoel Roth:

It's not obvious to me that even IRL people have especially good active listening skills. It's something that we have to teach our students, something that we have to coach employees on, and something we have to remind ourselves of in our relationships with our loved ones and spouses. Active listening is hard and humans are bad at it. We really like talking; we don't so much like listening. And so I think that dynamic plays out on social media too. But it's a bit of a mixed bag. One of the truisms of social media is the so-called 90-10 rule, this idea that the vast majority of folks using social media are lurkers. And so by the metrics, most people who use social media platforms actually are listening. The vast majority of folks on a platform like Twitter or Instagram or Facebook or Bluesky or Mastodon aren't posting, but they are consuming. We observed this at Twitter, and I think every social media platform sees that there are a lot more lurkers than there are posters. Where it gets interesting for me is why that's the case. And who knows, right? It could be that people just don't have anything that they feel is important enough to post publicly. But another explanation, and one that I think is really important, is that many people are scared to participate on social media with the ways that platforms are currently structured. Last year, UNESCO conducted a study of women journalists and some of the threats that they face when they engage online, and the results of the study were really striking. They found that 30% of respondents in their survey, and these are women journalists, people whose job requires them to be public and engaged and to produce knowledge in public, indicated that they self-censor on social media because of the abuse that they receive. 20% of women journalists surveyed by UNESCO said that they've withdrawn from all online interaction on the basis of concerns about abuse. When we would do research about safety and participation at Twitter, we saw the same thing. For many years at Twitter, we saw that the number of new tweets posted on the service was steadily declining. And this was true before Elon Musk bought the company. It's accelerated even more since then. But the number of tweets on Twitter was declining from when I joined the company in 2015 to when I left in 2022. So the question was, why is that happening? And we would ask users, we'd say, hey, you're on Twitter a lot, why don't you post? What we would hear again and again was, I don't want to get harassed. I don't want to get dogpiled. I don't want to get canceled. I don't want all of these really negative effects that can come along with participation. And so I think that's an interesting obstacle and challenge. We have platforms that are incentivizing only the people who are the most powerful and already feel the safest to participate. And I think those folks probably tend to not be as invested in active listening. They tend to be more invested in broadcasting a viewpoint, building an audience, and getting attention. And so I think there's some challenging dynamics to unpack there. But for me, it all comes back to the central task of how do we make more people feel comfortable participating on social media?

Alan Jagolinzer:

As I reflect back again on this as a for-profit enterprise, what I've seen is that if we run this as a game and we get to the point where people who no longer feel safe getting into a dialogue are withdrawing, and the types of trust and safety mechanisms that might make dialogue safe are also being withdrawn, then you only get a disproportionate weighting of one type of avalanche of information. And I don't see how that can generate profit at some level. In fact, if you look at the companies that are basically going only one way, I don't see the financial support coming from a traditional model of advertising, because advertisers are withdrawing, users are withdrawing, and there's a whole array of other social media platforms. And there's also, hey, let's just shut down our computer and go for a walk and talk. So where does this play out? How can these companies sustain themselves in an environment where we only platform the loudest barker who's got the most muscle and influence?

Yoel Roth:

I think the answer is you can't just platform the loudest barker, and you shouldn't just privilege their interests in moderation. And so I'll kind of break apart each of those things. The first is platforms have to see this as exactly what you said, which is an existential financial risk. Declining participation on social media is a problem. We know what the solution to that problem is. We have lots of research that supports it. The answer is moderation. If you look at one of the platforms that has most successfully attained widespread participation by folks, it's TikTok. Why is TikTok getting the audience that it's getting? Why is it attracting content creation? One explanation is that it is the most moderated platform of any of the mainstream platforms out there. I can't say for sure, but I would speculate that people feel more comfortable participating on TikTok because they know that it is a safer, moderated space. That's not sufficient, but I would say moderation is a necessary component of what will encourage broad participation. The second piece is kind of unrigging the deck when it comes to prominent voices in particular. A lot of how we've constructed social media platforms and social media governance to date has been focused on the prominent voices. Folks like Elon Musk and Donald Trump, Chrissy Teigen, Hillary Clinton, Barack Obama. We think of these big voices on social media as being the folks that governance has to solve for. We spend a lot of time, many podcast hours, talking about the right way to moderate Donald Trump. And the solution usually focuses on giving these prominent voices more space to speak their mind and be uninhibited. What if we turned that model on its head? What if we said, look, if you're a public figure with an audience of tens or hundreds of millions of people, you actually have a responsibility to be more constrained in what you are saying than somebody with an audience of five or ten people. That you have a responsibility, which platforms will implement in their policies, not to cause disproportionate harm as a result of your prominence. That's anathema in Silicon Valley right now. The entire way that Silicon Valley platforms are configured at the moment is about courting prominent figures, attracting them to use your platform, and then giving them space to do whatever the hell they want. The result of that, again and again, has been prominent voices causing disproportionate harm through social media platforms. It's a risky proposition to change that, but in my view, having more aggressive moderation focused on prominent figures could start to shift some of these dynamics in a way that encourages them to be more responsible, and that also creates more space for ordinary folks to participate safely in these conversations.

Alan Jagolinzer:

We just did a discussion on journalism in democratic backsliding. And one of the things that came up from one of our speakers was, basically, be careful what you're collaborating with, because if we take it to the endgame of a totalitarian-type environment, then you're basically told what your model is, and it's not clear that you can actually survive financially. The media at large, including traditional media sources, need to be really careful about what activity they're participating in, where it leads as an outcome, and their survivability beyond that. I don't know if you have any thoughts on that, but that was just a comment.

Yoel Roth:

I think it's exactly right. For me, what that comment suggests is that there is tremendous value in having a diverse and competitive media environment, certainly for news, but also even amongst platforms. And a lot of that plays out at the level of governance, but I think it matters for consumers that they have a choice between different platforms, that those platforms have different philosophies about governance, and that those philosophies get enacted in different ways that result in different experiences. And so I can choose: do I want to be on a less moderated platform or a more moderated platform? I think there's value in that, in the same way that I can choose what news station to watch or what newspaper to subscribe to. I think the issue right now is that we're getting pulled, in part because of all of these influences we've been talking about, towards a more homogeneous version of the internet that in the United States is pulled away from moderation, which is bad for consumers in certain ways. But it also is resulting in this largely undifferentiated media environment where all of these products have basically the same tools and basically the same lackluster policies and basically the same crappy enforcement. And in the end, that's bad for consumers.

Ceejay Hayes:

When you're talking about the onus on large-audience social media users, people who have tens or hundreds of millions of followers, and adding another layer of responsibility, of regulation, of moderation onto their content, that's also kind of a massive talking point in the UK in terms of free speech and having the right to say what you want. Here I hear a lot about the right to offend, not the duty to offend, but the right to offend.

Yoel Roth:

I like the idea of a duty to offend, a core responsibility we each have to offend others. I can get on board with that one.

Ceejay Hayes:

Look, I love a good read, you know, so I think, contextually, I have a duty to read. But when you're talking about adding this layer of moderation onto people with huge platforms, I wonder how it comes into tension with people's conceptualizations of free speech, and not just for these people with large followings, but across the board, you know, people who say, I have a right to say what I want to say on these social media platforms. And I feel like for social media companies, what you're suggesting, adding this level of moderation for people with high followings, comes into tension both with free speech and with the engagement that produces ad revenue. Because those kinds of polarizing hot takes, as you would say, are what get people on the internet and get people talking. They're what get follows, you know? I wonder if there is actually a version of social media where a moderated space is also a profitable one, where a non-hostile digital space is actually something that can generate profit.

Yoel Roth:

I think fundamentally, content moderation is a profit-promoting enterprise. There's this idea that moderation is censorship, censorship means less speech, less speech means less activity, and therefore there's a doom spiral of not making money, and that just empirically is untrue. Let's zoom out for a minute. Let's talk about human rights. A lot of social media companies, Twitter included, at least in the old days, would talk about content moderation being rooted in a set of human rights principles. Twitter would talk a lot about the Universal Declaration of Human Rights. And the UDHR lists a number of rights that all people have, things like safety and political self-determination and free speech. But it doesn't explain how to prioritize them or how to manage the tensions among them. In human rights legal lingo, they are non-hierarchical. There's a whole bunch of stuff and you're just like, well, these are all important rights, there we are. Trade-offs between those rights are inevitable, right? You're saying there is a trade-off between safety and moderation and speech. And so what matters here is how you balance those factors and whose rights, interests, and well-being you are privileging in the process. A key tension that emerges again and again and again for social media platforms is where the free speech rights of one user end and where the free speech rights of another user begin, or to put it even more directly, where one person's free speech rights contribute to diminishing participation by others. And so this goes back to the research I mentioned earlier, which is the chilling effects of abuse. This is well documented. It's documented for journalists, it's documented for women, it's documented for queer folks, it's documented for everyone. There is a reason most people on social media are lurkers. The typical perspective from a free speech maximalist is something like, well, people shouldn't self-censor themselves. If we got rid of moderation, then anybody can speak and you can participate. And so just go ahead and speak up. That doesn't work. If you look at actual studies of participation on social media, you find that that approach does not, in fact, maximize the quantity of free speech in existence in the world. If you are truly a free speech maximalist, your goal is not to make only the powerful 0.1% of speakers feel confident speaking. Your goal is to have more speech out there in the universe because you believe speech is good. And so, go back to the 90-10 rule. If only 10% of a community feels comfortable speaking, that's a whole lot of speech from the other 90% that you are leaving on the table. And we know how to get the other 90% engaged. The answer is moderation. It always has been.

Ceejay Hayes:

That's interesting because I see the 90-10 rule playing out, but I can imagine that different communities have a different understanding of who that 10% is and who that 90% is. There'll always be these free speech maximalists who are like, the 90% are the people who don't get to say what they want to say, you know?

Alan Jagolinzer:

That's true in classroom discussion. That's true in any group dynamic.

Yoel Roth:

100%. And the more that prominent voices keep speaking, keep being prioritized, keep being suggested algorithmically, you end up in a cycle where people feel those are the only voices worth listening to, which discourages participation, which incentivizes prioritizing those voices, and on and on and on until you end up in a media environment that is actually as top-down and elite-focused as what we saw in the 19th century. I agree with the impulse to move away from an elite-dominated media environment, but the way to do that is not by only protecting the interests of elites, which is what a lot of existing content moderation approaches do.

Ceejay Hayes:

There's something in what you just said that we don't have time for, but it's how echo chambers function and how echo chambers and algorithms silo us into particular trains of thought. All of these 10 percent communities actually do exist in silos, but they aren't interacting with each other across ideologies. And so we hear what we want to hear very often, but at times people also feel overrepresented and overheard on social media because they exist in these echo chambers. You are steeped in the social media space, so I trust your judgment: what would a healthy, less acidic social media landscape look like? What platforms do you see creating the kind of social media space that you think is healthy? And also, what kinds of regulations are contributing to a more hospitable social media and digital communication space?

Yoel Roth:

I don't think anyone has found a perfect solution here yet, but I'll talk about a couple of components of what I'm most optimistic about. The first is competition. I think for the first time in many years, we are in a space where there is genuine competition and innovation in the social media space. Because of, let's call it, Twitter's precipitous decline in the last year (I wonder what changed), we're seeing a lot of new companies enter the space. We're seeing Mastodon attain a lot more popularity than it has in years past. We've seen Bluesky. We're seeing Threads from Meta. And we genuinely don't know which of them, if any, is going to replace Twitter. It could be something else entirely. That space for me is exciting because it represents, for the first time in many years, genuine innovation about what social media could look like. A critical component of what we're seeing from a lot of the Twitter competitors is a focus on decentralization. And so there are a lot of technical reasons why decentralization and federation are interesting, but one of the most significant is that they're focused on user control and empowerment. The idea is that instead of having decisions made centrally for a community that has three billion people in it, which is what Instagram is, you can instead have smaller communities. Think of it like the difference between New York City and a suburb in New Jersey. The idea is that if you have more, smaller communities that can connect with each other, governance starts to work at a more human scale and is more effective. And so I can choose a community whose values and moderation approach I agree with. I think that's super promising. There are real challenges there. I think a lot of the emergent social platforms are running directly into all of the predictable safety and security challenges that every social media platform has to wrestle with. And they're doing it with none of the experience and technology and resources that the big centralized platforms have. I'm excited about this moment of competition, particularly because it's focused on promoting user choice. So that's one component, competition and choice. The second component that I think has to exist in a solution is a recognition of a shared floor of unacceptable content. A lot of solutions that we've seen to date have focused kind of narrowly on a space of objectionable content that is really focused on things like child sexual abuse and the promotion of terrorism, things that are patently reprehensible, and certainly that's important to deal with. But I think we need a broader theory of online harms that recognizes other types of content that could be dangerous. For example, we've seen how certain types of misinformation can inspire people to commit violent acts. We've seen the ways that dehumanizing rhetoric and insults can inspire genocides. Do I think it should be any one government's prerogative to designate certain types of speech as objectionable? No, I worry about centralizing that power with Ofcom, with the European Commission, with the U.S. House of Representatives. I worry about that. But I do think we need a theory of online harms that goes beyond just CSAM and terrorism and helps us think through and manage harmful online conduct in a more robust way. And then the final component of a solution here, in my view, is transparency. We talk a lot about transparency. It's one of the foundational components of the Digital Services Act.
It's all about getting data out of platforms, and of course that matters. But what types of data? Are we getting the right data, and is it actually helping consumers understand what's going on on social media? I would argue that we're not there yet. There's a lot of information that we know about platforms like TikTok and Instagram and Twitter, but there's a lot that we don't know yet. We don't know enough about their governance structures. We don't know enough about their relationships with governments. We don't know enough about who's trying to coerce platforms. For all that these issues have become really politicized in the United States, I think there's a kernel of truth there, which is that we still don't have enough visibility into the black box of Silicon Valley companies, and the quantitative data that they're putting out isn't by itself sufficient to address the public needs there. In a future state of more trustworthy social media, I would love to see a broader, let's call it an epistemology of transparency, that goes beyond just numbers and figures about how many accounts they ban and starts to think a little bit more holistically about the data that consumers actually need.

Ceejay Hayes:

You talked, a question ago, about decentralizing social media. One of the places I see that is in the Community Notes feature of X, you know, offloading this sort of moderation function to users. Do you think that's a healthy way of actually doing content moderation? Then, secondarily to that, do you think interventions like pre-bunking can keep misinformation from circulating in this space?

Yoel Roth:

I'm a researcher at heart, and so for me it always goes back to: what does the data show? I think we have some data about prebunks that suggests that they're super effective. Twitter used prebunks extensively during the 2020 election in the United States. And we observed that when people saw prebunks, they reported to us that they would pause and question what they were seeing at a kind of astonishing rate. Something like half of people who saw a prebunk about some misinformation narrative said that they really thought about it and questioned it. Something like 40% of people who saw prebunks said that they were less likely to believe those types of claims in the future. And that's one study from one platform in one context, but it was promising. And the more that we can study these types of issues, the better. I'm hopeful we'll see more of this data coming out of the Facebook and Instagram 2020 election study. We've seen the first batch of those papers, but there's a lot more coming soon. I would love to see those studies bear out how labels work and how prebunking works. But the early data on prebunks is, in my view, some of the most promising information we've seen. Community Notes is trickier. I like the idea. I love the idea of empowering consumers and communities to intervene in moderation and to define how this works. But I think it runs headfirst into some of the challenges around knowledge production on social media. I think of knowledge production as really being about the three Vs: volume, veracity, and velocity. Volume: there's a lot of stuff on social media. A product like Community Notes can only do so much and can only intervene on so much. But there's promise there, right? You're expanding the aperture of moderators from a team of platform employees to, theoretically, every person on the planet. There's some possibility there. Veracity is tough. In some ways, Community Notes is architected with a reputation system. It is built to prioritize notes that the community identifies as high quality. They've built new features to require people to link to their sources, the way that we tell students to cite things in their papers. That all seems good, but a lot of the research about how Community Notes works in the real world has found that it tends to be a bit of a partisan battlefield. There's research, the title of the paper is great, called "Birds of a Feather Don't Fact-Check Each Other." And the finding is that people tend to use fact checks as an opportunity to dunk on people on the other side of the political aisle. And so that sort of sucks. That undermines the promise of a feature like this as an intervention into veracity. But then the biggest risk here is the last V, velocity. Things move way too quickly for something like Community Notes to be effective. At Twitter, we found that the overwhelming majority of impressions on a tweet, between 90 and 95%, came in the first two to three hours after a post went up. If that's your aperture for intervention, and you're dealing with community interventions from hobbyists who have day jobs and families and kids and other stuff that they're doing, how in the world can you possibly hope for a community-sourced intervention to have an impact fast enough to matter?
And again, if you look at some of the recent data from researchers who have studied community notes during the conflict in Israel and Gaza, they've found that even when community notes are posted, they appear hours or days after the fact, which means most people are not going to see them. There's a community note, but if 95% of people have already seen the tweet before it was community noted, does the community note even matter? That's unfortunately where I think the feature runs into its limits.
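To make the velocity problem concrete, here is a minimal back-of-the-envelope sketch, not from the conversation itself: it assumes a post's impressions decay exponentially, with a half-life (roughly 0.7 hours) chosen only so that about 90-95% of impressions land in the first two to three hours, matching the figure cited above, and it uses hypothetical note delays to estimate how much of the audience a late community note could ever reach.

```python
import math

def share_seen_by(hours_after_post: float, half_life_hours: float) -> float:
    """Fraction of a post's eventual impressions that have already happened
    by `hours_after_post`, assuming impressions decay exponentially."""
    return 1.0 - math.exp(-math.log(2) * hours_after_post / half_life_hours)

# Assumed half-life (~0.7 h), chosen so that roughly 90-95% of impressions
# land within the first two to three hours, as described in the interview.
HALF_LIFE = 0.7

for note_delay in (2, 6, 24, 48):  # hypothetical hours until a note appears
    seen = share_seen_by(note_delay, HALF_LIFE)
    print(f"note appears after {note_delay:>2}h: "
          f"~{seen:.0%} of impressions already happened; "
          f"at most ~{1 - seen:.0%} of viewers could ever see the note")
```

Under these assumptions, a note that arrives even six hours after posting can reach well under 1% of the people who will ever see the original post, which is the arithmetic behind the point about notes arriving hours or days too late.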

Ceejay Hayes:

Yoel Roth, thank you for such an interesting and engaged conversation. Alan, do you have anything to say? Any last words?

Alan Jagolinzer:

No, I just want to say thank you. I really appreciate your time.

Yoel Roth:

No, it was a pleasure. Thank you so much for having me.

Ceejay Hayes:

Of all the things Yoel touched on, I'm most fascinated by how social media facilitates grassroots knowledge production. That's quite a powerful tool, accessible to anyone with an internet connection. And I wonder if the benefits of broadening the range of authoritative, legitimized figures on a given subject outweigh the risks. I don't want to underestimate the benefits. I think grassroots knowledge production is massively useful, in a generative way, to activism and resistance movements. However, its influence and accessibility also make grassroots knowledge production a conduit for those with nefarious intent. If I think long enough, I end up at content moderation, which takes me down another rabbit hole of questions. Ultimately, I believe that social media infrastructures will be significant in any kind of systematic strategy for depolarization. But I get the sense that this may be one of the biggest obstacles to overcome, especially in the context of free speech rights. Thanks again to Yoel for sharing his knowledge with us, and to Alan Jagolinzer for co-hosting with me today. Thanks to Jac Boothe for their editing prowess, and thanks to you, the listener, for giving us just a bit of your time. If you enjoyed this episode of CounterPol, please share and leave us a review. Until next time.
