A New Study Uncovers How Information Spread on Facebook in the Lead-up to and After the 2020 Election

Professor Sandra González-Bailón and colleagues analyzed the spread of over one billion Facebook posts to reveal how information flowed on the social network.

By Hailey Reissman

The U.S. 2020 presidential election took place amid heightened concern over the role of social media in enabling the spread of misinformation. Facebook drew particular scrutiny, given earlier worries about its impact on the 2016 election.

In a newly published study in the journal Sociological Science, Annenberg School for Communication Professor Sandra González-Bailón and colleagues analyzed over one billion Facebook posts published or reshared by more than 110 million users during the months preceding and following the 2020 election.

Sandra González-Bailón, Ph.D.

“Social media creates the possibility for rapid, viral spread of content,” González-Bailón says. “But that possibility does not always materialize. Understanding how and when information spreads is essential because the diffusion of online content can have downstream consequences, from whether people decide to vaccinate to whether they decide to join a rally.”

The research team paid particular attention to whether political content and misinformation spread differently than other content on the platform. They also looked at whether Facebook’s content moderation policies significantly impacted the spread of information.

They discovered that, overall, Facebook Pages, rather than users or Groups, were the main spreaders of content on the platform, because Pages broadcast posts to many users at once.

Misinformation, however, was primarily spread from user to user, suggesting that the platform’s content moderation created an enforcement gap for user-transmitted messages.

“A very small minority of users who tend to be older and more conservative were responsible for spreading most misinformation,” González-Bailón says. “We estimate that only about 1% of users account for most misinformation re-shares. However, millions of other users gained exposure to misinformation through the peer-to-peer diffusion channels this minority activated.”

Analyzing Content Diffusion

The research highlights three paths by which content made its way to a user’s Feed on Facebook during the 2020 election. 

One involves content flowing directly from friends. Another is Pages, which are the typical mechanism for celebrities, brands, and media outlets to share content. The third is Groups, which users can join to connect to other users. 

Content shared via friends, Pages, and Groups generates different propagation patterns, which the researchers mapped using “diffusion trees,” representations of the width and depth of information sharing. In addition to these patterns, the researchers analyzed the reach of that propagation, that is, the number of people exposed to a given post.
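To make the idea concrete, the sketch below shows one way a diffusion tree could be represented and how its depth (length of the longest re-share chain), width (largest number of shares at any level), and reach (total exposures, not just re-shares) might be computed. This is an illustrative assumption, not the study’s actual code or data model; the ShareNode structure, field names, and example numbers are all hypothetical.

```python
# Minimal sketch of a diffusion tree for a single post (illustrative only).
from dataclasses import dataclass, field
from typing import List


@dataclass
class ShareNode:
    """One share event in a diffusion tree (hypothetical structure)."""
    sharer_id: str
    views: int                      # exposures generated by this particular share
    children: List["ShareNode"] = field(default_factory=list)


def depth(node: ShareNode) -> int:
    """Longest chain of re-shares starting from the root (how 'deep' the cascade goes)."""
    if not node.children:
        return 1
    return 1 + max(depth(child) for child in node.children)


def width(node: ShareNode) -> int:
    """Largest number of shares at any single level (how 'wide' the cascade gets)."""
    level, widest = [node], 1
    while level:
        widest = max(widest, len(level))
        level = [child for parent in level for child in parent.children]
    return widest


def reach(node: ShareNode) -> int:
    """Total exposures accumulated across the whole tree, not just the re-share count."""
    return node.views + sum(reach(child) for child in node.children)


# Example: a Page broadcasts a post (many views, shallow tree); two users re-share it.
root = ShareNode("page_123", views=50_000, children=[
    ShareNode("user_a", views=40, children=[ShareNode("user_b", views=12)]),
    ShareNode("user_c", views=25),
])
print(depth(root), width(root), reach(root))   # -> 3 2 50077
```

In this toy example, the broadcast by the Page dominates reach even though the tree is shallow, while a user-driven cascade would tend to be deeper, which mirrors the distinction the researchers draw between broadcasting and peer-to-peer spread.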

In this video, Professor Sandra González-Bailón provides insight into her research on how information propagated on social media during the U.S. 2020 presidential election.

“Most people online are lurkers, which means that most users view but rarely produce or re-share content,” González-Bailón says, “so merely calculating the number of re-shares doesn’t show the whole picture of what happens on social media. That’s why we also look at exposures, that is, the number of views a given post or message accumulated.”

The researchers found that, in terms of exposure, Facebook functions predominantly as a broadcast medium, with Pages (not users or Groups) acting as the main engine behind this broadcasting.

However, misinformation behaved differently, relying on peer-to-peer transmission.

“Pages initiate most of the large diffusion trees in our data, including trees classified as political,” González-Bailón says. “However, misinformation trees are predominantly initiated by users, and they can accumulate as many views as content propagated through broadcasting.”

Content Moderation

During the study period, Facebook employed emergency measures that intensified its content moderation. These measures are known as “break-the-glass” because, as the name implies, they were designed to respond to extreme circumstances and mitigate heightened risks, like the “Stop the Steal” campaign that erupted right after the election. The researchers found that periods of high-intensity content moderation were generally associated with drops in the overall propagation of information and, specifically, in exposure to misinformation. These drops are indicative of the influence that content moderation efforts may have at crucial junctures, including the moments when those efforts are rolled back.

What’s Next?

Social media platforms are evolving rapidly, adopting AI and other emerging technologies. With these changes comes the potential for misinformation to spread in new ways, as well as opportunities to discover more effective ways to curtail it.

According to González-Bailón, platforms need to work with external researchers to understand these changes and assess the effectiveness of their content moderation policies.

“The ability to control information flows gives much power to platforms, and this power should not be exercised outside of public scrutiny,” she says. “The public can only assess how effective platforms are in their content moderation efforts through publicly shared data and analyses.”