What Big Data Reveals About Online Extremism
Homa Hosseinmardi and her colleagues at Penn’s Computational Social Science Lab studied browsing data from 300,000 Americans to gain insights into how online radicalization occurs — and to help develop solutions.
As extremist groups and fringe movements like QAnon have gained mainstream awareness, their ability to rapidly spread misinformation and conspiracy theories has put social media platforms under heightened public scrutiny. Facebook, Twitter, and other tech companies have been reprimanded by Congress and media outlets alike for failing to seriously address online radicalization among their users. As the United States has grown increasingly politically polarized, the question of whether these platforms’ algorithms help users discover extreme and misleading content, whether unintentionally or by design, has become more urgent.
But as Homa Hosseinmardi, Ph.D., points out, one major platform has received surprisingly little attention: YouTube. Hosseinmardi, a senior research scientist and lead researcher on the PennMAP project within the University of Pennsylvania’s Computational Social Science Lab (CSSLab), a joint effort of the School of Engineering and Applied Science, the Annenberg School for Communication, and the Wharton School, notes that while YouTube is often perceived as an entertainment channel rather than a news source, it is perhaps the largest media consumption platform in the world.
“YouTube has been overlooked by researchers, because we didn't believe that it was a place for news,” she said. “But if you look at the scale, it has more than two billion users. If you take that population and multiply it with the fraction of news content watched on YouTube, you realize that the amount of information consumption on YouTube is way more than on Twitter.”
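As a rough sketch of that scale argument, the back-of-envelope calculation below multiplies a user base by an assumed news fraction. Aside from the two-billion figure quoted above, every number is an illustrative placeholder, not a measured value from the research.

```python
# Back-of-envelope comparison of news consumption volume, purely illustrative.
# The 2 billion YouTube user figure comes from the quote above; every other
# number below is a placeholder assumption, not a research finding.

youtube_users = 2_000_000_000          # monthly users (from the quote)
youtube_news_fraction = 0.05           # assumed share of viewing that is news
twitter_users = 350_000_000            # assumed monthly users
twitter_news_fraction = 0.15           # assumed share of activity that is news

youtube_news_volume = youtube_users * youtube_news_fraction
twitter_news_volume = twitter_users * twitter_news_fraction

print(f"YouTube news 'audience units': {youtube_news_volume:,.0f}")
print(f"Twitter news 'audience units': {twitter_news_volume:,.0f}")
# Even with a small news fraction, YouTube's sheer scale can dominate.
```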
Hosseinmardi’s research is driven by questions about human behavior, especially in online spaces. Her Ph.D. research addressed harassment and bullying online, particularly on Instagram and Askfm, a semi-anonymous social network. Before coming to Penn, she was a postdoctoral research assistant at the University of Southern California’s Information Sciences Institute, where she studied personality, job performance, and mental health in work environments using employees’ physiological signals such as heartbeat and breathing.
In 2019, she joined the CSSLab, directed by Stevens University Professor Duncan Watts. In her work with the Lab, Hosseinmardi uses large-scale data and computational methods to gain insights into issues including media polarization, algorithmic bias, and how social networks affect our lives.
Several years ago, a team of researchers including Hosseinmardi and Watts became interested in the relationship between online radicalization and YouTube news consumption. To what extent do YouTube’s algorithms foster engagement with heavily biased or radical content, and to what extent is this influenced by an individual’s online behavior?
“We’ve all heard anecdotes about YouTube radicalization — a person watched one video and ended up at a conspiracy theory,” Hosseinmardi says. “We realized that general audiences perceive this as proof of a systematic problem with the algorithm.”
She set out to answer a specific question: if people start from somewhere on YouTube and watch a few videos in succession, do they all end up at the same destination?
The team investigated this question by studying the individual browsing behavior, on YouTube and across the web, of more than 300,000 Americans from January 2016 through December 2019. They also obtained users’ demographic data, including age, gender, race, education, occupation, income, and political leaning, ensuring a representative sample of the U.S. population.
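As a rough illustration of the kind of panel analysis this involves, the sketch below computes each panelist’s share of YouTube viewing that falls on far-right-labeled channels and summarizes it by political leaning. The file names, column names, and labels are hypothetical assumptions, not the study’s actual data or pipeline.

```python
# Minimal sketch of a browsing-panel analysis in the spirit described above.
# The schema (user_id, timestamp, domain, channel_label) and the label values
# are hypothetical; the study's actual data and methods are not reproduced here.
import pandas as pd

events = pd.read_parquet("panel_browsing_events.parquet")   # hypothetical file
demo = pd.read_csv("panel_demographics.csv")                # hypothetical file

# Keep only YouTube views and attach each panelist's demographics.
yt = events[events["domain"] == "youtube.com"].merge(demo, on="user_id")

# Per-user share of YouTube watch events on far-right-labeled channels.
share = (
    yt.assign(is_far_right=yt["channel_label"].eq("far_right"))
      .groupby("user_id")["is_far_right"]
      .mean()
      .rename("far_right_share")
      .reset_index()
)

# Compare that share across self-reported political leaning.
summary = (
    demo.merge(share, on="user_id")
        .groupby("political_leaning")["far_right_share"]
        .describe()
)
print(summary)
```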
In August, they published their results in Proceedings of the National Academy of Sciences of the United States of America (PNAS), finding that the conventional wisdom isn’t always right.
The researchers found no evidence that engagement with far-right and “anti-woke” content is systematically caused by YouTube recommendations. Rather, it largely reflects user preferences and the broader online content ecosystem. Consumers of far-right content arrive via a variety of pathways, such as search engines, links from other sites, and previously watched videos.
Yet, the fact remains that individuals continue to be radicalized online. As Hosseinmardi and other experts see it, ongoing and solution-oriented research is paramount to addressing this issue. In early November, the team’s related 2020 study, "Evaluating the scale, growth, and origins of right-wing echo chambers on YouTube", was cited in a U.S. Senate Committee on Homeland Security and Governmental Affairs hearing that examined social media platforms’ role in the rise of domestic extremism.
Hosseinmardi and her colleagues regard their work in this area as an ongoing effort. PennMAP, an interdisciplinary and nonpartisan research project run by the CSSLab, is building technology to evaluate media bias and misinformation patterns across the political spectrum, and to track how information consumption affects individual and collective beliefs. The team is creating a scalable data infrastructure to analyze tens of terabytes of television, radio, and web content, while studying representative panels of about 500,000 media consumers over several years. They are also working with researchers from the Swiss Federal Institute of Technology to monitor the effects of YouTube policy changes and deplatforming on its users across the web, and to find out whether extreme content consumption is consequently reduced, or whether users simply replace YouTube with another platform that hosts similar content.
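As one illustration of the scalability problem, the sketch below aggregates watch time over data shards one at a time, the kind of chunked pattern that keeps memory usage bounded when a corpus runs to tens of terabytes. The file layout and field names are assumptions for illustration only, not the project’s actual infrastructure.

```python
# Minimal sketch of chunked aggregation for data too large to fit in memory.
# File layout and column names are illustrative assumptions.
from collections import Counter
from pathlib import Path

import pandas as pd

watch_time_by_channel = Counter()

# Process one shard at a time so peak memory stays proportional to a shard,
# not to the full corpus.
for shard in sorted(Path("events/").glob("*.parquet")):
    chunk = pd.read_parquet(shard, columns=["channel_label", "watch_seconds"])
    watch_time_by_channel.update(
        chunk.groupby("channel_label")["watch_seconds"].sum().to_dict()
    )

# Report the ten channel categories with the most total watch time.
for label, seconds in watch_time_by_channel.most_common(10):
    print(f"{label}: {seconds / 3600:,.0f} hours")
```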
However, Hosseinmardi cautions, societal issues can’t be solved through policy reform alone. To a large extent, platforms like YouTube reflect the offline world. Misinformation is part of a larger cycle, wherein biased or misleading content can dehumanize marginalized and minority groups, affecting others’ levels of empathy towards them in ‘real’ life. As misinformation continues to flow, hate and harassment keep spreading, and affected groups grow increasingly silent.
Ultimately, Hosseinmardi emphasizes, it’s our responsibility to think critically about the information we consume and accept as truth.
“I can’t claim that there’s no fault for any platform, but we shouldn’t forget our role as a society, and that people with a certain appearance, or people of a certain race or religion are being victimized,” she says. “The platforms are reflections of big problems in society that we need to care about more, compared to just pointing fingers at the platforms. They should do their part, but we should do our part as well.”