Balancing Quantity and Quality: How X/Twitter’s Algorithm Influences Our Consumption of News
A new paper by Penn researchers, including Annenberg doctoral candidate Shengchun Huang, explores the dissemination of news on X/Twitter.
Are we only seeing the kind of news we want to see on social media? What effects do personalized algorithms have on our perception of news quality? Do algorithms help us serendipitously encounter information that we didn’t expect? These are the questions researchers are now asking as AI and algorithms infiltrate the information environments we turn to for political news.
To understand whether social media algorithms exacerbate political polarization, researchers from the Annenberg School for Communication and the School of Engineering and Applied Science at the University of Pennsylvania conducted a study of 243 X/Twitter users and over 800,000 tweets during a three-week period in late 2023.
The research team — Raj and Neera Singh Term Assistant Professor Danaë Metaxa, Department of Computer and Information Science doctoral candidate Stephanie Wang, Annenberg doctoral candidate Shengchun Huang, and Annenberg alum Alvin Zhou (Ph.D. '22), assistant professor at the University of Minnesota Twin Cities — set out to investigate how news in users’ algorithmic feeds differs from news in a chronological X/Twitter feed composed only of accounts users follow.
Their study, which Wang presented at the 27th ACM Conference on Computer-Supported Cooperative Work and Social Computing in San José, Costa Rica, performed a sociotechnical audit. This type of audit pairs traditional audit methods — analyzing content and browsing history, deployed directly with real users — with user experience surveys, yielding insight into both the information users encounter and their perceptions of it at multiple points in time.
At the highest level, the researchers confirmed that the X/Twitter algorithm significantly influences what users do and do not see relative to users’ independent choices in the accounts they follow. Why is this an issue?
“As we consume information, we form opinions and take actions based on those opinions,” says Huang, a member of the Center for Information Networks and Democracy at Annenberg. “When you only consume news aligned with what you believe, you miss a lot of what is happening in the world and are less able to see things from other perspectives. It is possible that algorithms of X/Twitter and other social platforms are filtering information and contributing to incomplete views of the world."
But how exactly are these algorithms influencing the information we see? The researchers hypothesized that each user’s feed would contain more extreme political content aligned with their beliefs, potentially pushing them toward the far ends of the political spectrum. However, when they compared the personalized algorithmic feed to the chronological timeline, they found the opposite.
“It turned out that, during the time we performed this audit, X/Twitter’s algorithmic feeds were presenting users with information that was milder and overall less polarizing than the chronological timeline,” says Wang. “We also found that the algorithmically curated feeds presented users with less news content in general, specifically less content containing links to news articles and more content featuring other types of information.”
While the personalized algorithm was not observed to push polarizing or noticeably controversial news, its significant influence over the kinds of content users saw has implications for anyone relying on X/Twitter for news at any point in time.
“During that particular moment, the information may not have been very extreme or disruptive, but this doesn’t mean we can rely on these algorithms to continue to operate in that way,” says Metaxa. “What concerns me is that users of these platforms have very little control over the algorithms. The lack of transparency, restricted APIs and the current controversies surrounding the direction and ownership of X/Twitter make it a challenging space for people to find and trust quality news.”
And even when users were able to find news content from legitimate sources, they tended to question its credibility.
“Users reported that just the fact that they read the news on social media made it less credible,” says Zhou. “Additionally, if that news content expressed opposing views or opinions, the user reported it as being even less reliable. It’s a very interesting phenomenon that touches on our natural human instinct and behavior. We don’t like to see things we don’t agree with, so we tend to doubt their credibility. Without the surveys in this audit, we would not have captured this insight.”
This is the first time sociotechnical auditing has been used in a social media news consumption study, and Metaxa plans to continue using this tool to investigate other social media platforms.
“These types of audit studies are very important for any system that aims to instigate human action or behavior change,” says Metaxa. “Search engines, generative AI, targeted advertising and social media all fundamentally rely on human interaction, and they influence people and society. We need to incorporate users’ experiences into our audits to evaluate how well these systems work.”
The data gathered through the sociotechnical audit in this study shows just how sensitive our perceptions are to the news we see on social media. But rather than placing all of the responsibility on the user, the research team believes platforms such as X/Twitter should take measures to provide a safe, reliable and informative media environment. In today’s era of “fake news,” the team believes effective solutions will come at the institutional level rather than the individual level.
This study was supported by research funds from the University of Pennsylvania and the University of Minnesota, in addition to an Amazon Web Services AI ASSET award that supported lead author Stephanie Wang’s doctoral studies.
This post originally appeared on the Penn Engineering blog, Penn Engineering Today.