Milton Wolf Seminar on Media and Diplomacy

AI’s Impact on our Information Sphere

By Timothy Dörr

2024 is a pivotal year for democracy, with major elections in Mexico, the United States, and the European Union. Amidst these political events, we also mark the second anniversary of OpenAI's ChatGPT, a technology that has ignited debates about its societal impact. ChatGPT is a groundbreaking large language model, a type of generative AI trained on vast amounts of text data. It excels at understanding and generating human-like responses, making it the most advanced and successful model of its kind to date.

This year's Milton Wolf Seminar, aptly titled "Bots, Bombs, and Bilateralism," focused on AI's influence on journalism, diplomacy, and democracy. A key question emerged: will AI contribute to a more misinformed public during this crucial election year? To explore how AI might reshape news production and consumption, we must consider four key questions:

  1. Will AI increase false information in legacy media?
  2. Will AI increase the amount of fake news online pretending to be media?
  3. Will AI change the way we consume news?
  4. What are some potential positive developments that AI might facilitate?

Before answering these questions, it is essential to delve deeper into the concept of fake news and its implications.

Fake news, and the difference between misinformation, disinformation, and bias

"Fake news" is commonly used to describe any untruthful or misleading information. Former President Trump popularized the term by labeling various sources, such as CNN, Joe Scarborough, and the mainstream media, as fake news. In academic circles, however, "fake news" is defined more precisely as "fabricated information that mimics news media content in form but not in organizational process or intent" (Lazer et al., 2018, p. 1094).

Two other important terms to understand are misinformation and disinformation. According to Lazer and colleagues (2018), misinformation refers to misleading or false information spread without harmful intent, while disinformation is false information deliberately spread to deceive people. The crucial difference is thus the intentionality behind creating and sharing misleading information, rather than the content itself.

Additionally, we must consider media bias. Unlike the previous terms that relate to the truthfulness of information, media bias involves presenting information in a way that leads readers to false conclusions without explicitly stating falsehoods. This can occur through selective reporting, word choices, and other deceptive strategies.

Having defined these concepts, let's now turn to the questions of interest.

Will AI increase false information in legacy media?

While misinformation is a hot topic, recent research suggests that it might not be as prevalent as feared. Only about 0.15% of the content the average American consumes is actually misinformation (Allen et al., 2020). However, this data was collected before the widespread availability of AI tools like ChatGPT. Consequently, a key topic at the Milton Wolf Seminar was whether AI might increase misinformation in established and reputable media, such as the New York Times or the Washington Post.

People involved in the US news production system, such as journalists, expressed skepticism about this possibility. They noted that AI tools are used sparingly and that articles undergo stringent checks after being written. Instead of relying on ChatGPT to write articles or answer questions, journalists primarily use the technology for word finding, overcoming writer's block, and summarizing breaking news. Therefore, according to journalists, it is unlikely that reputable news sources will increase misinformation due to AI, at least for now.

That said, the low prevalence of outright misinformation does not mean readers are not misled. Rather than focusing solely on fake news, we should pay more attention to slanted and biased information. Such content may contain no explicit falsehoods, yet it can still mislead readers and distort their worldview. It is arguably even more powerful at misleading than mis- and disinformation: because it stays closer to the "truth" and only twists it slightly, it sounds mostly familiar and is therefore harder to spot.

Will AI increase the amount of fake news online pretending to be media?

A more likely way AI could increase misinformation is through fake news websites that mimic real news sites but lack journalistic rigor. These supposed news sites are often strategically placed to mislead people, and AI poses a legitimate threat by lowering the cost of such operations. ChatGPT can assist not only in writing or rewriting stories and headlines but also in coding website backends and creating bots to push articles from these fake news sites. AI thus dramatically decreases both the cost of content creation, especially when article quality is not a priority, and the cost of building the surrounding infrastructure.

For instance, the New York Times recently documented how Russian actors created various news sites resembling legitimate ones to spread strategic misinformation. Names like D.C. Weekly or the Miami Chronicle are intentionally similar to those of real news outlets in order to mislead readers. Additionally, most links on social media are shared without the sharer having read the article, meaning that fake news sites only need a convincing name in the link to push their narratives through headlines alone.

Will AI change the way we consume news?

Beyond simplifying the creation and sharing of fake news websites, another concern is how AI might influence the news sphere through personalization. News personalization—reranking stories based on user engagement or other information—is already common, even among reputable media sources. While this aims to show users the stories they care most about, it can also reduce the shared news sphere and increase social distance, as your news reality might differ significantly from someone else’s (Møller, 2022).

A potential future development is that some newspapers might take personalization further by not only ranking news stories differently for each user but also creating personalized news stories. As for-profit businesses, newspapers aim to keep users on their sites as long as possible, and AI makes it easy to generate content tailored to individual interests. Even if this does not lead to misinformation, it could result in different aspects of a news story being emphasized. Consequently, not only would we read different stories, but even if we read the same story, it would be presented differently.

This trend risks further eroding a shared understanding of reality. While there are no reports of this happening yet, the drive for user engagement might make it too tempting for news sites to resist.

What are some potential positive developments that AI might facilitate?

While it is important to examine AI's potential negative impacts on the information ecosystem, it is also valuable to consider its potential benefits. Just as AI can reduce costs for bad actors, it can also enable citizen reporters. AI can drastically reduce the effort needed to turn notes into well-written articles, making it easier for everyday people to share what is happening around them. This could counteract the decline of local journalism, which is disappearing in many areas, especially rural communities.

However, we have seen similar hopes before, with the advent of social media platforms. Maria Ressa (2022) was initially hopeful that Facebook would enhance citizen journalism and increase access to information. Unfortunately, these hopes did not pan out, and such platforms have often done more harm than good in spreading information.

Despite this, we cannot dismiss the possibility of positive AI applications in journalism. A pertinent question is under what circumstances AI can benefit journalism and promote an informed public. One promising idea is using AI to adjust the difficulty of news stories, making them more accessible. Financial topics, for example, are often written for a more educated readership, which can be challenging for those with less background knowledge. AI could rewrite the same story at different difficulty levels, making all stories accessible to everyone.

This concept is not new and has been around for over a decade, but it required human effort, making it hard to scale. With AI, we could envision a "difficulty slider" next to news articles. If a story is too complex, readers could adjust the slider, and AI would simplify the content. This would make news more accessible to all, provided the original article is accurate and well-written.

Conclusion

As with all technological developments, predicting the future impact of AI on the news sphere is challenging. While we can't be certain, we can speculate. Often, new technologies elicit swings between extreme optimism and pessimism, but the reality usually falls somewhere in between. In this post, I have outlined some potential interactions between AI and the news sphere. Although I don't believe AI will diminish the quality of traditional news sources, it does have the potential to flood the internet with low-quality and misleading news masquerading as legitimate news.

We often evaluate how a new technology will disrupt our current society as it stands. However, society will also evolve in response to new technologies, making it even harder to predict their long-term consequences. For example, one possible response to the influx of low-quality online content might be a greater emphasis on media literacy, better integrated into school curricula. This could mitigate the negative effects of increased fake news by equipping people to scrutinize sources more effectively.

What will happen? Only time will tell. Personally, I am not overly optimistic at the moment. However, I believe we should not only focus on preventing potential negative outcomes but also consider how AI can amplify positive developments.

References

Allen, J., Howland, B., Mobius, M., Rothschild, D., & Watts, D. J. (2020). Evaluating the fake news problem at the scale of the information ecosystem. Science Advances, 6(14), eaay3539.

Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., ... & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094-1096.

Møller, L. A. (2022). Recommended for you: How newspapers normalise algorithmic news recommendation to fit their gatekeeping role. Journalism Studies, 23(7), 800-817.

Ressa, M. (2022). How to Stand Up to a Dictator: The Fight for Our Future. HarperCollins.
