“This is a Huge Wakeup Call”: Tech’s Role in Russia’s War on Ukraine
Media scholar Courtney Radsch says tech platforms should have been faster to address Russian government propaganda, misinformation, and censorship.
As Russia’s invasion of Ukraine continues into a fifth week, it has become unequivocally clear that the battlefields of this crisis involve both a literal ground war and a highly destructive digital war. Social media platforms, news sites, and apps are embroiled in an ongoing information war, where harrowing images and stories from ordinary citizens and independent journalists battle against media manipulation, disinformation, and propaganda campaigns coordinated by Russian state media.
As part of the government’s continued assault on free speech, nearly all independent Russian-language media outlets have been blocked or shut down since the war began, and the number of non-state media outlets continues to drop. The government has also blocked the Russian-language sites of foreign news outlets, including the BBC, Meduza, and Voice of America. Social media is under strict censorship as well: Facebook, Instagram, Twitter, and other platforms have been banned in Russia.
For Courtney C. Radsch, Ph.D., this rampant information war comes as no surprise. A journalist, researcher, and free speech advocate, Radsch works at the intersection of media, activism, and technology, with a particular focus on freedom of expression, human rights, and independent media sustainability. As she explains, tech platforms have been slow to act when it comes to issues like data transparency and oversight, platform regulations, and developing policies for addressing their use by governments and state officials.
In this Q&A, Radsch – a Visiting Scholar at the Annenberg School’s Center for Media at Risk with fellowships at the UCLA Institute for Technology, Law and Policy, the Center for International Governance Innovation, and at the Center for Media, Data and Society at Central European University – shares her insights on how platforms have allowed Putin’s propaganda to spread, why disconnecting the Russian internet in retaliation is an unsound idea, and what actions tech companies need to take now.
You’ve written about how information warfare is a central part of Russia's offensive. How are disinformation and propaganda affecting the daily lives of Ukrainian and Russian people?
Propaganda is a tried-and-true tactic in warfare, stretching back through history. What is new and different about the digitally inflected propaganda environment is that the scale, scope, speed, and sophistication of propaganda are unprecedented. What that means for people in Ukraine, Russia, and around the world is that information warfare is a central part of the broader conflict. Controlling the narrative and undermining efforts to convey any sort of truth or fact is part and parcel of information warfare.
The prevalence of digital images and videos, and the amount of information out there, makes it increasingly difficult for the average person – or even experts – to determine authenticity. Deepfakes are a growing concern. But even shallow fakes, like what we saw with a Zelenskyy video circulating online recently, mean that it is really difficult to determine truth.
What role do tech companies play in enabling or proliferating information warfare?
Social media and the firms on which we rely for communication have designed their platforms in a way that nurtures the spread of propaganda and disinformation, and facilitates the use of these platforms in information warfare. They’re designed around engagement, and engagement rewards extremity: things that do better on those platforms tend to be more extreme, or less factual.
They’re also based on an economic system of surveillance capitalism, which has allowed information about people to be collected and datafied, then used to create correlations that enable microtargeting and the creation of groups. They combine different types of individuals into groups and connect them in a way that was never possible before on this scope and scale. You have conspiracy theorists connected with climate change deniers, connected with QAnon supporters, and connected all throughout this with Russia’s Internet Research Agency, which has been very adept at leveraging the design of these platforms in its information operations.
And that isn’t new – it stretches back to at least 2016. The fact that these platforms are once again playing a role in information warfare isn’t surprising. They now realize that they have a responsibility of some sort, but I don't think they quite understand what that responsibility is. They reacted quickly after the first bombs started falling and bullets started flying, but the fact is, years of groundwork were laid for this approach to the invasion of Ukraine.
For example, Russian state media and officials were allowed to have presences on these platforms. And not just a presence, but millions of followers. RT Arabic and RT Spanish rank among the top news sites in Latin America and the Arab region. In these countries – and I would add, Iran, China, and other places that repress the internet and social media for their own populations – they’re allowed to have a presence, even though those platforms are restricted in the country.
Platforms haven’t effectively grappled with what to do in these situations, even though, again, this isn’t the first time it has come up. They're essentially making up policies as they go. On one day, they're censoring violent speech against soldiers and the next day they're allowing it, then they're going back. We saw Syria. We saw Myanmar, Afghanistan, and India. There are many examples of where platforms have been manipulated and haven’t taken responsibility for how they’re used, and where they actually help facilitate it. Let’s remember: Facebook has worked with many leaders in countries that have repressive tendencies to help them use the platform better.
Ukraine wants to disconnect the Russian internet, which you wrote about earlier this month. Can you explain why “kicking Russia off the internet” sets a harmful precedent?
There's allowing Russian entities to have a presence on specific platforms or to get services – for example, we've seen PayPal, Airbnb, and credit card processing companies withdraw from Russia, and that’s their right and potentially their responsibility as private companies. But what’s different is to say that we should take down all .RU or all Cyrillic websites. The Russian population – and particularly independent media, civil society groups, and academia – have websites and other services hosted on that part of the internet, which is run by Russia as part of the top-level domain name system. A lot of applications aren’t going to work if you remove that part of the root zone. That would potentially have more negative ramifications and second-order effects than, say, denying access to all these services. It sets a precedent for weaponizing the logic of how our open, interoperable internet works.
All sorts of media would lose access to their websites. We don't know what the ramifications on their archives might be. I don't think that's ultimately in our best interest, as people or countries that support democracy and a free and open internet. It will simply encourage countries to develop their own splinternets – essentially, their own version of the internet – so they can be self-reliant and not have to plug in to the open internet.
Whenever we talk about sanctions or what private companies can do in their domain, I think the goal is to target those who are responsible for the invasion – the ruling class and political leaders – but not those who depend on the internet for getting information out, for alternative perspectives from the ground, or for protesting. What would've happened if the international community had decided to shut down the internet in Egypt, Tunisia, or anywhere else during the Arab Spring? That would've been really problematic.
What can social media platforms do to help mitigate these issues?
One of the challenges is that a large share of consumer- and general user-focused internet application-layer services is dominated by U.S. and Western companies. And on one hand, there is the ability to really cause pain and pressure by denying services. So of course, U.S. companies are required to comply with U.S. government sanctions, but that has sent a strong signal around the world. I worry that is going to strengthen, say, the hand of China and its model of internet infrastructure, which is embedded in a surveillance regime and governmental centrality. I think that's dangerous, and it has emphasized the need for regulation of platforms and requiring that they develop policies for addressing their use by malign state actors.
Facebook, Twitter, and Google's YouTube have policies on coordinated inauthentic behavior and state media, but that's not sufficient. That needs to be something that all services consider, and it needs to be thought out further in advance. Something that arose during the Taliban’s takeover in Afghanistan is that there aren't clear policies on these platforms for what happens to official state accounts. They only have a published policy for the U.S., but their services are used around the world, so they need to have other policies in place.
As the war continues, what do tech companies need to focus on in their efforts to enact change?
If you're going to do business in a given country or in a given language, you need to have resources devoted to supporting that service before it rolls out. Many application-layer services don't have sufficient language support outside of English or other dominant languages. Addressing inequity in how resources, staff, and attention are allocated to different parts of the world is a core responsibility for U.S.-based platforms that claim to be focused on diversity, equity, inclusion, and structural racism.
Ukraine has tried for many years to get the platforms to pay more attention to what was happening in its information space, to little avail. We're now seeing that the hegemonic power of these platforms is further enabling problems around the world, without sufficient oversight or accountability. All platforms, and frankly, all companies, should be conducting human rights impact assessments of their products and services as a regular part of doing business.
Overall, this is a huge wakeup call about the fundamental need for greater transparency and oversight of data. Right now, we have to take platforms at their word about what they're doing and what's happening behind the scenes. Regulators need to come up with a privacy-protecting regime for data preservation and analysis that allows independent experts to assess how information flows and the impact of different platform actions, so that we can better understand the role of state-sponsored propaganda and how it infuses into the platforms and our communication systems.