Thirty-six million tweets and counting.

When Ukraine native and UNLV disinformation expert Mary Blankenship began researching the “information pollution” surrounding the Russia-Ukraine war, she had 12 million tweets to sift through and analyze.

In three weeks, that number has tripled.

“It’s a very rich field right now. It’s hard to keep up,” said Blankenship, a graduate student in the Department of Chemistry and Biochemistry and researcher for Brookings Mountain West. “One of the biggest portions of this war is the information warfare that is going on. And it can have widespread repercussions.”

In Russia, social media sites are now banned, and it’s illegal to refer to the conflict in Ukraine as anything but a “special military operation.” Calling it a “war” or “invasion” could result in a 15-year jail sentence for a Russian citizen.

International news sites are also prohibited in Russia, and the country is cracking down on people’s ability to access virtual private networks (VPNs).

“The population there is now really isolated in terms of the information that they’re getting. It’s becoming very much a repressed society, I would say close to a North Korea-level of isolation,” Blankenship said.

All the while, Russia’s messaging in the public sphere has moved from one goalpost to another as the conflict in Ukraine has escalated and evolved, and as the Kremlin sees which messages are working and which ones aren’t.

And as these kinds of disinformation messages get filtered to Russian citizens across the country, it’s a common occurrence, Blankenship said, for some to not believe Ukrainian relatives when presented with on-the-ground information that runs counter to what they’re hearing.

Not only that, but the disinformation about the war varies from region to region, complicating an already fraught situation where any delay in decision-making can imperil lives.

We caught up with Blankenship to understand the implications of information pollution on the ever-evolving conflict and its eventual outcome, why Russia is creating propaganda, how different regions of the world are responding in real-time to social media disinformation, and strategies to combat it.

What kind of disinformation messages are Russian citizens being exposed to, and what does the free speech situation in Russia look like right now? Has it worsened since the start of the war?

Russian citizens risk a lot when they speak out and express their opinions against the war.

It’s actually illegal to call this a war, and Russian journalists are hounded the most. You can go to jail for 15 years for spreading what the government terms “fake news” — which, in its view, includes describing the situation in Ukraine as a war.

The situation has definitely gotten worse, especially in the last three weeks. Prior to the war, you still had sort of this autocratic control, and you had your Russian-funded media agencies like Russia Today, but you were still able to access independent news sources and Western news outlets. Now, there’s none of that. They’ve even blocked social media websites, like Facebook, Instagram, and Twitter, as well as Deutsche Welle, which is an important media company in Europe.

Something else that makes the situation difficult is that VPNs are illegal. They’re used all the time, especially if you want a more secure connection, and the trick with VPNs is that you can set your location to somewhere else. If something is blocked in your country, you just say you’re in New Zealand, for example, and you can get access to it. They’re used a lot in oppressive regimes.

How does the believability of the disinformation vary among the Russian citizenry?

The older you are, the less education you have, the less wealth you have, and the more rural the part of Russia you live in, the more likely you are to side with Putin and the media propaganda. Crowds that are younger, more educated, and wealthier actually aren’t so supportive. If you think about it, that has a lot to do with access to information.

If you’re older, you’re not going to check the social media sites and internet for different sources, and therefore, you wouldn’t know that Facebook and Twitter have been banned recently. Your access to information is most likely going to be the Russian state-funded media sites.

And the Russian state media is becoming increasingly unhinged. They are actively talking about what a war — not just in Ukraine, but in Lithuania, Estonia, and Moldova — would be like. If you look at the historical perspective, Russian propaganda didn’t start just three weeks ago. It picked up right where the Soviet Union propaganda/disinformation left off. The sentiment that Russian state media has consistently tried to put out is a stance that is very much against Ukraine. That’s why, for example, when Russia first invaded Ukraine eight years ago in the Crimea/Donbass region — you didn’t have a mass uprising within the Russian citizenry. For the most part they still support Putin.

This also applies to sanctions. The people who believe Putin the most are not relying as much on Western goods. A babushka in a small Russian village is not going to care if Starbucks has left or not. So, the sanctions really don’t have as much of an impact on those demographics as they do on those that can afford Western goods and imports.

How has the Russian narrative evolved since the start of the war?

Before the invasion, most of the propaganda/disinformation was aimed at motivation: trying to make it seem like Russia had a reason to invade the country. For example, they put out claims that ethnic Russians were experiencing genocide in Ukraine and in the Donbass region. They manufactured videos and images of people getting beaten up.

As the war started, it very quickly shifted to a lot of doctored videos and photos making it look like the Russian forces were completely winning and overwhelming all of the Ukrainian forces. So, the purpose switched from motivation to intimidation, specifically aimed at Ukrainian forces and people to intimidate them into surrendering. But as we’ve noticed, that hasn’t worked.

And so, very recently, in the last five days, the disinformation switched back to motivation. And this time the focus is on justifying very specific and egregious potential Russian actions, for example, the use of bioweapons and nuclear attacks on Ukraine.

What are the differences between disinformation and misinformation and how can these forms of communication cause harm, especially in such a dangerous situation like the Russia-Ukraine war?

Disinformation is the spread of incorrect information with an intent to harm. This is, for example, Russian bots or Russian actors spreading lies that all Ukrainians are Nazis. The intention there is to harm and to make Ukraine look like the bad guy. Misinformation, on the other hand, is the spread of incorrect information without an intent to harm. So, this could be a concerned person, or a social media user, who sees information and wants to let everyone else know because they think it’s an important piece of information — but they haven’t verified its accuracy before sharing it.

This ‘information pollution’ shifts the focus from the actual issues into discussion of what is real and what isn’t, which can delay decision-making, or altogether stop decision-making. In a volatile situation like this, where so many people’s lives are at stake, even a small delay in decision-making to discuss this disinformation can have serious repercussions.

How does the information pollution vary from region to region across the world?

In Canada and the U.S. some of the biggest disinformation narratives falsely claim that this war was staged as a distraction. They contend that the real war that’s going on is in Canada or within the U.S. And this narrative mostly comes from conservative activists.

In the Middle East and in Africa, there’s a very different focus. And the focus there is termed as “whataboutism.” What about this war in this country? What about this conflict? Why did no one care about A, B, and C? In these regions, Russian disinformation and propaganda is really exploiting the past hypocrisies of the West, particularly the U.S. and their invasion of Iraq. These frustrations are valid, but they don’t help the current situation and only serve to feed into Russia’s narrative.

A lot of countries within Africa are in this really precarious state because they depend a lot on the European Union, but then also a lot on Russia, so it’s very difficult for them to pick a side. These are countries where disinformation can have a huge impact in terms of which way they sway and who they support at the end.

How can everyday people help to combat information pollution?

In a situation like this, it’s very difficult.

First, emotions are running high, and disinformation preys on emotions of fear and anger, especially toward another group. Second, so much is changing so fast that it’s hard to confirm data right away. Third, social media is a blessing and a curse. We get to see live updates about what’s happening directly from people on the ground, which can be very valuable, but those same channels can be exploited by disinformation bots and actors to plant false seeds.

If you come across information, especially a post that elicits a severe emotional reaction, it’s very important to take a step back and to wait either for the information to be confirmed, or try to do research yourself, especially through local Ukrainian news. It’s especially important to verify videos and images because a lot of them are old or doctored. There are even cases of videos and images being taken from video games. In the beginning of the war, there were videos circulating from previous Russian Air Force parades. These videos were being used to inaccurately show Kyiv being bombed.

Twitter users can use a tool called Botometer. It evaluates an account based on several variables and gives you a score that shows whether the account is more bot-like or human-like. You can also check when the account was made. There’s a large portion of accounts that were made in February and March that only discuss the war. And if the account name has a lot of random characters and numbers, it’s likely a bot.
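The red flags above can be sketched as a simple heuristic. This is only an illustration: the function `looks_bot_like` and its thresholds are hypothetical, and Botometer’s real scoring uses a far richer machine-learning model than these three checks.

```python
import re
from datetime import date

# Approximate start of the full-scale invasion, used as the cutoff
# for "suspiciously new" accounts (an assumption for this sketch).
WAR_START = date(2022, 2, 24)

def looks_bot_like(handle: str, created: date,
                   war_posts: int, total_posts: int) -> bool:
    """Flag an account that matches at least two of the red flags
    described above: new account, digit-heavy handle, war-only posting."""
    # Red flag 1: account created after the invasion began
    new_account = created >= WAR_START
    # Red flag 2: handle ends in a long run of digits, e.g. "peace_fan84731"
    random_suffix = re.search(r"\d{4,}$", handle) is not None
    # Red flag 3: the account posts almost exclusively about the war
    war_only = total_posts > 0 and war_posts / total_posts > 0.9
    return sum([new_account, random_suffix, war_only]) >= 2

# A freshly made, digit-heavy handle posting almost only about the war
print(looks_bot_like("truth_seeker48213", date(2022, 3, 1), 50, 52))   # True
# A long-standing account with varied posting history
print(looks_bot_like("mary_b", date(2015, 6, 1), 2, 300))              # False
```

Requiring two of three signals, rather than any single one, reflects the point that no one trait proves an account is a bot — real tools combine many weak signals into a score.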

If you do come across information you think is fake, it’s good to report it, but do not interact with it: do not repost it, do not like it, do not comment on it. Even if you’re disagreeing with the post or trying to disprove it, social media algorithms are noticing interaction within that post, so they’re going to continue pushing it out. One idea is to take a screenshot of the post and ask people to report it.

Are there any silver linings that you see at this current juncture?

Truthfully, not many. But I will say that I think the majority of people are seeing how wonderful Ukrainians are, and are standing behind Ukraine. They’re seeing that Ukraine wants to be a democratic nation. It wants to be a Western country. And it will sacrifice everything to be that.