Policy brief within Kremlin Watchers Movement project – November 2022

20 January 2023

The following text analyzes the main trends in disseminating disinformation in the Czech Republic. It elaborates on prevalent disinformation narratives, defines key vulnerable groups, and highlights policy recommendations based on good practices from Estonia and Denmark.

This report was published by the Kremlin Watchers Movement team in December 2022. The VoxCheck team adapted the text for its readers.

Kremlin Watchers Movement is a project that has been running for almost three years as part of the effort to fight Russian malign influence and disinformation in Europe. Its authors, junior analysts, produce content about Russian malign influence and disinformation on social media, informing not only the expert community but also the wider public about the latest developments in this field.

Main disinformation narratives

As part of its monitoring efforts, Vox Ukraine identified 32 cases of Russian disinformation disseminated by the 11 Czech media outlets analyzed for the study. The organization defined three main disinformation narratives that appear in these outlets (with the number of cases in parentheses):

  • Ukraine is a “terrorist state” (7);
  • Nazi ideology is widespread in Ukraine (6); and
  • the West controls Ukraine for its own purposes (4). 

First narrative: Ukraine is a “terrorist state”

A Czech disinformation media outlet (in this case, Protiproud) reported that the terrorist attack on the outskirts of Moscow in which the Russian commentator Daria Dugina died was organized by “Nazis from Azov” and that the attack on the Kerch Bridge was the pinnacle of “Western terrorism in Ukraine.” A significant number of false reports claimed that Ukrainian forces were carrying out targeted strikes against civilians in Russian-occupied territories.

Second narrative: Nazi ideology is widespread in Ukraine

The second prevalent narrative concentrates on alleged Nazism in Ukraine. One of its key messages is that the “Nazi-terrorist Bandera regime” in power in Kyiv is carrying out terrorist activities against its own population. The Czech media spread a false report that the commander-in-chief of the Armed Forces of Ukraine, Valery Zaluzhny, wears a swastika bracelet, presenting it as alleged evidence that high-ranking military officials are adherents of Nazi ideology.

Third narrative: The West is controlling Ukraine for its own purposes

In the context of the narrative that the West is using Ukraine for its own ends, the disinformers at Sputnik News wrote that Ukraine has already become “American property” and that European aid and loans from the United States only contribute to prolonging the war. In addition, the Czech disinformation media outlet Parlamentní listy wrote that Russia is no longer fighting “the Ukrainian army armed by NATO, but the NATO army made up of Ukrainians.”

Dissemination of disinformation

Disinformation narratives are disseminated in various ways, such as on social media and websites, and through email. Disinformation flourishes through these channels because they are inexpensive to operate, reach many users, and can spread information quickly.

In the Czech Republic, fake news is largely disseminated on Facebook, where there are many groups catering to people harboring pro-Russian and anti-Western sentiments. Disinformation spread in these groups usually originates from pro-Kremlin media outlets with a Czech web presence, such as Sputnik News, Aeronet, and První zprávy. The administrators of these groups also often take inspiration from Russian state media; some members translate Russian news directly into Czech. Fake news disseminated through email also often originates from the same websites; whole paragraphs are frequently copied and pasted from these disinformation sites. The nonprofit groups Czech Elves and Manipulátoři regularly monitor these disinformation activities.

Vulnerable groups

Determining the groups most vulnerable to disinformation is critical for building resilience to its ill effects. Research shows that elderly people are most likely to receive and share disinformation. The same applies to children, who may be exposed to disinformation through their interactions with peers, parents, or educators.

According to Jaroslav Valůch, head of the Media Education Department at Transitions, although elderly people do not lack media literacy, it is difficult for them to navigate the digital media landscape in order to fact-check. As Valůch argues, senior citizens understand very well the messages they receive from the media, but they cannot evaluate why a particular message was created to reach them. Many older people use the internet today, and some are also active social media users. However, because they are not digital natives, there is much they need to learn about modern information technology. Younger people, in contrast, better understand the internet, how it works, and the fact that it is flooded with information disseminated with various intentions. Older people are accustomed to a less chaotic information ecosystem.

Valůch also points out the phenomenon of chain emails, which target almost exclusively seniors. Email can be used to spread information easily, and because it is one of the more traditional ways of communicating on the internet, it is easy for most seniors to use. As Valůch implies, people who forward chain emails do not do so with the intention of causing harm; they do so to warn those close to them about an alleged threat or to help them. Although Valůch claims that there is “too much panic” over chain emails targeting Czech senior citizens, we do not agree. While chain emails may not be the most important source of disinformation all year round, they do have strong potential to influence election results in favor of certain parties or candidates, and they therefore deserve attention. Chain emails are regularly monitored by the Czech Elves, who have created a database of these messages for closer analysis.

Children and students comprise another group vulnerable to disinformation. According to Petr Nutil, co-founder of Manipulátoři, the key issue is the lack of media literacy education in Czech elementary and secondary schools, which do not prepare students to critically evaluate the information they receive. Students are therefore unable to tell trustworthy media outlets from disinformation websites and are not well informed about the potential risks on the internet. Today’s children are exposed to the internet from a very young age; even when they are still too young to actively use the internet themselves, they are exposed to various information through their caregivers. As a UNICEF article suggests, children can learn how to tackle disinformation themselves when provided with quality media education.

Countering disinformation: recommendations and examples of good practice 

To effectively counter disinformation, researchers from the University of Cambridge have created an overview of individual-level interventions that can affect the spread of, susceptibility to, or impact of misinformation.

Boosting interventions

Boosts aim to “improve people’s competence to make their own decisions.” These reinforcing interventions seek to reduce individual susceptibility to misinformation. There are three types of such interventions: pre-education, critical thinking, and media and information literacy.

Nudging interventions

Nudges focus on behavior. Nudging is easy to implement on social media (for example, Twitter now asks people whether they are sure they want to retweet an article they have not yet read), is cost-effective, and is mostly nonintrusive.

Debunking

A popular approach to combating misinformation is to debunk misconceptions after they have already spread. Debunking initiatives such as Snopes, FullFact, and StopFake are numerous, and some of them have many followers on social media. Some technology companies, notably Meta (formerly Facebook), use debunking to moderate content on their platforms, drawing on both automated and human-centered methodologies.

Automated content labeling

Online platforms increasingly rely on automation to tag content quickly and at scale.

There are different types of content labels, such as fact-checking labels (e.g., “this article has been assessed as false by independent fact-checkers”), general or specific content warnings, and news credibility labels.
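For readers interested in how such labeling works in practice, the brief sketch below illustrates the general idea of a simple rules-based labeler. It is not any platform’s actual implementation; the claim list, credibility scores, thresholds, and label wording are hypothetical placeholders, and real systems combine machine-learning classifiers with human fact-checker review.

```python
# Minimal, illustrative sketch of automated content labeling.
# All data below (debunked claims, credibility scores, thresholds,
# label texts) are hypothetical placeholders, not any platform's real rules.
from dataclasses import dataclass

DEBUNKED_CLAIMS = {
    "zaluzhny wears a swastika bracelet",
}

SOURCE_CREDIBILITY = {  # 0.0 = not credible, 1.0 = highly credible
    "sputnik-news": 0.1,
    "public-broadcaster": 0.9,
}

@dataclass
class Post:
    source: str
    text: str

def label_post(post: Post) -> list[str]:
    """Return the content labels to attach to a post."""
    labels = []
    text = post.text.lower()

    # Fact-checking label: the text matches a claim already debunked.
    if any(claim in text for claim in DEBUNKED_CLAIMS):
        labels.append("Assessed as false by independent fact-checkers.")

    # News credibility label: the source has a low credibility score.
    if SOURCE_CREDIBILITY.get(post.source, 0.5) < 0.3:
        labels.append("This source has repeatedly published misleading content.")

    # General content warning keyed to a sensitive topic.
    if "war" in text or "invasion" in text:
        labels.append("For context on the war in Ukraine, see trusted sources.")

    return labels

if __name__ == "__main__":
    post = Post(source="sputnik-news",
                text="Zaluzhny wears a swastika bracelet")
    print(label_post(post))
```

However the matching is done, the output is one or more of the label types described above, attached to the post before it reaches users.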

Examples of good practices in other countries 

In the Handbook on Countering Online Disinformation at the Local and Regional Level, the European Committee of the Regions highlights two countries that have managed to effectively counter disinformation and could be used as models for similar activities in other European regions: Estonia and Denmark.

Estonia

Estonia has long been among the frontrunners in the European fight against disinformation. Over the past few years, it has implemented several measures that have helped the country effectively defend itself against disinformation. These measures include increasing media literacy, developing strategic communications, and raising awareness among public officials.

Why is Estonia such an experienced country in the fight against disinformation? In 2007, Estonia was subjected to a wave of cyber-attacks that hit banking systems, government websites, and the media. This event limited people’s access to services and the ability of journalists and the government to communicate normally with the population for several weeks. Although responsibility has not been officially attributed to the Russian government, the attacks likely originated in Russia.

Building on national security policy, the State Electoral Office developed its capacity to counter disinformation in the 2017 local elections and the 2019 national and European elections. As individual government departments generally did not have the capacity to cover the full range of skills necessary for countering disinformation, intense collaboration between different parts of the government was essential to achieving these objectives. An electoral working group was formed that brought together different government agencies.

In 2019, a guide for dealing with information attacks was published and made available to the public. The guide includes advice on how to prepare for and respond to disinformation attacks, as well as information on common methods of influence and on bots, and lessons for the future:

Threat assessment

Threat assessment allows an organization to understand what is happening and to signal to others that it is aware of the threat and is planning a response. This can be done by mapping the situation, monitoring social media and other types of media, and verifying the facts surrounding the alleged misinformation event. This should be followed by contacting key partners and institutions, contacting journalists, and then engaging in initial communication to inform the target group that there is a problem, that it has been noticed, and that it is being responded to.

A more detailed public response

Provide an official report that is as transparent as possible; correct the false information, for example, through a frequently asked questions section; include links to trusted external sources; and highlight your organization’s values where appropriate.

Proactive communication

Communicate with key partners and target audiences; make information supporting your position available on your website and ensure it is search engine optimized; use storytelling to make your messages easy to understand and to reinforce them; use opinion leaders; and use existing events, websites, and initiatives to spread your message (there is no time to create new ones).

Retaliation

There are several options here:

  • ignore (if the misinformation has not spread far and its impact is small);
  • report to the police if laws have been broken (only if you are sure this has happened);
  • delete (only if the message has broken the law or the platform’s rules – ideally by contacting the platform); and
  • identify the attacker (only if the attacker is known with certainty and the potential positive impact outweighs the potential harm).

Estonia is also an example of how the authorities can cooperate with other organizations in the fight against disinformation. Government officials have established cooperation with a local fact-checking organization that provides additional capacity for monitoring media outlets and social media for disinformation. It then passes on information about potential disinformation campaigns to the government so that the authorities are aware of trending narratives and can assess whether they need to act against them.

Denmark

An important step in the fight against disinformation is to (re)establish trust between local and regional authorities and their constituents. As numerous studies and reports have shown, the two are closely linked: exposure to fake news undermines trust in both the media and politicians. Trust is also an important part of the constructive model of journalism developed by Danish journalist Gerd Maria May, referred to as STEP (Solutions, Trust, Engagement, Perspective).

STEP’s main message is that journalists should not be “out of touch” with their audiences and that free and independent journalism is essential to the functioning of healthy democracies. Within the STEP model, journalists – without giving up their watchdog role – go beyond the traditional model of their profession by not only investigating problems but also promoting solutions.

The “solutions” part of the model is closely related to another project developed by May in 2019: the Room of Solutions (Løsningernes Rum in Danish), a debating method that contributes to building trust between different local actors by engaging them in debates aimed at finding viable and broadly acceptable solutions to local problems.

Through the involvement of a trained moderator, “finger-pointing” and “blaming” are avoided during the debate, and all participants can make suggestions, raise doubts, or comment on others’ proposals. The solutions that are developed during the participatory debate are designed with a long-term perspective in mind, which is another way to counteract misinformation.

A key element of the Room of Solutions is its voting system. Each person who enters the room is given a green card and a red card, which they are instructed by the moderator to hold up in response to proposals made during the meeting (green to agree, red to disagree).

While it may be too early to assess with certainty the success of the Room of Solutions model, based on the information available, the initiative appears to have the potential to contribute effectively to solving local problems and strengthening local participatory democracy. Importantly, it also has the potential to serve as a tool for increasing mutual trust within local communities and as a means of preventing the spread of misinformation.

Disclaimer

The authors do not work for, consult for, own shares in, or receive funding from any company or organization that would benefit from this article, and have no relevant affiliations.