A year of war in Ukraine: what we learned about disinformation


At first it was the ghost of Kiev. Then the actresses of Mariupol, the fake corpses of Bucha, the misdeeds of the refugees, the US-funded biological weapons laboratories and hundreds of other false, half-false, fabricated and distorted stories, too good to be true or too ugly to be plausible, yet pushed in the belief that public opinion would take them at face value.

A year of war, alongside thousands of deaths and millions of refugees, has also brought a surge in digital disinformation: a deluge that has polluted the news ecosystem and probably shaped how some people perceive the conflict (even if we don't know how many, or by how much). Yet the same deluge, partly because of the greater attention it has received, has also allowed us to see more closely how the machinery of fake news works, especially news of Russian origin, and how we can try to counter it.

Here are some examples.

An airborne ghost and synthetic warfare

If it's too good to be true, then it probably isn't, the saying goes, especially in an era when creating convincing images and videos is within anyone's reach thanks to readily available software. The maxim does not always hold, but in the case of the so-called "ghost of Kiev" it did. The war had just begun when a video of an alleged Ukrainian air force ace shooting down enemy planes began circulating on social networks. The clip ended up being shared by the Ministry of Defense as well.

The "ghost," as the alleged pilot had been dubbed, really was a specter, though a digital one: he did not exist outside a world of pixels. He was the product of what is technically called "synthetic media". Unmasked by, among others, the Associated Press news agency, the video had in fact been made with a flight-simulator video game (the Ukrainian authorities would later admit the entirely "mythical" nature of the character). In war, public opinion needs heroes, and the pilot was a perfect one. Perfect but computer-generated, and therefore fake.

That fake Zelensky

Still on the "synthetic media" front, the war in Ukraine gave us the first use of a deepfake in wartime propaganda, a step towards a scenario long feared by disinformation scholars: one in which artificial intelligence becomes so adept at reproducing the appearance, movements and voice of real, influential figures that it can produce realistic clips easily mistaken for genuine footage, the so-called "deepfakes".

Fortunately, that was not the case with the video, released in the first month of the conflict, of Ukrainian President Volodymyr Zelensky proclaiming surrender. It was indeed a deepfake, but it was also too crude to seriously deceive anyone. The rest was done by the communicative promptness of Ukraine's leader, who quickly denied the content, and did so with another video, so that the comparison was plain. A narrow escape, but only for now: the episode showed that the technology exists and can be used in crisis situations for purposes that are anything but benevolent. Which means that in the future someone, perhaps with more sophisticated products, will try again. Better to be prepared.

Suspicious fact-checking

Fact-checking has long been considered the best countermeasure against disinformation, and digital verification techniques and practices are increasingly popular among journalists and others. But precisely this diffusion carries risks. One of them, which emerged clearly during the year of war, is that someone can appropriate the activity and use it for purposes opposite to those it is meant to serve.

An example? The video shared in March 2022 by some pro-Russian social media accounts, in which they pick apart an alleged clip circulated by Ukrainians: an explosion in the city of Kharkiv which, according to Moscow's digital investigators, dated back to 2017 and not to the ongoing war, as Kiev's supporters claimed. Yellow-and-blue propaganda, at least in appearance, unmasked by what looked like full-blown fact-checking, packaged and carried out with all the usual trappings.

If it weren't for one detail: there is no evidence that the original video, the one of the explosion, ever actually circulated. The fact-checking is there, but the facts to be checked are not. The objective of the supposed verification, then, is not to clarify what is unclear. If anything, it sows further uncertainty and confusion, the best breeding ground for disinformation, all the more so when it is done in the name of restoring the truth.

Neither true nor probable

A hospital that was no longer a hospital. A woman who was really one actress impersonating two patients. Wounds that weren't wounds and shock that wasn't shock, just skilled acting. And then taking it all back, without ever admitting it. But who said that propaganda must be refined and sophisticated? Certainly not the Russian authorities who, as this year of war has taught us, when faced with detailed accusations supported by evidence, have no hesitation in employing any means to deny, obfuscate and overturn them, even at the cost of contradicting themselves.

As in the case of the bombing of the Mariupol children's hospital in the first weeks of the "special operation", when tweets from various Russian embassies around the world began to sow doubts about what had happened, without worrying too much about the plausibility of their statements. The hospital, they said, was no longer operational but occupied by the Azov battalion (this despite the fact that the hospital itself had asked for fuel in the preceding days so as to have enough power to continue operating). The pregnant woman injured and photographed after the explosion, they proclaimed, was in reality an actress paid for the purpose, who had also impersonated another patient lying on a stretcher (although the photographs showed two clearly distinct people).

All false claims, easily disproved by the evidence produced by journalists and Internet users and, above all, contradicted by the interviews with the same woman that some pro-Russian media conducted in the following weeks. In those conversations the actress was no longer an actress but, suddenly, a hospital patient again. Her story became reliable and was used to dismantle the "Western propaganda" about a Russian air strike on the hospital.

Because what is said today can always be changed tomorrow. Disinformation, twelve months of the Russia-Ukraine conflict tell us, does not seek consistency, which is at best an ornament, but the systematic questioning of any source of information. The final goal: generalized distrust.

The open-source answer

There are no magic wands, then, and, if proof were needed, twelve months of war on Europe's borders have provided it once again. But they have also shown that the same technologies that make it so easy to spread fake news can help us ascertain the truth of the facts. It is not easy; it takes time, patience and mastery of specific techniques. But it can be done, and the growing success of Bellingcat, an organization that uses open digital sources to carry out verification work, is there for all to see. Whether it is geolocating an image to establish where an event took place, identifying the type of weapon used from photos of the debris left by an explosion, or working out the time of a firefight from the shadows on the ground, the community of OSINT (open-source intelligence) enthusiasts offers a decisive contribution to clarifying what actually happened, in Ukraine as elsewhere.
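To give a concrete, if simplified, idea of the last of those techniques, the sketch below shows how the shadow-to-height ratio of an object in a photo can narrow down the time of day it was taken. It is a minimal illustration only, not the workflow of Bellingcat or of any specific OSINT team: the latitude, date, object height and shadow length are hypothetical inputs, and the solar-position formula is a rough approximation (it works in local solar time and ignores the equation of time and atmospheric refraction). Real verifications cross-check many more sources.

```python
import math

def solar_elevation_deg(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation (degrees) at a given latitude,
    day of year and local *solar* time (12.0 = solar noon).
    Simplified model: ignores the equation of time and refraction."""
    # Approximate solar declination for the given day of the year
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: 15 degrees per hour away from solar noon
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, decl_r, ha = map(math.radians, (lat_deg, decl, hour_angle))
    sin_elev = (math.sin(lat) * math.sin(decl_r)
                + math.cos(lat) * math.cos(decl_r) * math.cos(ha))
    return math.degrees(math.asin(sin_elev))

def estimate_times_from_shadow(lat_deg, day_of_year,
                               object_height_m, shadow_length_m,
                               step_minutes=1):
    """Return the local solar times (hours) at which the predicted
    shadow-to-height ratio best matches the measured one.
    There are usually two candidates: one morning, one afternoon."""
    target_ratio = shadow_length_m / object_height_m
    best = []
    for start, end in ((0.0, 12.0), (12.0, 24.0)):
        candidates = []
        t = start
        while t <= end:
            elev = solar_elevation_deg(lat_deg, day_of_year, t)
            if elev > 0.5:  # the sun must be above the horizon
                ratio = 1.0 / math.tan(math.radians(elev))
                candidates.append((abs(ratio - target_ratio), t))
            t += step_minutes / 60.0
        if candidates:
            best.append(min(candidates))  # smallest mismatch in this half-day
    return [round(t, 2) for _, t in best]

if __name__ == "__main__":
    # Hypothetical example: a 2 m pole casting a 3 m shadow
    # at roughly Kharkiv's latitude (about 50 N) on day 80 (late March).
    print(estimate_times_from_shadow(50.0, 80, 2.0, 3.0))
```

In practice, investigators combine such an estimate with metadata, weather archives and other footage to converge on a time window, rather than relying on any single calculation.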

It is a contagious example. Some of the best journalism published during this year of war was also produced using OSINT techniques, starting with the New York Times investigations into the crimes committed by the Russian army in Bucha. From the outset, Moscow's propaganda claimed that the bodies found in the city had been placed there after the Russian army abandoned the area. The satellite images examined by the American newspaper, the videos collected in the field and the messages left on social networks by soldiers and found by journalists helped to disprove the Russian reconstruction, to reveal what had actually happened in the town near Kiev, and to identify some of those responsible.

In conclusion, on February 24, 2023 we have the same fears about the risks of disinformation as a year ago, perhaps a few more. But we also have more knowledge of the phenomenon, having watched it at work for so long. And, thanks to investigative work such as that of the New York Times and of Bellingcat, we know we have a few more tools in our toolbox to help shed light on what is happening in the world.





Source: tg24.sky.it