Pursuit of strategies to expose fabricated media
It is becoming increasingly easy to create fake images, text and videos so realistic that they are difficult to distinguish from reality. There is a growing risk that the technology will be misused by malicious actors. The FOI report, Detection of Fabricated Media, surveys how fake media can be detected.
During the United States presidential election campaign in 2020, a video clip was widely circulated. In the clip, Joe Biden steps onto a stage and shouts “Hello, Minnesota!” to the audience, which makes him appear confused, since the sign in the background says Tampa, Florida. The clip was viewed more than a million times on Twitter before it was revealed to be manipulated. The then presidential candidate really was in Minnesota, but the background of the video had been altered.
The incident is an example of how fabricated media can be created and used for harmful purposes. This is likely to become more common in the future, according to Fredrik Johansson, deputy research director at FOI’s Defence Technology Division and one of the authors of the report, Detection of Fabricated Media.
“We know that there are actors who have invested substantial resources in producing fake media that has then been spread on social platforms to promote certain narratives and create political division. Many people are worried that new technology will make it even easier to automate this type of abuse,” says Fredrik Johansson.
Detecting advanced fabrications is difficult
Fabricated media has become increasingly sophisticated as a result of developments in artificial intelligence (AI) and so-called generative modelling. By analysing large amounts of data, AI models can be trained to produce new, artificial content that appears plausible and realistic, such as news articles and pictures of people.
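As a minimal sketch of what generative text modelling means in practice, the snippet below uses the Hugging Face transformers library and the publicly available gpt2 checkpoint to continue a prompt with invented but plausible-sounding text. It is only an illustration of the general idea described above, not of the specific techniques analysed in the FOI report.

```python
# Minimal sketch: generating plausible-looking text with a pretrained model.
# Assumes the Hugging Face "transformers" library and the public gpt2 model;
# this is an illustration, not the report's methodology.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Officials confirmed on Tuesday that"
samples = generator(prompt, max_new_tokens=40, do_sample=True,
                    num_return_sequences=2)

for sample in samples:
    # Each sample is a plausible-sounding continuation invented by the model.
    print(sample["generated_text"])
```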
FOI’s report examines different methods for detecting fabricated media in the form of text, images, speech and video. The researchers’ assessment is that current methods are capable of detecting fabricated media created with the most easily accessible approaches.
“But in our study we see that if someone applies more effort by, for example, further training the AI models with more data, or uses new techniques to hide the traces of the fabricated information, the detection methods fail quite quickly,” says Fredrik Johansson.
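To give a concrete sense of what a simple statistical detector looks like, the toy sketch below trains a classifier to separate human-written from machine-generated text using scikit-learn. The two text lists are placeholders; a real detector would be trained on large corpora and, as the quote above notes, such detectors tend to fail when the generating model is further trained or its traces are deliberately hidden.

```python
# Toy illustration of a fabricated-text detector, assuming scikit-learn.
# Placeholder data only; not the detection methods evaluated in the report.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = ["placeholder human-written sentence one.",
               "placeholder human-written sentence two."]
generated_texts = ["placeholder machine-generated sentence one.",
                   "placeholder machine-generated sentence two."]

texts = human_texts + generated_texts
labels = [0] * len(human_texts) + [1] * len(generated_texts)  # 0 = human, 1 = generated

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# Estimated probability that a new text is machine-generated,
# according to this toy model.
print(detector.predict_proba(["some new text to check"])[0][1])
```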
There is much to suggest that state actors are behind several of these cases, as part of influence operations. Even if the public is not fooled by every manipulated image or text, in the long run such fabrications can contribute to uncertainty and destabilisation.
Playing cat and mouse
So far, more research has been conducted on methods for creating fabricated media than on ways to detect it, according to Fredrik Johansson.
“Now, the research world has woken up to the problem, since the risks are obvious. But researchers are a couple of years behind. It will be a kind of game of cat-and-mouse.”
However, FOI also sees opportunities in the new technology. Generative modelling can be used, among other things, to improve the Swedish Armed Forces’ exercise scenarios.
“From the perspective of the Armed Forces, we are interested in how such models can be applied to make simulator-based exercises more realistic,” says Fredrik Johansson.