Detection of Fabricated Media

Authors:
  • Fredrik Johansson
  • Andreas Horndahl
  • Hanna Lilja
  • Marianela Garcia Lozano
  • Lukas Lundmark
  • Magnus Rosell
  • Harald Stiff

Publish date: 2021-04-16

Report number: FOI-R--5132--SE

Pages: 78

Written in: English

Keywords:
  • AI
  • deep fakes
  • detection
  • deep learning
  • generative models
  • language models

Abstract:
Breakthroughs in artificial intelligence and generative modeling have led to unprecedented possibilities to fabricate, i.e., automatically create and manipulate, digital media such as images, videos, and texts. These techniques have legitimate applications, for example in the movie industry and in allowing speech-impaired users to communicate with their own original voice. However, generative models can also be used for dubious purposes, including criminal extortion and fraud, as well as influence operations carried out by state actors. This makes the detection of fabricated digital media an important field of study. This report presents a survey of detection methods for text, image, speech, and video. It also discusses their robustness against noise and compression, their ability to generalize to data distributions other than those available during training, the extent to which their predictions are explainable, and whether they can be expected to work in the wild. The overall assessment is that existing detection methods are able to detect data from many of the pre-trained generative models available to nearly anyone, but that attackers who in various ways adjust these generative models, or the data they generate, are unlikely to be caught by existing detection methods.