Explainable Artificial Intelligence: Exploring XAI Techniques in Military Deep Learning Applications


Authors:

  • Linus Luotsinen
  • Daniel Oskarsson
  • Peter Svenmarck
  • Ulrika Wickenberg Bolin

Publish date: 2020-02-24

Report number: FOI-R--4849--SE

Pages: 54

Written in: English


Keywords:

  • artificial intelligence
  • explainable AI
  • transparency
  • machine learning
  • deep learning
  • deep neural networks


As a result of advances in artificial intelligence (AI), machine learning and, in particular, deep learning, the explainable artificial intelligence (XAI) research field has recently received considerable attention. XAI focuses on ensuring that the reasoning and decision making of AI systems can be explained to human users. In a military context, such explanations are typically required to ensure that:

  • human users have appropriate mental models of the AI systems they operate,
  • specialists can gain insight and extract knowledge from AI systems and their hidden tactical and strategic behavior,
  • AI systems obey international and national law,
  • developers are able to identify flaws or bugs in AI systems prior to deployment.

The objective of this report is to explore XAI techniques developed specifically to provide explanations for deep learning-based AI systems. Such systems are inherently difficult to explain because the processes they model are often too complex to capture with interpretable alternatives. Even though the deep learning XAI field is still in its infancy, many explanation techniques have already been proposed in the scientific literature. Today's XAI techniques are useful primarily for development purposes (i.e., to identify bugs). More research is needed to determine whether these techniques can also support users in building appropriate mental models of the AI systems they operate, aid tactics development, and ensure that future military AI systems comply with national and international law.
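To make the idea of an "explanation technique" concrete, the following is a minimal toy sketch (not taken from the report) of gradient-based saliency, one family of XAI techniques commonly applied to deep learning models. The model, its weights, and the input are illustrative assumptions; the gradient is estimated with finite differences so the example needs no deep learning framework.

```python
import math

def model(x):
    # Stand-in for a trained network: a fixed sigmoid scoring function.
    # The weights below are arbitrary, chosen only for illustration.
    w = [0.1, 2.0, -0.5]
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-s))

def saliency(x, eps=1e-5):
    # Central finite-difference estimate of d(output)/d(input_i).
    # A larger |gradient| means the feature mattered more for this prediction.
    grads = []
    for i in range(len(x)):
        hi = list(x); hi[i] += eps
        lo = list(x); lo[i] -= eps
        grads.append((model(hi) - model(lo)) / (2 * eps))
    return grads

x = [1.0, 0.5, 2.0]
g = saliency(x)
# For this toy model the gradient is proportional to the weight vector,
# so feature 1 (weight 2.0) dominates the explanation.
most_important = max(range(len(g)), key=lambda i: abs(g[i]))
print(most_important)  # → 1
```

In practice the gradient is obtained by backpropagation through the actual network rather than by finite differences, and the resulting per-input attributions are typically visualized as a heat map over the input (e.g. a saliency map over image pixels).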