Explaining artificial intelligence with XAI
Artificial intelligence is expected to produce computers and systems that can operate autonomously. But how do AI systems “think”? How can humans interact with them? And what can we learn from them? The research field that seeks to answer these questions is called XAI, explainable artificial intelligence.

When a computer uses algorithms to “do the programming”, the software often becomes too complex for humans to interpret or explain. You can compare such software to a “black box,” where information is stuffed in at one end and a result pops out at the other. Image: Eevamaria Raudaskoski, @eevagraphics.
In “conventional” software, humans decide what the computer will do. This also means that its logic and functions can be interpreted and explained by humans. But in AI, and specifically in deep learning, the computer uses algorithms to “do the programming”, and the software often becomes too complex for humans to interpret or explain. FOI researcher Linus Luotsinen compares such software to a “black box,” where information is stuffed in at one end and a result pops out at the other.
“But we often want to have some kind of explanation for why the system makes a decision. If one understands how the computer thinks, then in some respects we humans can learn from the computer’s behaviour,” he says.
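To make the contrast concrete, the sketch below compares a hand-written decision rule, whose logic can be read directly from the source code, with a small learned model whose behaviour is encoded only in numeric weights. This is a minimal illustration in Python; the function names and weight values are invented for this example and are not taken from FOI’s systems.

```python
import numpy as np

# "Conventional" software: a human wrote the rule, so every step can be
# read and explained directly from the code.
def brake_conventional(speed: float, distance: float) -> bool:
    """Brake if the obstacle is closer than two seconds of travel."""
    return distance < 2.0 * speed

# A learned "black box": the same kind of decision, but the behaviour is
# spread across numeric weights. The values below are illustrative
# placeholders, not a real trained model.
W1 = np.array([[0.83, -1.42], [-0.51, 0.97]])
b1 = np.array([0.12, -0.33])
W2 = np.array([1.78, -2.05])
b2 = -0.4

def brake_learned(speed: float, distance: float) -> bool:
    hidden = np.tanh(W1 @ np.array([speed, distance]) + b1)
    return float(W2 @ hidden + b2) > 0.0

# Both functions map the same inputs to a yes/no decision, but only the
# first can be explained by reading its source; the second offers no
# human-readable account of "why".
print(brake_conventional(20.0, 35.0), brake_learned(20.0, 35.0))
```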
Players study computers
The lessons that can be drawn have proven extremely valuable in board games such as chess and go. Since a computer now usually wins when playing against a human, the best players today spend more time trying to understand how the computer thinks, many moves ahead, than they do studying other players.
But analyses of how a computer thinks are of course even more important in a military context, where a system’s decisions and recommendations can have a profound impact on human lives. This applies regardless of whether AI is used at the tactical level in surveillance drones or as the basis for decisions by military leaders and politicians.
FOI has therefore initiated XAI research with the objective of creating military AI systems that:
- Support military end-users in creating mental models of how a system functions, so that it can be used securely and effectively;
- Support specialists in gaining insight and extracting knowledge from the often hidden tactical and strategic behaviour of AI systems;
- Obey the rules of war and other international and national laws;
- Support developers in identifying flaws or bugs and addressing them prior to a system’s deployment.
An important aspect of the research is to formulate a framework for evaluating how XAI should be designed for military AI applications.
“The aim is to create an environment where we can evaluate, develop and adapt the XAI techniques that are increasingly being proposed in the research literature. The point is to build FOI’s competence and eventually be able to support the Swedish Armed Forces by providing recommendations and requirements during the acquisition and development of AI systems,” says Linus Luotsinen.
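One family of techniques from that research literature is perturbation-based attribution, where a black-box model is queried repeatedly with slightly altered inputs to see which input features drive its output. The sketch below shows the idea in its simplest form; the model, the data and the function names are toy placeholders chosen for illustration, not part of FOI’s evaluation framework.

```python
import numpy as np

def black_box(x: np.ndarray) -> float:
    # Stand-in for an opaque model: we can only query it, not inspect it.
    w = np.array([2.0, -0.5, 0.0, 1.5])
    return float(1.0 / (1.0 + np.exp(-(w @ x))))

def occlusion_attribution(f, x: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    """Score each input feature by how much the output drops when that
    feature is replaced with a neutral baseline value."""
    reference = f(x)
    scores = np.zeros_like(x)
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = baseline
        scores[i] = reference - f(perturbed)
    return scores

x = np.array([0.8, 0.3, 0.9, 0.1])
print(occlusion_attribution(black_box, x))  # larger magnitude = more influence
```

Published XAI methods refine variants of this basic idea, and an evaluation framework of the kind described above would, among other things, compare how faithfully and consistently such scores reflect what the model actually does.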
Can we trust AI?
This question also implies a major balancing act for future decision-makers. Today, XAI is considered an essential research field for developing an understanding of how these machines “think”, an understanding that will in turn influence the extent to which AI can be used in military applications. But, according to Linus Luotsinen, it may well be that we will never be able to completely understand how an AI system functions.
“Not even the designers of today’s AI systems can explain how they work. The reason we still use AI systems in spite of this is that they can solve problems that we aren’t able to solve using traditional programming. The major challenge facing our politicians and military decision-makers is to assess which types of military AI systems will be allowed. For if an AI-supported military weapons system is shown to work many times better, save lives and make fewer mistakes than equivalent systems without AI, should we allow it to be deployed even if we don’t understand how it works?”