Behaviour Modelling with Imitation Learning

Authors:

  • Farzad Kamrani
  • Mika Cohen
  • Fredrik Bissmarck
  • Peter Hammar

Publish date: 2020-01-20

Report number: FOI-R--4890--SE

Pages: 34

Written in: Swedish

Keywords:

  • Artificial intelligence
  • machine learning
  • deep learning
  • simulation
  • data-driven behaviour modelling
  • computer generated forces
  • imitation learning
  • behaviour cloning
  • interactive imitation learning
  • inverse reinforcement learning

Abstract

This report describes a family of methods for behavioural modelling. All methods are based on the intuition that it is easier to demonstrate a desired behaviour than to formally define it. For example, it is easier for a driver to demonstrate how to change lanes and overtake than to manually engineer behavioural models for lane changing and overtaking. Using these methods, known as imitation learning, a behavioural model is automatically inferred from demonstrated behaviour. In addition to the usual benefits that come with automation (models can be developed more efficiently), imitation learning also holds the promise of producing more realistic models. Models that exhibit a more natural and human-like behaviour are considered particularly desirable in military training simulators. The aim of the work is to investigate and evaluate whether and how imitation learning can be used to make computer generated forces more realistic and accessible for the Swedish Armed Forces' simulators. This report explores imitation learning methods, including behaviour cloning, interactive imitation learning and inverse reinforcement learning. The report also describes a series of experimental evaluations of these methods.
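
To illustrate the core idea of inferring a behavioural model from demonstrations, the sketch below shows behaviour cloning reduced to ordinary supervised learning on recorded (state, action) pairs. It is a minimal illustrative example, not the report's implementation: the state and action dimensions, the network architecture, and the synthetic data are assumptions made for the sketch.

```python
# Minimal behaviour-cloning sketch (illustrative assumptions, not the report's code).
# Assumed setup: 8-dimensional continuous states, 4 discrete actions, and a small
# feed-forward policy trained to reproduce the demonstrated action in each state.

import torch
import torch.nn as nn

# Synthetic stand-in for expert demonstrations; in practice these would be
# recorded from a human operator controlling the simulated entity.
states = torch.randn(1000, 8)            # demonstrated states
actions = torch.randint(0, 4, (1000,))   # demonstrated (discrete) actions

policy = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 4),                    # logits over the 4 actions
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Behaviour cloning is plain supervised learning: minimise the classification
# loss between the policy's predicted action and the expert's action.
for epoch in range(20):
    optimizer.zero_grad()
    logits = policy(states)
    loss = loss_fn(logits, actions)
    loss.backward()
    optimizer.step()

# The trained policy can then be queried for an action in a new state.
with torch.no_grad():
    new_state = torch.randn(1, 8)
    action = policy(new_state).argmax(dim=-1)
```

Interactive imitation learning and inverse reinforcement learning, which the report also covers, address a known weakness of this plain supervised approach: small prediction errors can drive the agent into states never seen in the demonstrations.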