28 April 2020

Arms race in cyberspace

By using AI, machines can learn to do things that far surpass human capabilities. The technology can serve both good and malicious ends, and an arms race is currently underway between those who use AI to defend systems and those who use it to attack them. These are some of the conclusions of a report from FOI.

A circuit board. AI can be misused for criminal purposes, such as unauthorised access to and interference with systems. Image: Marc Bruce.

Artificial intelligence (AI) involves the automation of tasks that until now have been performed by people. AI is seen as the solution to many future challenges, but it can also be misused for criminal purposes, for example to gain unauthorised access to and interfere with systems. Although this is still relatively uncommon, FOI’s researchers have studied several cases in which AI has been used in cyberattacks. One example is the AI programme “CyberLover”, which automatically created behavioural profiles of individuals in dating chat rooms and sent individually customised messages with malicious links to the chat’s participants.

There are also AI systems that can pass the security tests known as CAPTCHAs, a kind of reverse Turing test in which the user must prove that they are a person and not a machine.

“A CAPTCHA typically means that you get a number of images and have to mark those that contain cars, for example. Or else you get some strange-looking letters, which you then have to type in correctly. AI can behave like a person and pass several kinds of tests like these in order to gain access to and attack web pages,” says Erik Zouave, an analyst in FOI’s Defence Analysis Division.
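
To make it concrete why such tests fall to machines: recognising distorted characters is a standard image-classification task. The sketch below is a minimal illustration, not code from the FOI report; it trains an off-the-shelf classifier on scikit-learn’s bundled digit images, which stand in here for segmented CAPTCHA characters. A real solver would also need to locate and separate the characters in a noisy image.

```python
# Minimal sketch: character recognition of the kind that defeats text
# CAPTCHAs. scikit-learn's bundled 8x8 digit images stand in for
# segmented CAPTCHA characters; the classification step itself is
# routine machine learning.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # 1797 labelled digit images
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Typically well above 90% accuracy: machine reading of distorted
# glyphs is exactly the task CAPTCHAs assume only humans can do.
print("accuracy:", clf.score(X_test, y_test))
```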

Important to be prepared

Because of its ability to gather and analyse data faster than humans can, AI can also be used to scan for information about vulnerabilities that is reported on the internet.
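
As a rough illustration of how little machinery such scanning requires, the snippet below polls NIST’s public National Vulnerability Database for entries matching a product keyword. This sketch is illustrative rather than taken from the report; the endpoint and response fields follow the NVD REST API version 2.0 as publicly documented, and the keyword is a placeholder.

```python
# Sketch of automated vulnerability scanning: poll a public
# vulnerability feed for entries matching a product of interest.
# Uses NIST's NVD REST API (v2.0) and its keywordSearch parameter;
# the response layout below follows that API's documented schema.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves(keyword: str, limit: int = 5) -> list[tuple[str, str]]:
    """Return (CVE id, English description) pairs matching a keyword."""
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    results = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        desc = next(
            (d["value"] for d in cve["descriptions"] if d["lang"] == "en"),
            "",
        )
        results.append((cve["id"], desc))
    return results

for cve_id, desc in recent_cves("openssl"):   # placeholder keyword
    print(cve_id, "-", desc[:80])
```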

“AI efficiently finds the intelligence that an antagonist wants. In order to conceal a cyberattack, AI can also imitate a targeted network’s behavioural patterns so that, from the outside, the network appears to be behaving normally, making the attack difficult to detect,” explains Erik Zouave.
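
The evasion Zouave describes can be illustrated with a toy detector. In the sketch below, which is my illustration and not from the report, a standard anomaly detector (scikit-learn’s IsolationForest, standing in for whatever model a defender actually runs) is trained on synthetic “normal traffic” features. Traffic drawn from the same statistical profile passes unnoticed, while a crude burst is flagged; the feature values are invented for illustration.

```python
# Sketch of why mimicry works: an anomaly detector learns a network's
# normal behaviour, so attack traffic drawn from the same statistical
# profile scores as normal. The two synthetic features stand in for
# traffic measurements such as bytes/s and packets/s.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[500, 40], scale=[50, 5], size=(1000, 2))

detector = IsolationForest(random_state=0).fit(normal)

crude_attack = np.array([[5000, 400]])    # burst far outside the baseline
mimicry = rng.normal(loc=[500, 40], scale=[50, 5], size=(1, 2))  # blends in

print("crude attack:    ", detector.predict(crude_attack))  # expect [-1] = anomaly
print("mimicked traffic:", detector.predict(mimicry))        # expect [ 1] = "normal"
```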

There are several AI-supported cyber security tools on the market today. At the same time, research shows that such security solutions may not be enough to protect against AI-supported attackers. Instead, an arms race is underway between those who want to secure systems and those who want to attack them. This is at any rate the case at the scientific level, where some researchers try to develop a variety of safeguards while others play the antagonist and try every possible way to get past them, both sides aided by AI.
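
On the attack side of that research, the canonical demonstration is the adversarial example: an input nudged just enough to flip a model’s decision. The sketch below mounts a gradient-sign-style evasion against a simple linear classifier; the dataset and model are illustrative choices of mine, not anything described in the FOI report.

```python
# Minimal sketch of the attacker's side of the ML arms race: a
# gradient-sign evasion attack on a linear classifier, in the spirit
# of the adversarial-examples literature. For a linear model the input
# gradient of the loss is proportional to the weight vector, so a small
# uniform nudge against the weights pushes a sample across the decision
# boundary. The digits data is just a convenient stand-in.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
mask = (y == 0) | (y == 1)                 # reduce to a two-class problem
X, y = X[mask], y[mask]

clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[:1]                                  # one correctly classified sample
w, b = clf.coef_[0], clf.intercept_[0]
score = w @ x[0] + b                       # negative => class 0, positive => class 1

step = np.sign(w) * (1.0 if score < 0 else -1.0)   # direction toward the boundary
eps = 1.1 * abs(score) / np.abs(w).sum()           # just enough to cross it
x_adv = x + eps * step                     # small uniform per-pixel nudge

print("clean prediction:      ", clf.predict(x)[0])
print("adversarial prediction:", clf.predict(x_adv)[0])
print("per-pixel change:      ", round(eps, 3), "on a 0-16 intensity scale")
```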

“At this early stage we can’t take for granted that it’s going to be possible to protect oneself against AI with the aid of AI. Instead, defenders have to prioritise measures that protect against attacks that can potentially be automated at scale. We’re soon going to be in a world where we’ll probably begin to see more of these kinds of attacks, and therefore it’s important to be prepared for how they should be countered,” says Erik Zouave.