The Pentagon hacks its own artificial intelligence systems to improve them

In June 2017, the People's Republic of China launched its development plan for a new generation of artificial intelligence, with the aim of becoming the main center of innovation in the sector by 2030. Only three months later, Russia also came out into the open with a declaration by its President, Vladimir Putin: "Whoever develops the best artificial intelligence will become the master of the world."

This was only the beginning of a genuine technological race, described by some as the "challenge of the century," from which the United States naturally could not back down. In 2018, therefore, the US Department of Defense created the Joint Artificial Intelligence Center (JAIC), a center of excellence intended to give a strong push to AI in the military field and transform it into a powerful resource in support of the US armed forces. The main objective of this group of experts is to defend the American technological advantage over China and Russia in one of the hottest fields of recent years.

The Pentagon's interest in this technology stems from seeing it as a weapon potentially capable of deceiving and dominating any opponent. Without the right attention, however, it risks turning into a double-edged sword, offering enemies sensational advantages.

Machine learning, artificial intelligence and possible threats

At the base of artificial intelligence algorithms lies machine learning, understood as the ability to learn to perform specific tasks and to improve performance on the basis of available data and accumulated experience. The programmer no longer has to spell out the rules the machine must follow: the machine derives them itself, starting from the data. This learning process, however, does not always guarantee that the established goal is reached. When the training data are chosen carelessly, the resulting models often assume unpredictable and undesirable behavior.
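To make the contrast with traditional software concrete, here is a minimal sketch, assuming a toy vehicle-classification task with invented measurements: the programmer supplies labeled examples, and the classifier infers its own decision rule instead of following hand-written if-then logic.

```python
# Minimal sketch: the "rules" are learned from data rather than written
# by the programmer. The vehicle scenario and numbers are illustrative
# assumptions, not any real defense pipeline.
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [length_m, width_m] of observed vehicles.
X_train = [[7.9, 3.7], [6.9, 3.6], [9.5, 3.5],   # tanks
           [4.5, 1.8], [4.9, 1.9], [4.2, 1.7]]   # cars
y_train = ["tank", "tank", "tank", "car", "car", "car"]

# No one writes an "if length > 6 then tank" rule: the classifier
# derives its own decision boundary from the examples.
model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.predict([[8.1, 3.6]]))  # -> ['tank']
```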

"Although machine learning techniques are even a billion times superior to traditional software for different applications, they risk failing in absolutely different ways than traditional ones"

Gregory Allen, director of strategy and communications at JAIC

Suppose an algorithm is trained to recognize enemy vehicles in satellite images. In learning the distinguishing characteristics of the vehicle in question, the algorithm may also pick up an association between the vehicle itself and features of the surrounding environment, such as colors or frequently co-occurring objects. At that point, an enemy could exploit this associative power in its favor by suitably changing the scenery around its vehicles. Furthermore, should the training dataset end up in enemy hands, the adversary could modify it at will, adding arbitrary disturbing elements to the photos.
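A hedged sketch of that failure mode, using an invented two-feature dataset (vehicle length plus a background indicator) rather than real satellite imagery: because every tank in the training set happens to sit on sand, the model can latch onto the background instead of the vehicle, and simply re-staging the scene shifts the prediction.

```python
# Spurious-association sketch. Features and data are hypothetical:
# [vehicle_length_m, background (1 = sand, 0 = asphalt)].
# Lengths overlap between classes, so the only perfectly predictive
# training feature is the background -- a shortcut an adversary can exploit.
from sklearn.linear_model import LogisticRegression

X_train = [[7.0, 1], [5.0, 1], [6.0, 1],   # tanks, always photographed on sand
           [6.5, 0], [5.5, 0], [4.8, 0]]   # cars, always on asphalt
y_train = ["tank", "tank", "tank", "car", "car", "car"]

model = LogisticRegression().fit(X_train, y_train)

# Re-staging the scene (same tank, asphalt background) changes the answer:
print(model.predict([[7.0, 1]]))  # -> likely ['tank']
print(model.predict([[7.0, 0]]))  # -> likely ['car']: it learned the background
```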

Attacks on machine learning algorithms are already a serious problem, especially in fraud detection systems. A famous victim was Tay, a Microsoft chatbot launched in 2016 that answered questions based on the conversations it had previously had. Exploiting this feature, many users taught the chatbot abusive and racist messages, forcing its developers to suspend it after only 24 hours to introduce changes.
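The mechanism behind the Tay incident can be sketched as feedback-loop poisoning: a model that keeps learning from user interactions can be steered by coordinated malicious input. The tiny corpus, labels, and model below are illustrative assumptions, not Microsoft's actual system.

```python
# Feedback-loop poisoning sketch: an online learner that trusts
# user-supplied examples can be retrained by a coordinated group.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vec = HashingVectorizer(n_features=2**10)
clf = SGDClassifier(random_state=0)

# Initial, benign training round.
benign = ["have a nice day", "thanks for the chat", "you are awful", "I hate you"]
labels = ["friendly", "friendly", "hostile", "hostile"]
clf.partial_fit(vec.transform(benign), labels, classes=["friendly", "hostile"])
print(clf.predict(vec.transform(["I hate you"])))  # -> likely ['hostile']

# Coordinated users now "teach" the bot that hostile text is friendly.
poison = ["I hate you", "you are awful"] * 50
clf.partial_fit(vec.transform(poison), ["friendly"] * len(poison))
print(clf.predict(vec.transform(["I hate you"])))  # -> likely flips to ['friendly']
```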

The need for a new team within the JAIC to prevent external attacks


Algorithms called on to make critical decisions during military missions, or to provide support services to the armed forces such as procurement, clearly require well-tested protection systems. For this reason the “Test and Evaluation Group” was recently formed within the JAIC: a team dedicated to probing the vulnerabilities of its internal systems.

The goal, of course, is to simulate a real external hacking attempt in order to anticipate the opponent's moves, searching for the weak points of already-trained models. Although the general workings of the various artificial intelligence models are clear, it is often impossible to give a clear explanation of why they behave in a certain way, which makes their final output far from easy to predict. The approach adopted is therefore trial and error: changes are applied to the inputs of the algorithm in order to understand how they affect the model's performance.
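In black-box terms, that trial-and-error probing might look like the following sketch, where the model, data, and features are all placeholders: nudge one input at a time and record how strongly the output shifts.

```python
# Black-box probing sketch: treat the trained model as opaque, perturb
# each input feature, and measure how the predicted probability moves.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # 4 anonymous input features
y = (X[:, 2] > 0).astype(int)        # only feature 2 actually matters here
model = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0].copy()
base = model.predict_proba([x0])[0, 1]

# Perturb one feature at a time and watch the output move.
for i in range(4):
    x = x0.copy()
    x[i] += 0.5
    delta = model.predict_proba([x])[0, 1] - base
    print(f"feature {i}: delta_p = {delta:+.3f}")
# Large |delta_p| flags the inputs an attacker would target first.
```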

Thus, while it is highly likely that the Pentagon, like the defense departments of the other major powers, is developing offensive capabilities to threaten adversary systems, it would be unthinkable not to strengthen its defense mechanisms at the same time.

"The offensive option can be exploited, but you have to make sure it can't be used against us," said Allen. In order to make the most of this new and powerful technology, therefore, one cannot limit oneself to the offensive phase alone. After all, you know: the best attack is always the defense.

Curated by Giovanni Maida
