Thursday, 20 June 2019

Using game theory to model poisoning attack scenarios

Poisoning attacks are among the most serious security threats to machine learning (ML) models. In this type of attack, an adversary who controls a fraction of the training data injects malicious data points to degrade the model's performance.
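As a minimal illustration of the idea (a sketch of my own, not code from the article), the snippet below trains a toy nearest-centroid classifier on clean data, then lets an attacker inject mislabeled points that drag one class centroid across the decision boundary. Everything here, including the data and the injection strategy, is a made-up example using only NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated 2-D Gaussian classes.
n = 200
X0 = rng.normal(loc=[-2.0, 0.0], scale=1.0, size=(n, 2))
X1 = rng.normal(loc=[+2.0, 0.0], scale=1.0, size=(n, 2))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

def fit_centroids(X, y):
    """A nearest-centroid 'model': one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

def accuracy(centroids, X, y):
    return float((predict(centroids, X) == y).mean())

# Baseline: train and evaluate on the clean data.
clean_acc = accuracy(fit_centroids(X, y), X, y)

# Poisoning: the adversary injects 150 points deep inside class 1's
# region but labels them as class 0, pulling class 0's centroid
# across the decision boundary.
X_poison = rng.normal(loc=[+8.0, 0.0], scale=0.5, size=(150, 2))
X_train = np.vstack([X, X_poison])
y_train = np.concatenate([y, np.zeros(150, dtype=int)])

# The model is fit on the poisoned set but evaluated on clean data.
poisoned_acc = accuracy(fit_centroids(X_train, y_train), X, y)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```

Even though the attacker never touches the clean points, shifting a single centroid is enough to collapse accuracy on the original data, which is exactly the kind of attacker/defender trade-off a game-theoretic model tries to capture.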
