Machine learning (ML) has become ubiquitous. It is used in numerous important applications, and its use will seemingly only increase. The current ML state of the art can be attributed to “nearly 50 years of research and development in artificial intelligence [AI].” Despite these strong credentials and widespread use, ML algorithms, especially in “naturally occurring scenarios,” often fail dramatically against adversarial inputs: instances intentionally crafted by an adversary, usually through subtle modification of the original data. In this article, the authors argue that such vulnerability can be attributed to the benign environment in which ML models are trained and evaluated: in the usual setting, the threat of an adversary altering the data distribution at either training or testing time is simply ignored.
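To make “subtle modification of the original data” concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known way such adversarial inputs are generated. The toy logistic-regression model and all numeric values below are illustrative assumptions, not taken from the article:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast gradient sign method against a logistic-regression model.

    The gradient of the cross-entropy loss with respect to the input x
    is (p - y) * w; FGSM nudges every coordinate of x by eps in the
    direction of that gradient's sign, maximizing loss increase under
    an L-infinity budget of eps.
    """
    p = sigmoid(np.dot(w, x) + b)      # model's predicted probability
    grad_x = (p - y) * w               # dLoss/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)

# Toy model: x is classified correctly (p > 0.5 for label y = 1) ...
w = np.array([3.0, -3.0])
b = 0.0
x = np.array([0.2, -0.1])
p_clean = sigmoid(np.dot(w, x) + b)    # sigmoid(0.9) ~ 0.71

# ... but a small perturbation (eps = 0.25 per coordinate) flips it.
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.25)
p_adv = sigmoid(np.dot(w, x_adv) + b)  # sigmoid(-0.6) ~ 0.35
```

The point of the sketch is that the perturbation is bounded coordinate-wise, so the adversarial example stays close to the original input while the prediction crosses the decision boundary.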
With this backdrop, the authors discuss different types of attacks on ML models and consider various scenarios in the context of adversarial strength, characterized by the adversary's ability to access the training data, the parameter values, or even the full model architecture. The attack types are described technically, along with comments on tradeoffs involving cost, time, success rate, and so on. The authors follow this discussion of possible attacks with a brief treatment of defense mechanisms that can make an ML system “robust against adversarial inputs.”
Finally, the authors point out an important issue with the methodology used in building ML systems. They argue that testing a trained ML model falls short of providing security guarantees: testing can only establish a lower bound on the failure rate (the failures one happens to find), whereas a guarantee of security requires an upper bound. Thus, the authors emphasize the importance of verification in a key insight:
To end the arms race between attackers and defenders, we suggest building more tools for verifying machine learning models; unlike current testing practices, this could help defenders eventually gain a fundamental advantage.
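For a linear classifier, verification of the kind the authors advocate can be done exactly, which makes the lower-bound/upper-bound distinction concrete. The following sketch, with an assumed toy model not taken from the article, certifies that no perturbation within a given L-infinity budget can change a prediction:

```python
import numpy as np

def certify_linear(x, w, b, eps):
    """Exact robustness certificate for a linear classifier.

    Any perturbation delta with ||delta||_inf <= eps can change the
    logit w.x + b by at most eps * ||w||_1. So if |w.x + b| exceeds
    that worst-case shift, the prediction is provably unchanged for
    EVERY such perturbation -- an upper bound of zero on the failure
    rate within the budget, not just "no failures found by testing".
    """
    logit = np.dot(w, x) + b
    worst_case_shift = eps * np.sum(np.abs(w))
    return abs(logit) > worst_case_shift

# Toy model: logit = 0.9 and ||w||_1 = 6.
w = np.array([3.0, -3.0])
b = 0.0
x = np.array([0.2, -0.1])

safe_small = certify_linear(x, w, b, eps=0.1)   # True: 0.1 * 6 = 0.6 < 0.9
safe_large = certify_linear(x, w, b, eps=0.25)  # False: 0.25 * 6 = 1.5 > 0.9
```

When the certificate fails (as with `eps=0.25` above), nothing is guaranteed either way; real verification tools for deep networks compute looser but analogous bounds.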