Computing Reviews
Making machine learning robust against adversarial inputs
Goodfellow I., McDaniel P., Papernot N. Communications of the ACM 61(7): 56-66, 2018. Type: Article
Date Reviewed: Oct 15, 2018

Machine learning (ML) has become ubiquitous. It is used in numerous important applications, and its use will seemingly only increase. The current ML state of the art can be attributed to “nearly 50 years of research and development in artificial intelligence [AI].” Yet despite these strong credentials, ML algorithms that perform well in “naturally occurring scenarios” often fail, rather dramatically, against adversarial inputs: instances intentionally crafted by an attacker, usually through the subtle modification of legitimate data. In this article, the authors argue that this vulnerability can be attributed to the benign environment in which ML models are typically trained and evaluated; in the usual setting, the threat of an adversary altering the data distribution at either training or test time is simply ignored.
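
To make “subtle modification” concrete, here is a minimal sketch of one well-known attack of this kind, the fast gradient sign method (FGSM), which nudges an input in the direction that most increases the model's loss. The names loss_fn and params are illustrative assumptions for this sketch, not the article's notation.

    import jax
    import jax.numpy as jnp

    def fgsm_perturb(loss_fn, params, x, y, eps):
        """Craft an adversarial input from a clean example (x, y).

        loss_fn(params, x, y) is assumed to be a differentiable scalar loss.
        """
        # Gradient of the loss with respect to the *input*, not the weights.
        grad_x = jax.grad(loss_fn, argnums=1)(params, x, y)
        # Step of size eps in the direction that increases the loss; for
        # small eps the change is imperceptible yet often flips the label.
        return x + eps * jnp.sign(grad_x)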

With this backdrop, the authors discuss different types of attacks on ML models and consider various scenarios in the context of adversarial strength, which is characterized by the attacker's ability to access the training data, parameter values, or even the full model architecture. Each type of attack is described technically, along with comments on tradeoffs in cost, time, success rate, and so on. The authors follow this up with a brief discussion of defense mechanisms that can make an ML system “robust against adversarial inputs”; one widely studied defense is sketched below.
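
As a hedged illustration of the defenses the article surveys, adversarial training augments each training step with attacked versions of the batch, so the model learns to resist the perturbation rather than merely fit the benign distribution. This sketch reuses the hypothetical fgsm_perturb above and assumes a plain gradient-descent update; it is not the authors' specific recipe.

    def adversarial_training_step(loss_fn, params, x, y, eps, lr):
        # Attack the current model to generate hard examples on the fly.
        x_adv = fgsm_perturb(loss_fn, params, x, y, eps)

        # Train on an even mix of clean and adversarial inputs.
        def mixed_loss(p):
            return 0.5 * (loss_fn(p, x, y) + loss_fn(p, x_adv, y))

        grads = jax.grad(mixed_loss)(params)
        return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)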

Finally, the authors point out an important issue with the methodology used in building ML systems. They argue that testing a trained ML model falls short of providing security guarantees: in general, testing only establishes a lower bound on the failure rate, whereas guaranteeing security requires an upper bound. Thus, the authors emphasize the importance of verification in a key insight:

To end the arms race between attackers and defenders, we suggest building more tools for verifying machine learning models; unlike current testing practices, this could help defenders eventually gain a fundamental advantage.
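
Read formally, the testing-versus-verification contrast is a quantifier flip; the epsilon-bounded perturbation model below is an illustrative assumption rather than the article's exact formalism.

    % Testing exhibits witnesses: some small perturbation breaks the model,
    % which only shows the failure rate is at least what was observed.
    \exists\, \delta:\ \|\delta\| \le \epsilon \ \wedge\ f(x + \delta) \ne y
    % Verification must rule out every admissible perturbation, which is
    % what an upper bound on the failure rate requires.
    \forall\, \delta:\ \|\delta\| \le \epsilon \ \Rightarrow\ f(x + \delta) = y
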
Reviewer: M. Sohel Rahman | Review #: CR146278 (1902-0051)
