GlobalSIP 2018:

Signal Processing for Adversarial Machine Learning


Recent studies have highlighted the lack of robustness in state-of-the-art machine learning models. For instance, carefully crafted adversarial perturbations of natural images can easily cause modern classifiers trained with deep convolutional neural networks to yield incorrect predictions, even though the resulting adversarial examples remain visually similar to the natural images. This raises critical safety and security concerns for services and applications built on machine learning models. Signal processing and black-box optimization techniques, such as manifold analysis, data transformation, and zeroth-order optimization, are becoming core components of research in adversarial machine learning. They are widely used both to generate powerful adversarial examples that deceive target machine learning models and evade detection, and to provide robust and effective machinery against adversarial examples. This symposium aims to bring together researchers and practitioners from academia and industry to report novel advances and publish high-quality papers, in order to foster the field of signal processing for adversarial machine learning.
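To make the notion of a "carefully crafted adversarial perturbation" concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one standard way such perturbations are generated. The toy linear classifier, the eps value, and all variable names are illustrative assumptions, not part of the symposium material:

```python
# Minimal FGSM sketch on a toy linear classifier (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy linear binary classifier: score(x) = w @ x; predict sign(score).
w = rng.normal(size=100)
x = rng.normal(size=100)          # a "natural" input
y = np.sign(w @ x)                # its (correct) label

# For a linear model with a margin-style loss, the input gradient of the
# loss is proportional to -y * w.
grad = -y * w

# FGSM: take one step in the sign of the gradient, bounded by eps in the
# L-infinity norm, so the perturbed input stays close to the original.
eps = 0.3
x_adv = x + eps * np.sign(grad)

print("original prediction:   ", np.sign(w @ x))
print("adversarial prediction:", np.sign(w @ x_adv))
print("max perturbation:      ", np.max(np.abs(x_adv - x)))
```

Because each coordinate moves by at most eps, the adversarial input stays close to the original even as the classifier's decision flips, which is exactly the visual-similarity property described above.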

Distinguished Symposium Talks


Mingyi Hong

University of Minnesota

Recent Advances of Zeroth-Order Optimization with Applications in Adversarial ML

Abstract
Zeroth-order optimization methods are popular for applications where the gradient and Hessian information of the problem of interest is either too expensive to compute, or where computing such information reveals sensitive details about the underlying model. In these cases, one must assume that the only knowledge about the problem is obtained by querying a "black box," which returns the value of the underlying objective function. Common applications of these methods include simulation-based optimization, online auctions, web advertising, and adversarial machine learning. In this talk, we first review recent algorithmic advances in zeroth-order optimization, including centralized and distributed zeroth-order methods. Second, we review a recent application of these methods to designing adversarial examples for machine learning models. In particular, we show how zeroth-order optimization methods can be properly modified to build powerful black-box adversarial attacks against existing machine learning models. We provide a comprehensive convergence analysis of different types of zeroth-order methods, and illustrate their connections and their empirical performance in generating black-box adversarial examples for robust ML.
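As a hedged illustration of the query-only setting the abstract describes, the sketch below implements a standard random-direction zeroth-order gradient estimator: the gradient is approximated purely from function-value queries to a black box. The quadratic loss, query budget, and function names are stand-in assumptions for illustration:

```python
# Random-direction zeroth-order gradient estimation (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Black box: returns only a scalar loss value, never gradients."""
    return 0.5 * np.sum(x ** 2)   # toy quadratic; its true gradient is x

def zo_gradient(f, x, mu=1e-4, num_queries=50):
    """Estimate grad f(x) as (d / (mu * q)) * sum_i [f(x + mu*u_i) - f(x)] u_i,
    where each u_i is a random direction on the unit sphere."""
    d = x.size
    fx = f(x)
    g = np.zeros(d)
    for _ in range(num_queries):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)            # random unit direction
        g += (f(x + mu * u) - fx) / mu * u
    return (d / num_queries) * g

x = rng.normal(size=20)
g_hat = zo_gradient(f, x)
# For this toy f, the true gradient at x is x itself, so compare directions.
print("cosine similarity to true gradient:",
      g_hat @ x / (np.linalg.norm(g_hat) * np.linalg.norm(x)))
```

In a black-box attack, an estimator of this form would replace the true input gradient inside an otherwise standard gradient-based attack loop, which is the modification the talk refers to.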

Biography
Mingyi Hong received his Ph.D. degree from the University of Virginia in 2011. He is currently an Assistant Professor in the Department of Electrical and Computer Engineering, University of Minnesota. From 2014 to 2017, he was an Assistant Professor in the Department of Industrial and Manufacturing Systems Engineering, Iowa State University. He serves on the IEEE Signal Processing for Communications and Networking (SPCOM) and Machine Learning for Signal Processing (MLSP) Technical Committees. His research interests are primarily in optimization theory and its applications in signal processing and machine learning.



Nicholas Carlini

Google Brain

Making and Measuring Progress in Adversarial Machine Learning

Abstract
The field of adversarial machine learning, despite seeing an amount of work similar to other areas, has shown significantly less visible progress. One of the key driving factors behind the leaps of progress in most areas of deep learning has been the abundance of useful metrics and benchmarks. Unfortunately, measuring progress in adversarial situations is exceptionally difficult, due in part to the impossibility of designing fixed benchmarks. In this talk, I examine how we are, and discuss how I think we should be, measuring progress in the field of adversarial machine learning. I evaluate our current benchmark tasks and explore the ways in which we have, and have not, succeeded at them. I conclude with lessons for designing good metrics that we can draw from other fields.

Biography
Nicholas Carlini is a research scientist at Google Brain, where he studies the security and privacy of machine learning. He has won multiple best paper awards (including one at IEEE S&P and another at ICML), and his work has been widely covered by articles in the New York Times, Science Magazine, and the Communications of the ACM. He received his Ph.D. in computer security from the University of California, Berkeley, in 2018.

Schedule

Thursday, November 29
09:40 - 10:40
DL-AML.1: Nicholas Carlini: "Making and Measuring Progress in Adversarial Machine Learning"
11:00 - 12:00
DL-AML.2: Mingyi Hong: "Recent Advances of Zeroth-Order Optimization with Applications in Adversarial ML"
14:00 - 15:30
AML-L.1: Adversarial Machine Learning I
15:50 - 17:20
AML-P.1: Adversarial Machine Learning II

Organizing Committee

General Chairs

Technical Program Chairs

Submissions are welcome on topics including:

Paper Submission

Prospective authors are invited to submit full-length papers (up to 4 pages for technical content, including figures and any references, with one additional optional fifth page containing only references) and extended abstracts (up to 2 pages, for paper-less industry presentations and ongoing-work presentations). Manuscripts should be original (not submitted or published anywhere else) and written in accordance with the standard IEEE double-column paper template. Accepted full-length papers will be indexed in IEEE Xplore. Accepted abstracts will not be indexed in IEEE Xplore; however, the abstracts and/or presentations will be included in the IEEE SPS SigPort. Accepted papers and abstracts will be scheduled in lecture and poster sessions.

Important Dates

Paper Submission Deadline: June 29, 2018 (extended from June 17, 2018)
Review Results Announced: September 7, 2018
Camera-Ready Papers Due: September 24, 2018
Hotel Room Reservation Deadline: November 5, 2018