Enhancing Resilience of Deep Learning Networks By Means of Transferable Adversaries

M. Seiler, H. Trautmann, P. Kerschke, in: Proceedings of the International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 2020, pp. 1–8.

No fulltext has been uploaded.
Conference Paper | English
Abstract
Artificial neural networks in general, and deep learning networks in particular, have established themselves as popular and powerful machine learning algorithms. While the often enormous sizes of these networks are beneficial when solving complex tasks, the sheer number of parameters also makes such networks vulnerable to malicious behavior such as adversarial perturbations. These perturbations can change a model's classification decision. Moreover, while single-step adversaries can easily be transferred from network to network, the transfer of more powerful multi-step adversaries has usually been rather difficult. In this work, we introduce a method for generating strong adversaries that can easily (and frequently) be transferred between different models. This method is then used to generate a large set of adversaries, based on which the effects of selected defense methods are experimentally assessed. Finally, we introduce a novel, simple, yet effective approach to enhance the resilience of neural networks against adversaries and benchmark it against established defense methods. In contrast to existing methods, our proposed defense approach is much more efficient, as it requires only a single additional forward pass to achieve comparable performance.
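The "single-step adversaries" mentioned in the abstract are conventionally generated with the Fast Gradient Sign Method (FGSM), which perturbs an input by a small amount in the direction of the sign of the loss gradient. The sketch below is purely illustrative and not taken from the paper; the function name fgsm_attack, the epsilon value, and the toy model are assumptions.

import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    # Illustrative single-step (FGSM-style) adversary, not the paper's method:
    # x_adv = x + epsilon * sign(grad_x loss), clamped to the valid input range.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Example usage with a hypothetical toy classifier and random data:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)       # batch of MNIST-sized inputs
y = torch.randint(0, 10, (4,))     # integer class labels
x_adv = fgsm_attack(model, x, y)   # perturbed inputs of the same shape

Multi-step attacks (e.g. PGD) iterate such a step several times, which makes them stronger but, as the abstract notes, usually harder to transfer between models.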
Publishing Year
2020
Proceedings Title
Proceedings of the International Joint Conference on Neural Networks (IJCNN)
Page
1–8
Cite this

Seiler M, Trautmann H, Kerschke P. Enhancing Resilience of Deep Learning Networks By Means of Transferable Adversaries. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN). ; 2020:1–8. doi:10.1109/IJCNN48605.2020.9207338
Seiler, M., Trautmann, H., & Kerschke, P. (2020). Enhancing Resilience of Deep Learning Networks By Means of Transferable Adversaries. Proceedings of the International Joint Conference on Neural Networks (IJCNN), 1–8. https://doi.org/10.1109/IJCNN48605.2020.9207338
@inproceedings{Seiler_Trautmann_Kerschke_2020, place={Glasgow, UK}, title={Enhancing Resilience of Deep Learning Networks By Means of Transferable Adversaries}, DOI={10.1109/IJCNN48605.2020.9207338}, booktitle={Proceedings of the International Joint Conference on Neural Networks (IJCNN)}, author={Seiler, Moritz and Trautmann, Heike and Kerschke, Pascal}, year={2020}, pages={1–8} }
Seiler, Moritz, Heike Trautmann, and Pascal Kerschke. “Enhancing Resilience of Deep Learning Networks By Means of Transferable Adversaries.” In Proceedings of the International Joint Conference on Neural Networks (IJCNN), 1–8. Glasgow, UK, 2020. https://doi.org/10.1109/IJCNN48605.2020.9207338.
M. Seiler, H. Trautmann, and P. Kerschke, “Enhancing Resilience of Deep Learning Networks By Means of Transferable Adversaries,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN), 2020, pp. 1–8, doi: 10.1109/IJCNN48605.2020.9207338.
Seiler, Moritz, et al. “Enhancing Resilience of Deep Learning Networks By Means of Transferable Adversaries.” Proceedings of the International Joint Conference on Neural Networks (IJCNN), 2020, pp. 1–8, doi:10.1109/IJCNN48605.2020.9207338.
