Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18449
Title: Improving Projected Gradient Descent based Adversarial Attacks
Authors: Αντωνίου, Νικόλαος
Ποταμιάνος, Αλέξανδρος
Keywords: Deep Learning
Adversarial Machine Learning
Image Classification
Projected Gradient Descent
Adversarial Attacks
Adversarial Defenses
Issue Date: 14-Sep-2022
Abstract: This Diploma Thesis delves into the phenomenon of adversarial examples, which appeared in the literature some years ago and has since radically changed our perception of Neural Networks. In the task of Image Classification, for instance, adversarial examples are images tampered with carefully designed perturbations, small enough to remain undetected by a human observer yet exerting great influence on the final output: they steer the model towards predicting any arbitrary class, while humans can still readily assign the correct label to the distorted image. The emergence of this peculiar phenomenon concerned, and subsequently motivated, researchers to explore ways of enhancing the robustness of Neural Networks against such adversarial inputs. Approaches that attempt to mitigate the downsides of this behaviour are called Adversarial Defenses. Over time, it has become evident that evaluating the robustness of proposed defenses is plagued by a paramount issue: that of Robustness Overestimation. Deciding whether a method really improves robustness requires the defender to solve an optimization problem which, in the space of Neural Networks, is infeasible to solve exactly. Hence, we must resort to approximate solutions of this problem (exactly akin to the training procedure of such networks, cf. the Gradient Descent algorithm). Robustness Overestimation refers to the case where the defender fails to solve this problem sufficiently well, and thus acquires a false sense of the true effectiveness of their method. One of the most popular algorithms for obtaining approximate solutions to the optimization problem underlying the evaluation of Neural Networks' robustness is Projected Gradient Descent (PGD). Among several design choices, the objective function optimized during the iterative PGD process is quite influential, and the literature proposes various alternatives. However, even subtle changes in the mathematical expression of this objective may non-trivially affect the obtained results, given the highly complex geometry of such high-dimensional, non-convex optimization landscapes. In this work, we set this observation (backed up by strong empirical evidence) as the focal point of our research, seeking methods of combining different objectives in the hope of improving the obtained performance. Our experiments empirically demonstrate that a rather simple approach, i.e. switching loss functions during PGD, helps the algorithm yield better final solutions with pronounced consistency, since our findings generalize across 15 different adversarial defenses.
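
To illustrate the idea described in the abstract, the sketch below shows an L-infinity PGD attack that switches its objective partway through the iterations. It is a minimal sketch assuming PyTorch; the particular losses (cross-entropy first, then a margin-based loss), the switch point, and all hyperparameters (eps, alpha, steps) are illustrative assumptions, not the exact configuration studied in the thesis.

import torch
import torch.nn.functional as F


def margin_loss(logits, labels):
    """Difference between the best wrong-class logit and the true-class logit
    (maximizing it pushes the model away from the correct prediction)."""
    true_logit = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    others = logits.clone()
    others.scatter_(1, labels.unsqueeze(1), float("-inf"))  # mask the true class
    best_other = others.max(dim=1).values
    return (best_other - true_logit).mean()


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=40, switch_at=20):
    """PGD that maximizes cross-entropy for the first `switch_at` steps,
    then switches to the margin loss for the remaining steps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()  # random start

    for step in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        loss = F.cross_entropy(logits, y) if step < switch_at else margin_loss(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Ascent step on the current loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    return x_adv.detach()

Usage would look like x_adv = pgd_attack(model, images, labels) for a batch of images in [0, 1] with integer class labels; robustness of a defense is then estimated by the model's accuracy on x_adv.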
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18449
Appears in Collections: Διπλωματικές Εργασίες - Theses

Files in This Item:
File: DT_Antoniou.pdf (7.77 MB, Adobe PDF)


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.