Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19042
Title: Adversarial Robustness Strategies for Neural Networks in Hardware
Authors: Πατεράκης, Αλέξανδρος
Σούντρης, Δημήτριος
Keywords: Neural Networks
Image classification
Robustness
Hardware
Versal
Edge Computing
Preprocessing
Adversarial Attack
Issue Date: 2-Apr-2024
Abstract: While Neural Networks have been researched for many years, they have experienced a significant surge in popularity only in the last decade. This surge has helped revolutionize sectors ranging from medicine and agriculture to automotive automation, while their capabilities in object detection, computer vision, natural language processing, and decision-making have led to multiple breakthroughs with exceptional levels of accuracy and efficiency. Despite this rapid growth, however, their security and robustness, which are key requirements for Neural Networks, are often overlooked. Even as Neural Networks become part of everyday life, their susceptibility to adversarial attacks raises concerns about their reliability and safety. Adversarial attacks such as HopSkipJump, which exploit model vulnerabilities by introducing small, imperceptible perturbations into the input data, are a major flaw in safety-critical applications such as autonomous vehicles. This thesis addresses this critical issue by investigating strategies to enhance the robustness of image classification networks against adversarial attacks. Additionally, it considers the impact of hardware deployment on a Neural Network's performance, exploring how hardware resources, including the Versal platform, can be leveraged to enhance the robustness of models in edge computing environments and to emulate real-world deployment. Specifically, this work focuses on popular lightweight model architectures, namely ResNet20, ResNet56, and MobileNetV2. Using the CIFAR-10 and Fashion-MNIST datasets, a series of techniques, including adversarial retraining, preprocessing, and quantization, is applied to these models to improve their robustness. Furthermore, the impact of these mechanisms is quantified using the PSNR metric, demonstrating significant improvements in robustness, with enhancements ranging from 37% to 49%, thus contributing to the development of more secure and reliable Neural Networks for practical applications.
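
The abstract uses PSNR to quantify how visible an adversarial perturbation is. As a point of reference, below is a minimal NumPy sketch of the standard PSNR computation between a clean image and a perturbed copy; the function name, the random example images, and the 8-bit value range are illustrative assumptions, not details taken from the thesis.

    import numpy as np

    def psnr(clean: np.ndarray, perturbed: np.ndarray, max_val: float = 255.0) -> float:
        """Peak Signal-to-Noise Ratio in dB; higher values mean the
        perturbation is closer to imperceptible."""
        mse = np.mean((clean.astype(np.float64) - perturbed.astype(np.float64)) ** 2)
        if mse == 0.0:
            return float("inf")  # images are identical
        return 10.0 * np.log10((max_val ** 2) / mse)

    # Hypothetical example: a random 32x32 RGB, CIFAR-10-sized image and a
    # copy with a small additive perturbation, clipped to the valid range.
    rng = np.random.default_rng(0)
    clean = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
    noise = rng.integers(-3, 4, size=clean.shape)
    perturbed = np.clip(clean.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(clean, perturbed):.2f} dB")

In this setting, a higher PSNR between the original and the adversarial input indicates a less perceptible perturbation, which is why the metric is a natural proxy for attack visibility when comparing robustness mechanisms.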
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19042
Appears in Collections:Διπλωματικές Εργασίες - Theses

Files in This Item:
File: Adversarial_Robustness_Strategies_for_NN_in_HW.pdf
Size: 16.01 MB
Format: Adobe PDF


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.