Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19042
Full metadata record
DC Field | Value | Language
dc.contributor.author | Πατεράκης, Αλέξανδρος | -
dc.date.accessioned | 2024-04-02T14:40:07Z | -
dc.date.available | 2024-04-02T14:40:07Z | -
dc.date.issued | 2024-04-02 | -
dc.identifier.uri | http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19042 | -
dc.description.abstract | While Neural Networks have been researched for many years, they have experienced a significant surge in popularity only in the last decade. This surge has helped revolutionize sectors ranging from medicine and agriculture to automotive automation, while their capabilities in object detection, computer vision, natural language processing and decision-making have led to multiple breakthroughs with exceptional levels of accuracy and efficiency. Despite this rapid growth, however, the security and robustness of Neural Networks, which are key factors in their deployment, are often overlooked. Even as we become more accustomed to using Neural Networks in our everyday lives, their susceptibility to adversarial attacks raises concerns over their reliability and safety. Adversarial attacks such as HopSkipJump, which exploit vulnerabilities in models by introducing small, imperceptible perturbations into the input data, can be a major flaw in safety-critical applications such as autonomous vehicles. This thesis addresses this crucial issue by investigating strategies to enhance the robustness of image classification networks against adversarial attacks. Additionally, it considers the impact of hardware deployment on a Neural Network's performance, exploring how hardware resources, including the Versal Platform, can be leveraged to enhance the robustness of models in edge computing environments and emulate real-world deployment. Specifically, this work focuses on popular lightweight model architectures, namely ResNet20, ResNet56 and MobileNetV2. Using the CIFAR-10 and FashionMNIST datasets, a series of techniques, including adversarial retraining, preprocessing and quantization, is applied to these models to improve their robustness. Furthermore, the impact of these mechanisms is quantified using PSNR metrics, demonstrating significant improvements in robustness, with enhancements ranging from 37% to 49%, thus contributing to the development of more secure and reliable Neural Networks for practical applications. | en_US
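The abstract quantifies perturbation visibility with PSNR (Peak Signal-to-Noise Ratio). As a point of reference only, here is a minimal sketch of how PSNR is conventionally computed for 8-bit images; the function name and defaults are illustrative and not taken from the thesis itself:

```python
import numpy as np

def psnr(original, perturbed, max_value=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images.

    Higher values mean the perturbation is less visible;
    identical images yield infinity.
    """
    original = np.asarray(original, dtype=np.float64)
    perturbed = np.asarray(perturbed, dtype=np.float64)
    mse = np.mean((original - perturbed) ** 2)
    if mse == 0:
        return float("inf")  # no perturbation at all
    return 10.0 * np.log10((max_value ** 2) / mse)

# A uniform perturbation of 1 intensity level on an 8-bit image
# gives MSE = 1, so PSNR = 20*log10(255) ≈ 48.13 dB.
print(psnr(np.zeros((4, 4)), np.ones((4, 4))))
```

In adversarial-robustness studies, PSNR of this kind is typically reported between clean inputs and their adversarially perturbed counterparts.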
dc.language | en | en_US
dc.subject | Neural Networks | en_US
dc.subject | Image classification | en_US
dc.subject | Robustness | en_US
dc.subject | Hardware | en_US
dc.subject | Versal | en_US
dc.subject | Edge Computing | en_US
dc.subject | Preprocessing | en_US
dc.subject | Adversarial Attack | en_US
dc.title | Adversarial Robustness Strategies for Neural Networks in Hardware | en_US
dc.description.pages | 128 | en_US
dc.contributor.supervisor | Σούντρης Δημήτριος | en_US
dc.department | Τομέας Τεχνολογίας Πληροφορικής και Υπολογιστών | en_US
dc.description.notes | Application of methods for enhancing the robustness of neural networks in hardware applications, and examination of the consequences. | en_US
Appears in Collections: Διπλωματικές Εργασίες - Theses

Files in This Item:
File | Description | Size | Format
Adversarial_Robustness_Strategies_for_NN_in_HW.pdf | | 16.01 MB | Adobe PDF


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.