Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18880
Full metadata record
DC Field | Value | Language
dc.contributor.author | Γλεντής Γεωργουλάκης, Αθανάσιος | -
dc.date.accessioned | 2023-11-02T09:56:39Z | -
dc.date.available | 2023-11-02T09:56:39Z | -
dc.date.issued | 2023-10-20 | -
dc.identifier.uri | http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18880 | -
dc.description.abstract | In recent years, Deep Neural Networks (DNNs) have significantly advanced the state of the art in numerous machine learning tasks. Unfortunately, most compact devices that rely on embedded computing systems with limited resources cannot support the deployment of such powerful DNNs. This has driven considerable research toward creating compact and efficient versions of these models. A prominent method in the model compression literature is neural network pruning, which removes unimportant network elements with the goal of obtaining compressed yet highly effective models. In this thesis, we focus on removing individual weights based on their magnitudes, embedding the sparsification process in the standard course of training (sparse training) and thereby avoiding multi-cycle training and fine-tuning procedures. In the first part of the thesis we propose a pruning solution that tackles the problem of sparsity allocation over the different layers of the DNN. By modeling the per-layer weight distributions in a novel way as Gaussian or Laplace, the method learns the pruning thresholds through the optimization process, resulting in an effective non-uniform sparsity allocation for a requested overall sparsity target. In the second part of this work, recognizing that the Straight-Through Estimator is a crucial component of the aforementioned method and of sparse training in general, we devote our efforts to improving its effectiveness. This leads to the introduction of Feather, a novel sparse training module that uses the powerful Straight-Through Estimator at its core, coupled with a new thresholding operator and a gradient scaling technique that enable robust state-of-the-art sparsification performance. More specifically, the thresholding operator balances the commonly used hard and soft operators, combining their advantages, while gradient scaling controls the sparsity pattern variations, leading to a more stable training procedure. Both proposed methods are tested on the CIFAR and ImageNet datasets for image classification using various architectures, achieving state-of-the-art performance. In particular, Feather reaches Top-1 validation accuracies on ImageNet with the ResNet-50 architecture that surpass those of existing methods, including more complex and computationally demanding ones, by a considerable margin. | en_US
dc.language | en | en_US
dc.subject | DNN compression | en_US
dc.subject | sparsification | en_US
dc.subject | weight pruning | en_US
dc.subject | sparse training | en_US
dc.subject | efficient vision | en_US
dc.title | Effective Methods for Deep Neural Network Sparsification | en_US
dc.description.pages | 101 | en_US
dc.contributor.supervisor | Μαραγκός Πέτρος | en_US
dc.department | Τομέας Σημάτων, Ελέγχου και Ρομποτικής | en_US
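Note: the following is a minimal, illustrative sketch of the sparse-training ideas summarized in the abstract above: magnitude thresholding applied in the forward pass, a thresholding operator that interpolates between the hard and soft variants, and a Straight-Through Estimator that lets gradients reach pruned weights. It is not the thesis implementation; the class name STEPrune, the interpolation parameter p, and the quantile-based threshold selection are assumptions made for the example.

    import torch

    class STEPrune(torch.autograd.Function):
        # Forward: magnitude thresholding with an operator that interpolates
        # between hard (p = 0, surviving weights kept as-is) and soft
        # (p = 1, surviving weights shrunk by the threshold) thresholding.
        # Backward: straight-through, so gradients also reach pruned weights.
        @staticmethod
        def forward(ctx, w, threshold, p):
            mask = (w.abs() > threshold).to(w.dtype)
            return mask * (w - p * threshold * torch.sign(w))

        @staticmethod
        def backward(ctx, grad_output):
            # Identity gradient for w; no gradient for threshold or p.
            return grad_output, None, None

    def magnitude_threshold(w, sparsity):
        # Threshold that zeroes out roughly the requested fraction of weights.
        return torch.quantile(w.abs().flatten(), sparsity)

    # Toy usage: prune a random weight tensor to ~90% sparsity.
    w = torch.randn(1000, requires_grad=True)
    t = magnitude_threshold(w.detach(), 0.9)
    w_sparse = STEPrune.apply(w, t, 0.5)      # p = 0.5: halfway between hard and soft
    w_sparse.sum().backward()
    print((w_sparse == 0).float().mean())     # ~0.9
    print(w.grad.abs().mean())                # 1.0: gradients flow to every weight

In an actual sparse-training loop the threshold would be chosen per layer and updated throughout training toward the overall sparsity target; here a single tensor is pruned once only to show the forward and backward behavior.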
Appears in Collections: Διπλωματικές Εργασίες - Theses

Files in This Item:
File | Description | Size | Format
Glentis_Georgoulakis_thesis.pdf |  | 6.3 MB | Adobe PDF


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.