Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18880
Title: Effective Methods for Deep Neural Network Sparsification
Authors: Γλεντής Γεωργουλάκης, Αθανάσιος
Μαραγκός Πέτρος
Keywords: DNN compression
sparsification
weight pruning
sparse training
efficient vision
Issue Date: 20-Oct-2023
Abstract: In recent years, Deep Neural Networks (DNNs) have significantly advanced the state-of-the-art in numerous machine learning tasks. Unfortunately, most compact devices that rely on embedded computing systems with limited resources cannot support the deployment of such powerful DNNs. This has driven considerable research effort towards creating compact and efficient versions of these models. A prominent method in the model compression literature is neural network pruning, which removes unimportant network elements with the goal of obtaining compressed, yet highly effective models. In this thesis, we focus on removing individual weights based on their magnitudes, performing the sparsification during the standard course of training (sparse training) and therefore avoiding multi-cycle training and fine-tuning procedures. In the first part of the thesis we propose a pruning solution that tackles the problem of sparsity allocation over the different layers of the DNN. Modeling the per-layer weight distributions in a novel way as Gaussian or Laplace enables the method to learn the pruning thresholds through the optimization process, resulting in an effective non-uniform sparsity allocation for a requested overall sparsity target. In the second part of this work, recognizing that the Straight-Through Estimator is a crucial component of the aforementioned method and of sparse training in general, we devote our efforts to improving its effectiveness. This leads to the introduction of Feather, a novel sparse training module that uses the powerful Straight-Through Estimator at its core, coupled with a new thresholding operator and a gradient scaling technique that enable robust, state-of-the-art sparsification performance. More specifically, the thresholding operator balances the commonly used hard and soft operators, combining their advantages, while gradient scaling controls variations in the sparsity pattern, leading to a more stable training procedure. Both proposed methods are tested on the CIFAR and ImageNet datasets for image classification using various architectures, achieving state-of-the-art performance. In particular, Feather with the ResNet-50 architecture achieves Top-1 validation accuracies on ImageNet that surpass those of existing methods, including more complex and computationally demanding ones, by a considerable margin.
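To illustrate the general idea of magnitude-based sparse training with a Straight-Through Estimator (STE), the following is a minimal PyTorch sketch, not the thesis's actual implementation: the fixed pruning threshold and layer names here are hypothetical, whereas the thesis learns per-layer thresholds (via Gaussian/Laplace modeling of the weights) and adds Feather's balanced thresholding operator and gradient scaling on top of the plain STE shown below.

import torch
import torch.nn.functional as F

class MagnitudePruneSTE(torch.autograd.Function):
    """Hard-threshold weights by magnitude in the forward pass; pass the
    gradient straight through to the dense weights in the backward pass."""

    @staticmethod
    def forward(ctx, weight, threshold):
        mask = (weight.abs() > threshold).to(weight.dtype)
        return weight * mask

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: ignore the non-differentiable
        # thresholding and propagate the gradient to all dense weights.
        return grad_output, None

class SparseLinear(torch.nn.Module):
    """Linear layer whose effective weights are pruned on the fly, so the
    sparsification happens during the standard course of training."""

    def __init__(self, in_features, out_features, threshold=1e-2):
        super().__init__()
        self.weight = torch.nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.bias = torch.nn.Parameter(torch.zeros(out_features))
        self.threshold = threshold  # illustrative fixed value; learned per layer in the thesis

    def forward(self, x):
        w = MagnitudePruneSTE.apply(self.weight, self.threshold)
        return F.linear(x, w, self.bias)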
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18880
Appears in Collections:Διπλωματικές Εργασίες - Theses

Files in This Item:
File                              Size    Format
Glentis_Georgoulakis_thesis.pdf   6.3 MB  Adobe PDF


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.