Title: Collaborative Filtering Based DNN Partitioning and Offloading on Heterogeneous Edge Computing Systems
Authors: Κακολύρης, Ανδρέας Κοσμάς
Σούντρης, Δημήτριος
Keywords: Cloud
Edge Computing
Resource Management
Neural Networks
Collaborative Filtering
Issue Date: 31-Oct-2022
Abstract: Deep Neural Networks (DNNs) are an increasingly important part of many contemporary applications that reside at the edge of the network. While DNNs are particularly effective at their respective tasks, they can be computationally intensive, often prohibitively so, once the resource and energy constraints of the edge computing environment are taken into account. To overcome these obstacles, partitioning a DNN and offloading part of its computation to more powerful servers has often been proposed as a solution. While previous approaches have suggested resource management schemes to address this issue, they usually overlook the high dynamicity of such environments, both with respect to the variability of the DNN models and to the heterogeneous nature of the underlying hardware. In this thesis, we present a framework for DNN partitioning and offloading on edge computing systems. The framework uses a Collaborative Filtering mechanism, based on knowledge gathered during prior profiling, to make quick and accurate estimates of the performance (latency) and energy consumption of neural network layers across a diverse set of heterogeneous edge devices. By aggregating this information and applying an intelligent partitioning algorithm, the framework generates a set of Pareto-optimal neural network splits that trade off latency against energy consumption. We evaluate the framework on a variety of prominent DNN architectures and show that it outperforms current state-of-the-art methodologies, achieving a 9.58× speedup on average and up to 88.73% lower energy consumption, while offering high estimation accuracy, limiting the prediction error to 3.19% for latency and 0.18% for energy, and operating in a lightweight, dynamic manner.
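The partitioning step described in the abstract can be sketched as follows: for a chain-structured DNN, enumerate every split point (layers before the split run on-device, the rest on the server), estimate end-to-end latency and energy for each, and keep only the Pareto-optimal splits. This is a minimal illustration, not the thesis's implementation: the layer profiles, bandwidth, input size, and radio-power numbers below are invented placeholders, and the Collaborative Filtering estimator from the thesis is replaced by a hard-coded profile table.

```python
# Hedged sketch of Pareto-optimal DNN split selection.
# Each profile row is (device_latency_ms, device_energy, server_latency_ms,
# output_size_kB); all values are illustrative placeholders, not measurements.
LAYER_PROFILES = [
    (2.0, 20.0, 0.5, 800.0),
    (3.0, 30.0, 1.0, 400.0),
    (2.5, 25.0, 0.8, 200.0),
    (2.0, 20.0, 0.7, 50.0),
]
INPUT_KB = 600.0       # assumed size of the raw model input
BW_KB_PER_MS = 40.0    # assumed uplink bandwidth
TX_POWER = 1.0         # assumed radio power, in energy units per ms of transmission


def evaluate_split(k, profiles):
    """Latency/energy when layers [0, k) run on-device and [k, n) on the server."""
    dev_lat = sum(p[0] for p in profiles[:k])
    dev_en = sum(p[1] for p in profiles[:k])
    srv_lat = sum(p[2] for p in profiles[k:])
    if k == len(profiles):      # fully local: nothing is transmitted
        tx_kb = 0.0
    elif k == 0:                # full offload: ship the raw input
        tx_kb = INPUT_KB
    else:                       # partial offload: ship the k-th intermediate tensor
        tx_kb = profiles[k - 1][3]
    tx_lat = tx_kb / BW_KB_PER_MS
    return dev_lat + tx_lat + srv_lat, dev_en + TX_POWER * tx_lat


def pareto_splits(profiles):
    """Return (k, latency, energy) for every split point not dominated by another."""
    points = [evaluate_split(k, profiles) for k in range(len(profiles) + 1)]
    front = []
    for k, (lat, en) in enumerate(points):
        dominated = any(
            l2 <= lat and e2 <= en and (l2 < lat or e2 < en)
            for j, (l2, e2) in enumerate(points) if j != k
        )
        if not dominated:
            front.append((k, lat, en))
    return front
```

Each entry in the returned front is a candidate splitting; a runtime scheduler would then pick one according to its current latency/energy priorities, which is the trade-off the abstract refers to.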
Appears in Collections: Διπλωματικές Εργασίες (Diploma Theses)

Files in This Item:
File: Andreas_K_Diploma_Thesis.pdf
Size: 3.8 MB
Format: Adobe PDF

Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.