Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18531
Title: Collaborative Filtering Based DNN Partitioning and Offloading on Heterogeneous Edge Computing Systems
Authors: Κακολύρης, Ανδρέας Κοσμάς
Σούντρης, Δημήτριος
Keywords: Cloud
Edge Computing
Resource Management
Neural Networks
Offloading
Collaborative Filtering
Partitioning
Issue Date: 31-Oct-2022
Abstract: Deep Neural Networks (DNNs) are an increasingly important part of many contemporary applications that reside at the edge of the network. While DNNs are particularly effective at their respective tasks, they can be computationally intensive, often prohibitively so, once the resource and energy constraints of the edge computing environment are taken into account. To overcome these obstacles, partitioning the DNN and offloading part of its computations to more powerful servers is often proposed as a solution. While previous approaches have suggested resource management schemes to address this issue, they usually overlook the high dynamicity present in such environments, both with respect to the variability of the DNN models and to the heterogeneity of the underlying hardware. In this thesis, we present a framework for DNN partitioning and offloading on edge computing systems. The framework uses a Collaborative Filtering mechanism, built on knowledge gathered during prior profiling, to produce quick and accurate estimates of the latency and energy consumption of individual Neural Network layers across a diverse set of heterogeneous edge devices. By aggregating this information and applying an intelligent partitioning algorithm, the framework generates a set of Pareto-optimal Neural Network splittings that trade off latency against energy consumption. We evaluate the framework on a variety of prominent DNN architectures and show that it outperforms current state-of-the-art methodologies, achieving a 9.58× speedup on average and up to 88.73% lower energy consumption, while offering high estimation accuracy, with prediction error limited to 3.19% for latency and 0.18% for energy, and remaining lightweight and dynamic in operation.
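The abstract names two technical ingredients: a collaborative-filtering predictor that fills in unprofiled per-layer latency/energy values across heterogeneous devices, and a partitioning step that keeps only the Pareto-optimal latency/energy split points. The sketch below is a minimal illustration of these two ideas, not the thesis implementation; the matrix-factorization predictor, the two-device split model, the fixed transfer costs, and all numbers and names (predict_missing, pareto_front) are assumptions made for the example.

```python
import numpy as np


def predict_missing(M, rank=2, iters=2000, lr=0.02, reg=0.1, seed=0):
    """Fill NaN entries of a device-by-layer cost matrix via factorization M ~ U @ V.T."""
    rng = np.random.default_rng(seed)
    n_dev, n_lay = M.shape
    U = rng.normal(scale=0.1, size=(n_dev, rank))
    V = rng.normal(scale=0.1, size=(n_lay, rank))
    known = ~np.isnan(M)
    for _ in range(iters):
        err = np.where(known, U @ V.T - M, 0.0)    # error only on profiled entries
        U -= lr * (err @ V + reg * U)              # gradient step on device factors
        V -= lr * (err.T @ U + reg * V)            # gradient step on layer factors
    return np.where(known, M, U @ V.T)             # keep measurements, predict the rest


def pareto_front(points):
    """Return indices of points not dominated in both objectives (lower is better)."""
    idx = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            idx.append(i)
    return idx


# Hypothetical per-layer profiles: rows = devices (0 = edge, 1 = server),
# columns = layers; NaN marks combinations that were never profiled.
lat = np.array([[2.0, 5.0, np.nan, 4.0],
                [1.0, np.nan, 6.0, 2.0]])   # latency, ms
eng = np.array([[0.8, 2.0, 4.5, np.nan],
                [np.nan, 1.0, 3.0, 1.0]])   # energy, J
lat_full, eng_full = predict_missing(lat), predict_missing(eng)

# Evaluate every split point of a sequential DNN: layers [0, k) run on the
# edge device, layers [k, n) on the server; a fixed cost stands in for the
# network transfer whenever anything is offloaded.
TRANSFER_LAT, TRANSFER_ENG = 3.0, 0.5
n = lat.shape[1]
candidates = []
for k in range(n + 1):
    total_lat = lat_full[0, :k].sum() + lat_full[1, k:].sum() + (TRANSFER_LAT if k < n else 0.0)
    total_eng = eng_full[0, :k].sum() + eng_full[1, k:].sum() + (TRANSFER_ENG if k < n else 0.0)
    candidates.append((total_lat, total_eng))

for k in pareto_front(candidates):
    print(f"split before layer {k}: latency={candidates[k][0]:.2f} ms, energy={candidates[k][1]:.2f} J")
```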
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18531
Appears in Collections: Διπλωματικές Εργασίες - Theses

Files in this item:
File                          Description  Size    Format
Andreas_K_Diploma_Thesis.pdf               3.8 MB  Adobe PDF

