Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19043
Title: Extending RISC-V ISA for Fine-Grained Mixed-Precision in Neural Networks
Authors: Μάρας, Αλέξιος
Σούντρης, Δημήτριος
Keywords: RISC-V
Neural Networks
Mixed Precision Quantization
Hardware-Software Codesign
Hardware Accelerator
FPGA
Issue date: 1-Apr-2024
Abstract: The growing interest in deploying machine learning (ML) applications on devices with restricted processing power and energy capacity underscores the need for computing solutions that not only excel in power and memory efficiency but also ensure low latency for time-sensitive applications. The RISC-V architecture, with its open-source instruction set and customizable extensions, offers a promising pathway for optimizing these algorithms by enabling more tailored and energy-efficient processing capabilities. Furthermore, recent advancements in quantization and mixed-precision techniques show significant promise for reducing the run-time and energy consumption of neural networks (NNs) without significantly compromising their accuracy. In this work, we leverage these advancements to accelerate the inference of Deep Neural Networks (DNNs) on RISC-V processors. To push performance further, we extend the supported instruction set and incorporate a new functional unit into the processor's pipeline, specifically designed to execute these new instructions. For rapid prototyping and design exploration, we implement the processor on a Xilinx Virtex-7 FPGA board, which allows us to assess the efficacy of our methodology across diverse neural network architectures and datasets. With a modest overhead of 34.89% in Lookup Table (LUT) usage and 24.28% in Flip-Flops (FFs), our framework accelerates execution by 13-23x for classic Multi-Layer Perceptron architectures, 18-28x for typical Convolutional Networks, and 6-7x for more complex networks such as MobileNets, with a minimal accuracy reduction of 1-5%, demonstrating a significant improvement over the original processor.
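The abstract does not describe the implementation itself. As a rough illustration of the kind of kernel that mixed-precision quantization targets, the sketch below computes a plain-C dot product over 4-bit signed weights packed two per byte with 8-bit signed activations and 32-bit accumulation. The packing scheme, bit widths, and function names are assumptions made for illustration only; they are not the thesis's actual custom RISC-V instructions, data layout, or functional unit.

/*
 * Illustrative sketch only: a software dot product over 4-bit signed
 * weights (packed two per byte) and 8-bit signed activations, with
 * 32-bit accumulation. The packing scheme and names are hypothetical
 * and do not reflect the thesis's actual ISA extension.
 */
#include <stdint.h>
#include <stdio.h>

/* Sign-extend a 4-bit value stored in the low nibble of 'nibble'. */
static int8_t sext4(uint8_t nibble) {
    return (int8_t)((nibble & 0x8) ? (nibble | 0xF0) : nibble);
}

/* Dot product of n 8-bit activations with n 4-bit weights packed two per byte. */
int32_t dot_a8w4(const int8_t *act, const uint8_t *packed_w, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++) {
        uint8_t byte = packed_w[i / 2];
        uint8_t nib  = (i & 1) ? (uint8_t)(byte >> 4) : (uint8_t)(byte & 0x0F);
        acc += (int32_t)act[i] * (int32_t)sext4(nib);
    }
    return acc;
}

int main(void) {
    /* Activations: 1, 2, 3, 4; weights: -1, 2, -3, 4 packed two per byte. */
    int8_t  act[4]     = {1, 2, 3, 4};
    uint8_t weights[2] = {(uint8_t)((2 << 4) | (0xF & -1)),
                          (uint8_t)((4 << 4) | (0xF & -3))};
    printf("dot = %d\n", dot_a8w4(act, weights, 4)); /* -1 + 4 - 9 + 16 = 10 */
    return 0;
}

A custom functional unit in the processor pipeline would typically replace the inner loop above with a single packed multiply-accumulate instruction, which is the class of speedup the reported 13-28x figures refer to.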
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19043
Appears in collections: Διπλωματικές Εργασίες - Theses

Files in this item:
File                      Description    Size      Format
Alexis_Maras_Thesis.pdf                  2.3 MB    Adobe PDF

