Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19600
Full metadata record
DC Field | Value | Language
dc.contributor.author | Πανόπουλος, Ιωάννης | -
dc.date.accessioned | 2025-05-16T16:53:40Z | -
dc.date.available | 2025-05-16T16:53:40Z | -
dc.date.issued | 2025-04-29 | -
dc.identifier.uri | http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19600 | -
dc.description.abstract | Deep learning has fundamentally transformed the field of artificial intelligence, enabling significant advances in areas such as natural language processing, computer vision, and autonomous decision-making. However, the ever-increasing complexity of modern models entails substantial computational demands, rendering powerful cloud infrastructures essential. This dependence on centralized computing introduces limitations in terms of latency, privacy, and availability, posing a challenge for the deployment of AI applications on mobile and embedded systems. This dissertation investigates the intersection of deep learning and efficiency in resource-constrained environments, with the aim of establishing a holistic framework for the efficient development and execution of AI systems at the network edge. The research focuses on three key studies: (a) the development of CARIn, an adaptive inference framework designed to execute multiple neural networks on heterogeneous mobile devices using multi-objective optimization techniques; (b) the thorough evaluation and adaptation of Transformer models for mobile environments through low-cost architectural and hardware-aware optimizations; and (c) the design of A-THENA, an efficient intrusion detection system for IoT networks, based on Transformers with time-aware positional encodings. Through these contributions, this dissertation formulates a comprehensive approach to efficient deep learning in mobile and embedded computing environments. By exploring the interplay between model optimization, hardware adaptation, and real-world application requirements, this work bridges the gap between cutting-edge AI research and its practical deployment. The findings emphasize that efficiency is not merely an optimization objective but a foundational enabler for the future of AI, ensuring that deep learning technologies can operate seamlessly, sustainably, and intelligently across a wide range of computing platforms. | en_US
dc.language | en | en_US
dc.subject | Deep learning | en_US
dc.subject | On-device inference | en_US
dc.subject | Mobile computing | en_US
dc.subject | Embedded computing | en_US
dc.subject | Efficient AI | en_US
dc.subject | Heterogeneity | en_US
dc.subject | Optimization | en_US
dc.subject | Transformer models | en_US
dc.subject | Edge deployment | en_US
dc.subject | Intrusion detection | en_US
dc.subject | Internet of Things | en_US
dc.title | Efficient Deep Learning in Mobile and Embedded Computing Environments | en_US
dc.description.pages | 176 | en_US
dc.contributor.supervisor | Βενιέρης Ιάκωβος | en_US
dc.department | Τομέας Συστημάτων Μετάδοσης Πληροφορίας και Τεχνολογίας Υλικών | en_US
Appears in Collections: Διδακτορικές Διατριβές - Ph.D. Theses

Files in This Item:
File | Description | Size | Format
PhD_Dissertation_Ioannis_Panopoulos.pdf | | 6.57 MB | Adobe PDF


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.