Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18815
Title: Machine and Deep Learning Algorithms for Radio Resource Management in 5G and Beyond Networks
Authors: Μπαρτσιώκας, Ιωάννης, Δρ.
Κακλαμάνη Δήμητρα-Θεοδώρα
Keywords: 5G
B5G
Deep Learning
Machine Learning
Radio Resource Management
Relay Assisted Transmission
Reinforcement Learning
Q-Learning
Federated Learning
System Level Simulations
Mobile Edge Computing
Issue Date: 16-Oct-2023
Abstract: Fifth-generation (5G) and beyond-5G (B5G) wireless communication systems have been established to support the exponential growth of mobile data traffic and dense user connectivity, which require uninterrupted, location-independent access to the medium. The emerging need for new application types (Internet of Things (IoT) applications, augmented/virtual reality (AR/VR), unmanned aerial vehicles (UAVs), etc.) has given rise to new telecommunication service categories served by 5G/B5G networks. In this context, support for ultra-reliable low-latency communications (URLLC), enhanced mobile broadband (eMBB) and massive machine-type communications (mMTC) in mass-access environments is of utmost importance in 5G/B5G networks. Moreover, various novel physical-layer technologies have been introduced in recent years to cope with the increasing challenges in the wireless communications domain, such as massive multiple-input multiple-output (m-MIMO) configurations, millimeter-wave (mmWave) transmission, Relay Nodes (RNs) and non-orthogonal multiple access (NOMA). However, when these advanced physical-layer technologies are applied in a cellular environment characterized by high interference levels and complex channel approximations, the computational cost of supporting strict user requirements can grow substantially.

Machine learning (ML) algorithms are proposed as an efficient way to tackle these issues, owing to their ability to exploit data generated by the network itself to improve network performance and efficiency. ML algorithms are trained using data generated either by the wireless network under test or by similar networks. In this way, complex channel calculations are encapsulated in the layers of the ML models, reducing computational cost and complexity after multiple successful training rounds. Moreover, some ML algorithms (e.g., Reinforcement Learning (RL) algorithms) can interact with the network directly in real time and thus support the low-latency requirements of modern networks.

In the present thesis, ML and Deep Learning (DL) methods are developed for efficient RRM in 5G/B5G wireless communication networks. More specifically, ML/DL algorithms are examined for various RRM subproblems, such as subcarrier allocation to active users (User Equipments, UEs), base station (BS) or RN placement and selection for users entering the cellular topology, and prediction of network key performance indicators (KPIs), such as throughput. The increased demands of UEs for uninterrupted QoS, ultra-low latency and high density of connected devices necessitate the use of ML/DL techniques for these RRM problems. Therefore, in addition to classical supervised and unsupervised learning techniques, this thesis explores Deep Reinforcement Learning (DRL) techniques, primarily Deep Q-Learning algorithms. Additionally, distributed ML techniques, such as Federated Learning (FL), are proposed for the aforementioned RRM subproblems, combining the benefits of ML and Mobile Edge Computing (MEC).

In the context of this thesis, a state-of-the-art analysis of ML-based RRM in 5G/B5G networks is first performed, and the corresponding research works are categorized by both the RRM subproblem addressed and the ML technique employed. Then, the RRM problem in 5G/B5G networks is formulated and the significance of KPI prediction for RRM tasks is highlighted, while several ML/DL algorithms are developed and evaluated for throughput prediction in 5G/B5G networks.
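As an illustration of the KPI-prediction task described above, the following minimal sketch trains a small feedforward regressor to predict per-user throughput from radio measurements. It is not the thesis implementation: the feature set, the synthetic data and the Shannon-like target are assumptions made purely for illustration.

    # Minimal sketch of supervised throughput (KPI) prediction.
    # Features (SINR, allocated PRBs, UE-BS distance) and the synthetic
    # target below are illustrative assumptions, not the thesis dataset.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_users = 2000
    X = np.column_stack([
        rng.uniform(-5, 30, n_users),   # SINR in dB
        rng.integers(1, 51, n_users),   # allocated physical resource blocks
        rng.uniform(10, 500, n_users),  # UE-BS distance in metres
    ])
    # Toy Shannon-like target: throughput ~ PRBs * log2(1 + linear SINR)
    y = X[:, 1] * np.log2(1.0 + 10.0 ** (X[:, 0] / 10.0))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    model.fit(X_tr, y_tr)
    print("R^2 on held-out users:", model.score(X_te, y_te))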
An additional key problem in 5G/B5G deployments, where RNs are used to extend each cell's coverage area and increase network capacity, is the optimal RN placement and selection for each user entering the cellular topology. After both problems (RN placement and RN selection) are formulated, ML/DL frameworks for solving them are studied. For the RN placement problem, two different DL approaches are developed and evaluated on datasets created by a MATLAB link- and system-level simulator of an RN-assisted 5G/B5G network. These ML algorithms are deployed not only in a centralized manner; an FL framework is also proposed, since the coexistence of several interconnected devices in 5G/B5G networks makes it possible to split the computational load among them and thus utilize network resources efficiently. As far as the RN selection problem is concerned, a novel Deep Q-Learning scheme is proposed, based on the joint maximization of Energy Efficiency (EE) and Spectral Efficiency (SE) for each user of the cellular topology. In addition, a dedicated mechanism is implemented for maximizing the total system EE and SE. Finally, all proposed solutions are thoroughly evaluated and tested via extensive simulations, with comparisons made both among them and against other state-of-the-art approaches. In each case, significant performance gains are identified, leading to increased system EE and SE levels and improved spectrum utilization, while the advantages of the proposed frameworks are also reflected in reduced computational cost.
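To make the RN-selection idea concrete, the sketch below shows Q-learning with a joint EE/SE reward, where each action is the choice of a serving RN. It is a simplified tabular stand-in for the Deep Q-Learning scheme described above: the state discretisation, reward weights and the random placeholder "simulator" are illustrative assumptions only.

    # Tabular Q-learning sketch for RN selection with a joint EE/SE reward.
    # The thesis proposes a deep (neural) Q-network; everything below is a
    # simplified, illustrative stand-in.
    import numpy as np

    rng = np.random.default_rng(1)
    n_states, n_rns = 10, 4            # quantised UE situations x candidate RNs
    Q = np.zeros((n_states, n_rns))
    alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration
    w_ee, w_se = 0.5, 0.5              # assumed weights of the joint objective

    def joint_reward(state, rn):
        # Placeholder for the network simulator: returns w_ee*EE + w_se*SE,
        # with EE (bits/Joule) and SE (bits/s/Hz) normalised to [0, 1].
        ee, se = rng.uniform(0, 1), rng.uniform(0, 1)
        return w_ee * ee + w_se * se

    state = int(rng.integers(n_states))
    for _ in range(10_000):
        # Epsilon-greedy choice of a serving RN for the current UE state
        rn = int(rng.integers(n_rns)) if rng.random() < eps else int(Q[state].argmax())
        r = joint_reward(state, rn)
        next_state = int(rng.integers(n_states))  # placeholder transition
        # Standard Q-learning temporal-difference update
        Q[state, rn] += alpha * (r + gamma * Q[next_state].max() - Q[state, rn])
        state = next_state

    print("Learned RN preference per state:", Q.argmax(axis=1))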
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18815
Appears in Collections: Διδακτορικές Διατριβές - Ph.D. Theses

Files in This Item:
File: phd_thesis_bartsiokas_2023_ai_ml_algorithms_for_5G_RRM_final_October_2023.pdf (4.36 MB, Adobe PDF)

