|Title:||Optimization of Distributed Learning Methods|
|Abstract:||From medical imaging to predictive analytics, a growing number of applications are based on Artificial Intelligence/Machine Learning (AI/ML) to address complex problems. Although powerful, AI/ML techniques require enormous amounts of data in order to be trained. Over the last decade, increasing privacy concerns have led authorities to restrict the use and transfer of data generated by individuals. At the same time, the massive stream of data produced and transmitted by Internet of Things (IoT) devices can lead to network overload and increased demand for storage capacity and computational power. As the traditional approach of centralized machine learning (CL) struggles under these circumstances, the paradigm of Federated Learning (FL) emerges as an alternative. Unlike CL, where the data processing task (training) occurs in a centralized entity (e.g. a cloud server), in FL training is offloaded to the client devices (e.g. smartphones) and the central entity is only responsible for aggregating the produced local models. This approach tackles the above challenges but introduces new ones. One major challenge is the bias that the clients introduce, which leads to a suboptimal model compared to a centralized approach. The aim of this thesis is to investigate the problem of bias in FL environments, focusing on its impact on model performance. In addition, we present FedLoss, a novel bias mitigation algorithm, which is benchmarked in our dedicated Federated Learning simulation environment.|
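The aggregation step described in the abstract, where the central entity combines the local models, can be sketched with a standard FedAvg-style weighted average (McMahan et al.). This is a minimal illustration of the general FL aggregation idea only, not the FedLoss algorithm presented in the thesis; the function name and the toy client data below are hypothetical.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of client model parameters, weighted by
    each client's local dataset size (FedAvg-style aggregation)."""
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(num_layers)
    ]

# Hypothetical example: two clients, each with a one-layer model.
# Client 2 holds 3x more data, so its parameters dominate the average.
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
sizes = [1, 3]
global_model = fedavg_aggregate(clients, sizes)
# global_model[0] == array([2.5, 3.5])
```

When client datasets are non-identically distributed, this size-weighted average is exactly where client bias enters the global model, which is the problem the thesis investigates.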
|Appears in Collections:||Διπλωματικές Εργασίες - Theses|
Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.