Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19740
Title: On-Device Federated Learning for Human Activity Recognition
Authors: Ματσουκας, Δημητριος
Τσανάκας Παναγιώτης
Keywords: Machine Learning
Federated Learning
Human Activity Recognition
On-device training
Android
Data Privacy
Issue Date: 20-Jun-2025
Abstract: Edge devices such as smartphones and wearables have become primary computing platforms, generating large volumes of sensitive, user-specific data. Machine Learning (ML) models can utilize this data for tasks in areas such as computer vision, natural language processing, and health monitoring. Traditionally, ML relies on centralized data collection, but this approach introduces serious privacy risks and is increasingly constrained by regulations such as GDPR and HIPAA. Federated Learning (FL) offers a promising alternative: it addresses privacy concerns through a decentralized training approach in which model training occurs directly on users' devices, eliminating the need to transmit sensitive data to a central server. However, most FL research relies on simulation-based studies using standardized datasets, often neglecting the real-world challenges posed by hardware limitations, energy constraints, and network instability. This thesis addresses that gap by implementing a real-world FL system for Human Activity Recognition (HAR), a privacy-sensitive task that leverages sensor data from mobile devices. HAR is selected for its practical relevance and its dependence on data commonly collected by personal devices. The system uses a Flower-based server coordinating training across five Android smartphones, with on-device training and evaluation conducted via TensorFlow Lite (TFLite), one of the few frameworks supporting local model updates on mobile hardware. Through experimental evaluation, the thesis quantifies how key FL challenges impact HAR along three critical axes: data heterogeneity, energy efficiency, and network reliability. Results show that extreme label imbalance can degrade model accuracy by over 55%. In contrast, reducing the amount of training data per client to just 10% lowers the model's performance by only 2%, indicating relatively low sensitivity to data-volume imbalance.
Energy experiments show that increasing local training on each device while reducing the number of communication rounds can cut energy consumption by over 84% without compromising accuracy. Finally, network experiments reveal that client dropouts and intermittent participation lead to up to 20% performance loss and increased training instability, underscoring the importance of robust aggregation strategies in real-world deployments.
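To make the server's role concrete: after each communication round, a Flower-style server aggregates the model updates returned by the participating phones, typically with Federated Averaging (FedAvg). The sketch below is illustrative only (it is not the thesis's code); the function name and the toy two-client data are assumptions, but the weighted-averaging logic is the standard FedAvg aggregation step, where clients holding more local training examples contribute proportionally more.

```python
# Illustrative FedAvg aggregation sketch (not the thesis implementation).
# Each client returns its updated model weights plus the number of local
# training examples; the server computes a size-weighted average per layer.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of per-client model weights.

    client_weights: one list of np.ndarray layer tensors per client
    client_sizes:   number of local training examples per client
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Two hypothetical clients, each with a single one-layer weight tensor.
# The second client trained on three times as much data, so it dominates.
a = [np.array([0.0, 2.0])]
b = [np.array([4.0, 6.0])]
print(fedavg([a, b], [1, 3]))  # → [array([3., 5.])]
```

The size weighting also hints at why label imbalance hurts: clients whose data over-represents a few activities pull the averaged model toward those classes in proportion to their dataset size.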
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19740
Appears in Collections:Διπλωματικές Εργασίες - Theses

Files in This Item:
File: thesis_.pdf
Size: 15.87 MB
Format: Adobe PDF


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.