|Title:||Community detection and resource assignment in interdependent systems via complex network analysis|
|Keywords:||Online social networks|
|Abstract:||The aim of this thesis is to develop novel approaches for inferring relations and hidden similarities among the actors of complex systems that consist of various types of devices and users, as well as approaches for assigning content to them. To accomplish this, the developed methods take into account the interdependency and the various relationships of the multiple types of actors found in these systems, relying on tools and techniques from the fields of complex networks and social network analysis. Such systems are commonly observed in current interconnected environments, such as Smart Cities, and are expected to become even more prevalent in the future. They combine the operation of large infrastructure with the actions and requirements of the people that have access to it. For the unobstructed operation of such topologies, network operators need to be able to monitor the generated data, detect possible redundancies, and discover similar regions. Moreover, people using such environments need fast access to data and need to learn about relevant applications that keep their perceived quality of experience high. These entities (people, devices, data measurements), being the actors of such complex systems, are related in multiple ways, forming multi-layer complex networks and highlighting the need for proper analysis tools. Aiming to provide a framework for achieving the aforementioned goals, this thesis focuses on the problems of community detection and resource allocation in interconnected and interdependent environments. Identifying the important problems in these areas and developing suitable solutions can help detect groups of similar devices operating within the interconnected environment and groups of similar users, and also distinguish the most influential users in terms of information diffusion.
In particular, to address the problems of clustering the data generated by the infrastructure and detecting communities of people in the Online Social Networks (OSNs) that exist within interdependent and complex systems, a novel community detection algorithm is developed. A new framework is presented for mapping the data clustering problem to a community detection one. The proposed algorithm discovers meaningful clusters of data, outperforming some traditional data clustering approaches in terms of accuracy, and also detects communities in OSNs with high modularity scores. Inspired by the well-known Girvan-Newman (GN) algorithm, it performs many operations faster by embedding the network in hyperbolic space and by introducing a new approximate network metric for estimating edge betweenness centrality. Combined with the removal of several edges per iteration instead of a single one, as in GN, and coupled with a graph database, this makes for a more scalable approach than GN on the large networks often observed in realistic complex systems. The evaluation on both synthetic and real data showcases the benefits of the proposed approach. The people that use the facilities of such interdependent systems interact with each other through OSNs. Focusing on these social relations and studying their interactions can reveal how information flows through the network. Monitoring information diffusion across the network is one of the most crucial aspects for estimating the possible outcome of seeding sets of users with units of information (recommendations). Considering that each user displays a relevance score towards each item available for recommendation, the problem of assigning recommendations to users is formulated as a relevance maximization one.
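The GN-inspired procedure described above can be sketched as follows. This is an illustrative simplification, not the thesis implementation: it computes exact Brandes edge betweenness, whereas the thesis estimates it approximately via hyperbolic network embedding and relies on a graph database for scale. What it does share with the thesis's variant is the removal of the k most central edges per iteration, keeping the partition with the best modularity.

```python
from collections import defaultdict, deque

def edge_betweenness(adj):
    """Brandes' algorithm for edge betweenness centrality on an
    undirected, unweighted graph given as {node: set_of_neighbors}."""
    eb = defaultdict(float)
    for s in adj:
        dist, order = {s: 0}, []
        sigma = defaultdict(float, {s: 1.0})
        preds = defaultdict(list)
        queue = deque([s])
        while queue:                        # BFS counting shortest paths
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = defaultdict(float)
        for w in reversed(order):           # back-propagate dependencies
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1.0 + delta[w])
                eb[frozenset((v, w))] += c
                delta[v] += c
    return {e: c / 2 for e, c in eb.items()}  # each path counted twice

def components(adj):
    """Connected components of {node: set_of_neighbors} via BFS."""
    seen, parts = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        seen.add(s)
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    comp.add(w)
                    queue.append(w)
        parts.append(comp)
    return parts

def modularity(adj, parts, m):
    """Newman modularity of a partition, measured on the original graph."""
    q = 0.0
    for part in parts:
        internal = sum(1 for v in part for w in adj[v] if w in part) / 2
        degree = sum(len(adj[v]) for v in part)
        q += internal / m - (degree / (2 * m)) ** 2
    return q

def gn_topk(edges, k=2):
    """GN variant: cut the k most central edges per round and return
    the partition with the best modularity score seen."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    orig = {v: set(nbrs) for v, nbrs in adj.items()}
    m = len(edges)
    best_q, best = -1.0, [set(adj)]
    while any(adj[v] for v in adj):
        eb = edge_betweenness(adj)
        for e in sorted(eb, key=eb.get, reverse=True)[:k]:
            u, v = tuple(e)
            adj[u].discard(v)
            adj[v].discard(u)
        parts = components(adj)
        q = modularity(orig, parts, m)
        if q > best_q:
            best_q, best = q, parts
    return best, best_q
```

Removing k edges per iteration trades a little partition quality for far fewer betweenness recomputations, which is the main cost of classic GN; the approximate metric in the thesis reduces the per-recomputation cost as well.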
Contrary to other works, in this approach the tolerance of a user to different levels of recommendations is, for the first time, considered a major factor in the design of the recommender system. Complex constraints on the number of duplicate and distinct recommendations are imposed per user. The maximization problem is proven to be computationally difficult, as it consists of an NP-hard problem with added constraints. To overcome this obstacle, the problem is divided into two sub-problems treated with greedy algorithms, whose combination produces high relevance scores while respecting all the imposed constraints. Aspiring to provide users with fast access to data that increases their quality of experience, various schemes for caching at the network edge, utilizing the limited memory space of the User Equipment (UE), are examined. Knowledge obtained from recommender systems about each user's preferences can be applied to predict future requests. To decide the optimal content to cache in each UE, the content placement problem is formulated as a cache hit maximization one. Algorithms that employ either the full set or only a portion of the users as caches, caching contents either proactively or reactively, are examined and compared in terms of the overall cache hit ratio. The increased cache hit ratio demonstrates the benefits of caching at the UEs instead of only at dedicated devices. These comparisons also highlight the need to take into account the request probabilities of other users, not just one's own, in order to design more accurate caching schemes. Finally, leveraging users' mobility, the impact that recommendations have on users' requests, and the ability of dedicated devices and selected users' UEs to cache and offload content, a joint caching and recommendation scheme is developed.
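The greedy, constraint-respecting flavor of the relevance maximization can be illustrated with a minimal sketch. The constraint shapes here are assumed simplifications of the thesis's formulation: a cap `max_total` on recommendation pushes per user and a cap `max_dup` on duplicate pushes of the same item; both names are illustrative.

```python
from collections import Counter

def greedy_assign(relevance, max_total, max_dup):
    """Greedily schedule recommendation 'pushes': each user receives at
    most max_total pushes overall and at most max_dup duplicate pushes
    of any single item. relevance: {(user, item): score}."""
    pushes = Counter()           # pushes already scheduled per user
    plan, total = [], 0.0
    # visit (user, item) pairs from most to least relevant
    for (u, i), r in sorted(relevance.items(), key=lambda kv: -kv[1]):
        copies = min(max_dup, max_total - pushes[u])
        if copies > 0:
            plan.extend([(u, i)] * copies)   # duplicates reinforce the item
            pushes[u] += copies
            total += copies * r
    return plan, total
```

A single greedy pass like this respects both caps by construction; the thesis instead splits the constrained problem into two sub-problems and combines their greedy solutions.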
Modeling the user's perceived Quality of Experience (QoE) as a function of the delay experienced when retrieving a requested item and the relevance of the recommended items to her preferences, the problem is formulated as a QoE maximization one. Since this problem is NP-hard, a heuristic method is developed and compared to an approximation algorithm, showcasing its benefits in balancing the achieved QoE score per user against the required execution time, and marking it as a computationally feasible approach able to yield results of high QoE. In the following, the proposed methods are presented, alongside a discussion of the main contributions of this thesis. Each Chapter then focuses on one of the aforementioned problems, presenting related work in the field and introducing the developed solutions together with indicative evaluations that justify the benefits of their adoption.|
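The delay-relevance trade-off behind the QoE objective can be made concrete with a toy model. The convex-combination form, the delay penalty `1/(1+delay)`, and the weight `alpha` are assumptions for illustration only, not the thesis's QoE function.

```python
def qoe(relevance, delay, alpha=0.5):
    """Toy QoE: rises with recommendation relevance, falls with
    retrieval delay; alpha weighs the two terms (assumed form)."""
    return alpha * relevance + (1 - alpha) * (1.0 / (1.0 + delay))

def best_option(options, alpha=0.5):
    """options: iterable of (label, relevance, delay) tuples;
    return the option with the highest toy-QoE score."""
    return max(options, key=lambda o: qoe(o[1], o[2], alpha))
```

Under such a model, a slightly less relevant item cached nearby (low delay) can beat a more relevant item served from a distant source, which is exactly the tension a joint caching-and-recommendation scheme must balance.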
|Appears in Collections:||Διδακτορικές Διατριβές - Ph.D. Theses|
Files in This Item:
|thesis_tsitseklis.pdf||3.06 MB||Adobe PDF||View/Open|
Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.