Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19630
Full metadata record
DC Field	Value	Language
dc.contributor.author	Stefanakis, Georgios	-
dc.date.accessioned	2025-06-30T12:13:04Z	-
dc.date.available	2025-06-30T12:13:04Z	-
dc.date.issued	2025-02-01	-
dc.identifier.uri	http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19630	-
dc.description.abstract	Nowadays, the diversity of workloads hosted on the Cloud is extensive, ranging from high-performance applications to microservice architectures, data analytics, and machine learning pipelines. As the Cloud paradigm expands into every modern field of computation, improving the performance and resource utilization of datacenter infrastructure becomes crucial, not only for the end user but also from a power- and cost-efficiency perspective. Containerization of applications is one such optimization strategy, offering many advantages over the preceding hypervisor-based virtualization: portability, reproducibility, lower performance overhead and memory requirements, and faster deployment and scaling. Kubernetes is an open-source container orchestrator for deploying, managing, and scaling containerized applications in production environments. While the performance improvement over traditional clusters is apparent, orchestrators generally do not rely on fine-grained resource information for scheduling and executing applications, and they often lack awareness of an application's internal characteristics. Kubernetes is aware only of simplistic metrics such as CPU and memory load, often leading to sub-optimal scheduling decisions and interference phenomena between co-located workloads. Cloud Service Providers are aware of this issue and are willing to compromise resource utilization to uphold the Quality of Service class requested by the customer. In this thesis, we experimentally identify the challenges described above in an attempt to design a more efficient resource management mechanism that integrates with Kubernetes.
We leverage Kubernetes' extension points for developers, as well as various system monitoring and benchmarking tools, to create sophisticated scheduling policies that use application profiling, in contrast to the baseline CPU- and memory-affinity policy of the default scheduler. We profile incoming applications by observing low-level system metrics, e.g. memory bandwidth, instructions per cycle, and L2 and L3 cache misses, and apply scheduling decisions with our custom scheduler. We then evaluate the effectiveness of our solution by comparing the slowdown of applications before and after deploying it. We conduct experiments using numerous benchmarks that introduce realistic stress scenarios on the system and demonstrate the impact of our solution in foreseeing resource contention and improving overall system performance.	en_US
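The profiling-driven placement described in the abstract can be illustrated with a small, hypothetical sketch (not the thesis implementation): candidate nodes are ranked by an interference score combining low-level metrics such as memory bandwidth utilization, instructions per cycle (IPC), and last-level-cache miss rate, and the incoming pod is placed on the least-contended node. The field names and weights below are illustrative assumptions only.

```python
# Hypothetical sketch of interference-aware node scoring; all weights and
# metric names are illustrative, not taken from the thesis.
from dataclasses import dataclass


@dataclass
class NodeMetrics:
    name: str
    mem_bw_util: float    # fraction of peak memory bandwidth in use (0..1)
    ipc: float            # average instructions per cycle of resident workloads
    llc_miss_rate: float  # normalized last-level-cache miss rate (0..1)


def contention_score(m: NodeMetrics) -> float:
    """Higher score means a more contended node."""
    # Low IPC under load usually signals pipeline stalls, so penalize its inverse.
    ipc_penalty = 1.0 / max(m.ipc, 0.1)
    return 0.5 * m.mem_bw_util + 0.3 * m.llc_miss_rate + 0.2 * ipc_penalty


def pick_node(nodes: list[NodeMetrics]) -> str:
    """Choose the least-contended node for an incoming application."""
    return min(nodes, key=contention_score).name
```

In a real deployment this logic would live behind one of Kubernetes' scheduler extension points (e.g. a score plugin of the scheduling framework), with the metrics fed in from hardware performance counters.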
dc.language	el	en_US
dc.subject	Cloud Computing	en_US
dc.subject	Containerization	en_US
dc.subject	Kubernetes	en_US
dc.subject	Resource Management	en_US
dc.subject	Performance Optimization	en_US
dc.subject	Interference Mitigation	en_US
dc.subject	Application Scheduling	en_US
dc.subject	Benchmarking	en_US
dc.title	Fine-Grained Container Orchestration and Scheduling on Kubernetes Clusters	en_US
dc.description.pages	94	en_US
dc.contributor.supervisor	Γκούμας Γεώργιος	en_US
dc.department	Τομέας Τεχνολογίας Πληροφορικής και Υπολογιστών	en_US
Appears in collections: Διπλωματικές Εργασίες - Theses

Files in this item:
File	Description	Size	Format
main.pdf		4.57 MB	Adobe PDF


All items in this repository are protected by copyright.