Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19023
Title: Cognitively Motivated Machine Learning for Dimensionality Reduction and Domain Adaptation of Speech and Language Models in Resource-Constrained Settings (Μέθοδοι Μηχανικής Μάθησης Βασισμένες στη Γνωσιακή Επιστήμη για Μείωση Διαστατικότητας και Προσαρμογή μεταξύ Πεδίων Μοντέλων Φωνής και Γλώσσας σε Περιβάλλοντα με Περιορισμένους Πόρους)
Authors: Παρασκευόπουλος, Γεώργιος
Ποταμιάνος, Αλέξανδρος
Keywords: Unsupervised Domain Adaptation
Dimensionality Reduction
Multi-dimensional Scaling
Self-Supervised Learning
Deep Learning
Text Sentiment Analysis
Speech Emotion Recognition
Speech Recognition
Issue Date: 2024
Abstract: In recent years, a dominant strategy has arisen in machine learning, namely scaling up model capacity and training data, with impressive results. However, the development of techniques for resource-limited settings can have a great economic, environmental, and research impact, especially for digitally under-represented communities. In this thesis, which is split into two major parts, we draw motivation from insights in the fields of cognitive science and neuroscience to design efficient and effective machine learning algorithms for data representation and model adaptation. First, we propose a novel algorithm for dimensionality reduction via multi-dimensional scaling based on the global geometry of the input data. The proposed algorithm, Pattern Search MDS, is based on derivative-free direct search and is able to capture the geometry of complex “pseudo”-metric spaces. Reduction of the algorithm to the General Pattern Search algorithmic family provides theoretical convergence guarantees, and an optimized implementation is provided to the research community. The performance and convergence of Pattern Search MDS are demonstrated on diverse tasks, namely manifold geometry, semantic similarity, and speech emotion recognition. In the second part, we shift our focus to the problem of unsupervised domain adaptation of speech and language models. To address the inherent stability-plasticity dilemma in this problem, we propose mixed self-supervision, a robust and effective fine-tuning strategy, where the task is learned using annotated out-of-domain data, while relevant in-domain knowledge from pretraining is maintained via self-supervision on unlabeled in-domain data. We evaluate mixed self-supervision for text sentiment analysis based on product reviews, and for the adaptation of speech recognition systems to new domains for Modern Greek. Particular emphasis is placed on the sample efficiency of the proposed fine-tuning strategy in our ablations, where we demonstrate that 500 in-domain reviews, or 3 hours of in-domain speech, are enough for successful adaptation.
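
To illustrate the derivative-free direct-search idea behind Pattern Search MDS, here is a minimal sketch: each point of the embedding is moved along the coordinate directions by a step size, improving moves (those that reduce the MDS stress) are kept, and the step shrinks when no move helps. This is only an assumed, simplified rendition of the general pattern-search scheme described in the abstract, not the thesis' optimized implementation.

```python
import numpy as np

def stress(X, D):
    """Raw stress: squared mismatch between embedding distances and target dissimilarities D."""
    dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return ((dists - D) ** 2).sum()

def pattern_search_mds(D, dim=2, step=1.0, min_step=1e-4, shrink=0.5, seed=0):
    """Derivative-free MDS sketch: trial moves of each point along +/- coordinate
    directions; keep moves that reduce the stress, shrink the step when none do."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    X = rng.normal(size=(n, dim))
    best = stress(X, D)
    while step > min_step:
        improved = False
        for i in range(n):
            for d in range(dim):
                for sign in (+1.0, -1.0):
                    X[i, d] += sign * step      # trial move
                    s = stress(X, D)
                    if s < best:                # accept improving move
                        best, improved = s, True
                    else:                       # revert the trial move
                        X[i, d] -= sign * step
        if not improved:
            step *= shrink                      # refine the search pattern
    return X

# Example: embed 5-D points into 2-D from their pairwise distance matrix.
pts = np.random.default_rng(1).normal(size=(30, 5))
D = np.sqrt(((pts[:, None] - pts[None]) ** 2).sum(-1))
X2 = pattern_search_mds(D, dim=2)
```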
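The mixed self-supervision strategy can likewise be sketched as a training step that sums a supervised task loss on labeled out-of-domain data with a self-supervised loss on unlabeled in-domain data. The `task_loss` and `ssl_loss` hooks below are hypothetical placeholders (e.g., a classification loss and a masked-prediction loss), not the thesis' actual API; the snippet only shows the combined objective under those assumptions.

```python
import torch

def mixed_self_supervision_step(model, task_batch, ssl_batch,
                                optimizer: torch.optim.Optimizer,
                                ssl_weight: float = 1.0) -> float:
    """One fine-tuning step: learn the task from labeled out-of-domain data
    while retaining in-domain knowledge via self-supervision on unlabeled
    in-domain data."""
    optimizer.zero_grad()
    # Hypothetical hooks standing in for the model's task head and
    # self-supervised head (e.g. cross-entropy and masked-LM losses).
    loss = model.task_loss(task_batch) + ssl_weight * model.ssl_loss(ssl_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```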
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19023
Appears in Collections: Doctoral Dissertations - Ph.D. Theses

Files in This Item:
File: dissertation.pdf
Description: Doctoral Dissertation of Georgios Paraskevopoulos
Size: 9.76 MB
Format: Adobe PDF


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.