Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19023
Full metadata record
DC Field | Value | Language
dc.contributor.author | Παρασκευόπουλος, Γεώργιος | -
dc.date.accessioned | 2024-03-26T09:21:49Z | -
dc.date.available | 2024-03-26T09:21:49Z | -
dc.date.issued | 2024 | -
dc.identifier.uri | http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/19023 | -
dc.description.abstract | In recent years, a dominant strategy has arisen in machine learning, namely scaling up model capacity and training data, with impressive results. However, the development of techniques for resource-limited settings can have a great economic, environmental, and research impact, especially for digitally under-represented communities. In this thesis, which is split into two major parts, we draw motivation from insights in the fields of cognitive science and neuroscience to design efficient and effective machine learning algorithms for data representation and model adaptation. First, we propose a novel algorithm for dimensionality reduction via multi-dimensional scaling based on the global geometry of the input data. The proposed algorithm, Pattern Search MDS, is based on derivative-free direct search and is able to capture the geometry of complex “pseudo”-metric spaces. Reduction of the algorithm to the General Pattern Search algorithmic family provides theoretical convergence guarantees, and an optimized implementation is provided to the research community. The performance and convergence of Pattern Search MDS are demonstrated on diverse tasks, namely manifold geometry, semantic similarity, and speech emotion recognition. In the second part, we shift our focus to the problem of Unsupervised Domain Adaptation of speech and language models. To address the inherent stability-plasticity dilemma in this problem, we propose mixed self-supervision, a robust and effective fine-tuning strategy in which the task is learned using annotated out-of-domain data, while relevant in-domain knowledge from pretraining is maintained via self-supervision on unlabeled in-domain data. We evaluate mixed self-supervision on text sentiment analysis of product reviews and on the adaptation of speech recognition systems to new domains for Modern Greek. Particular emphasis is placed on the sample efficiency of the proposed fine-tuning strategy in our ablations, where we demonstrate that 500 in-domain reviews, or 3 hours of in-domain speech, are enough for successful adaptation. | en_US
dc.language | en | en_US
dc.subject | Unsupervised Domain Adaptation | en_US
dc.subject | Dimensionality Reduction | en_US
dc.subject | Multi-dimensional Scaling | en_US
dc.subject | Self-Supervised Learning | en_US
dc.subject | Deep Learning | en_US
dc.subject | Text Sentiment Analysis | en_US
dc.subject | Speech Emotion Recognition | en_US
dc.subject | Speech Recognition | en_US
dc.subject | Μη επιβλεπόμενη Προσαρμογή Τομέα (Unsupervised Domain Adaptation) | en_US
dc.subject | Μείωση Διαστατικότητας (Dimensionality Reduction) | en_US
dc.subject | Πολυδιάστατη Κλιμάκωση (Multi-dimensional Scaling) | en_US
dc.subject | Αυτο-επιβλεπόμενη Μάθηση (Self-Supervised Learning) | en_US
dc.subject | Βαθιά Μάθηση (Deep Learning) | en_US
dc.subject | Κειμενική Ανάλυση Συναισθημάτων (Text Sentiment Analysis) | en_US
dc.subject | Αναγνώριση Συναισθημάτων Φωνής (Speech Emotion Recognition) | en_US
dc.subject | Αναγνώριση Ομιλίας (Speech Recognition) | en_US
dc.title | Μέθοδοι Μηχανικής Μάθησης Βασισμένες στη Γνωσιακή Επιστήμη για Μείωση Διαστατικότητας και Προσαρμογή μεταξύ Πεδίων Μοντέλων Φωνής και Γλώσσας σε Περιβάλλοντα με Περιορισμένους Πόρους (Cognitively Motivated Machine Learning for Dimensionality Reduction and Domain Adaptation of Speech and Language Models in Resource-Constrained Settings) | en_US
dc.description.pages | 187 | en_US
dc.contributor.supervisor | Ποταμιάνος Αλέξανδρος | en_US
dc.department | Τομέας Σημάτων, Ελέγχου και Ρομποτικής (Division of Signals, Control and Robotics) | en_US
Appears in Collections:Διδακτορικές Διατριβές - Ph.D. Theses

Files in This Item:
File | Description | Size | Format
dissertation.pdf | Doctoral Dissertation of Georgios Paraskevopoulos | 9.76 MB | Adobe PDF | View/Open


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.