Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18063
Title: Extending self-supervised natural language models through human brain
Authors: Loukas, Nikolaos
Potamianos, Alexandros
Keywords: Machine Learning
Deep Learning
Neural Networks
Brain Representations
Natural Language Processing
fMRI
Issue Date: 26-Jun-2021
Abstract: In this diploma thesis, we are concerned with tasks in the domains of Cognitive Science and Natural Language Processing (NLP). We investigate natural language representations in the human brain and compare them with traditional machine learning representations. We develop a pipeline for extracting neural representations from fMRI datasets using machine learning techniques, following guidelines from the literature. We also evaluate our work on downstream tasks and provide comparative tables.

We first use a well-known fMRI dataset to map traditional word embeddings to cognitive representations. We present a neural activation model, fitted with ridge regression directly from GloVe embeddings rather than through the intermediate semantic feature model proposed in the literature, which uses a set of words with available fMRI measurements to learn a mapping between word semantics and localized neural activations. We then compare several variations of this encoding model with traditional word embeddings on a similarity task and conclude that its overall performance is not affected by the choice of semantic or GloVe space.

Thereafter, we investigate how cognitive embeddings can affect a language model's representations. We incorporate cognitive embeddings into language models by adding them as queries in the attention layer, in order to induce the cognitive bias of these embeddings into the training process. After finding that the models' ability to predict brain recordings improves, we test their performance on NLP tasks. Our results indicate that even the complex BERT architecture is negatively affected by the noisy neural representations. Although promising, our experimental setup cannot fully exploit the potential of cognitive embeddings.
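The encoding model described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis code: the word embeddings, voxel activations, and all dimensions are synthetic stand-ins (real inputs would be GloVe vectors and fMRI measurements from the dataset), and the regularization strength is an arbitrary assumption.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 200 words with 50-dim embeddings and
# 100 voxel activations per word, generated from a random linear map
# plus a little noise so the mapping is learnable.
n_words, embed_dim, n_voxels = 200, 50, 100
embeddings = rng.standard_normal((n_words, embed_dim))
true_map = 0.1 * rng.standard_normal((embed_dim, n_voxels))
voxels = embeddings @ true_map + 0.01 * rng.standard_normal((n_words, n_voxels))

# Hold out some words to evaluate how well the model predicts
# activations for words it has not seen.
X_train, X_test, y_train, y_test = train_test_split(
    embeddings, voxels, test_size=0.2, random_state=0
)

# Encoding model: ridge regression directly from embedding space
# to voxel space (one linear map for all voxels jointly).
model = Ridge(alpha=1.0)
model.fit(X_train, y_train)

pred = model.predict(X_test)          # predicted voxel activations
score = model.score(X_test, y_test)   # R^2 on held-out words
```

On this synthetic data the held-out R^2 is close to 1; with real fMRI recordings the fit is far noisier, which is why the abstract evaluates the learned representations on downstream similarity tasks rather than on reconstruction alone.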
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18063
Appears in Collections: Diploma Theses - Theses

Files in this item:
File                                  Description   Size      Format
Thesis_ECE_NTUA_Nikolaos_Loukas.pdf                 1.95 MB   Adobe PDF   View/Open


All items on this site are protected by copyright.