Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18063
Title: Extending self-supervised natural language models through human brain
Authors: Loukas, Nikolaos
Potamianos, Alexandros
Keywords: Machine Learning
Deep Learning
Neural Networks
Brain Representations
Natural Language Processing
fMRI
Issue Date: 26-Jul-2021
Abstract: In this diploma thesis, we are concerned with tasks in the domains of Cognitive Science and Natural Language Processing (NLP). We investigate natural language representations in the human brain and compare them with traditional machine learning representations. We develop a pipeline for extracting neural representations from fMRI datasets using machine learning techniques, following guidelines from the literature, evaluate our work on downstream tasks, and provide comparative tables.

We first use a well-known fMRI dataset to map traditional word embeddings to cognitive representations. We present a neural activation model that fits a ridge regression directly from GloVe embeddings, instead of the intermediate semantic feature model proposed in the literature; it uses a set of words with available fMRI measurements to learn a mapping between word semantics and localized neural activations. We then compare several variations of this encoding model with traditional word embeddings on a similarity task and conclude that its overall performance is not affected by the choice between the semantic and the GloVe space.

Thereafter, we investigate how cognitive embeddings affect a language model's representations. We incorporate cognitive embeddings into language models by adding them as queries in the attention layer, in order to induce the cognitive bias of these embeddings into the training process. After finding that the models' ability to predict brain recordings improves, we test their performance on NLP tasks. Our results indicate that even the complex BERT architecture is negatively affected by the noisy neural representations: although our experimental setup is promising, it cannot yet fully exploit the potential of cognitive embeddings.
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18063
Appears in Collections: Diploma Theses

Files in This Item:
File: Thesis_ECE_NTUA_Nikolaos_Loukas.pdf
Size: 1.95 MB
Format: Adobe PDF


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.