Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18063
Full metadata record
dc.contributor.author: Loukas, Nikolaos
dc.date.accessioned: 2021-08-30T12:08:04Z
dc.date.available: 2021-08-30T12:08:04Z
dc.date.issued: 2021-07-26
dc.identifier.uri: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18063
dc.description.abstract: In this diploma thesis, we are concerned with tasks in the domains of Cognitive Science and Natural Language Processing (NLP). We investigate natural language representations in the human brain and compare them with traditional machine learning representations. Following guidelines from the literature, we develop a pipeline that uses machine learning techniques to extract neural representations from fMRI datasets; we evaluate our work on downstream tasks and provide comparative tables.

We first utilize a well-known fMRI dataset to map traditional word embeddings to cognitive representations. We present a neural activation model, fit with ridge regression directly from GloVe embeddings instead of through the intermediate semantic feature model proposed in the literature, that uses a set of words with available fMRI measurements to find a mapping between word semantics and localized neural activations. We then compare several variations of this encoding model with traditional word embeddings on a similarity task and conclude that its overall performance is not affected by the choice between the semantic and the GloVe space.

Thereafter, we investigate how cognitive embeddings affect a language model's representations. We incorporate cognitive embeddings into language models by adding them as queries in the attention layer, in order to induce the cognitive bias of these embeddings into the training process. After finding that the models' ability to predict brain recordings improves, we test their performance on NLP tasks. Our results indicate that even the complex BERT architecture is negatively affected by the noisy neural representations. Although our experimental setup is promising, it cannot fully exploit the potential of cognitive embeddings. [en_US]

(Both models are illustrated with short code sketches following this record.)
dc.language: en [en_US]
dc.subject: Machine Learning [en_US]
dc.subject: Deep Learning [en_US]
dc.subject: Neural Networks [en_US]
dc.subject: Brain Representations [en_US]
dc.subject: Natural Language Processing [en_US]
dc.subject: fMRI [en_US]
dc.title: Extending self-supervised natural language models through human brain [en_US]
dc.description.pages: 96 [en_US]
dc.contributor.supervisor: Ποταμιάνος Αλέξανδρος (Potamianos, Alexandros) [en_US]
dc.department: Division of Signals, Control and Robotics [en_US]
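
For concreteness, here is a minimal sketch of the encoding model the abstract describes: ridge regression mapping GloVe word embeddings directly to fMRI voxel activations. It assumes scikit-learn; the array names, sizes, and random stand-in data are illustrative assumptions, and the thesis pipeline itself may differ in preprocessing, voxel selection, and hyperparameter tuning.

# Sketch: ridge-regression encoding model from word embeddings to voxels.
# Data here is random stand-in; real inputs would be GloVe vectors and
# measured fMRI responses for a set of words.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_words, emb_dim, n_voxels = 60, 300, 5000        # hypothetical sizes
rng = np.random.default_rng(0)
glove_vectors = rng.standard_normal((n_words, emb_dim))      # stand-in GloVe embeddings
fmri_activations = rng.standard_normal((n_words, n_voxels))  # stand-in voxel responses

X_train, X_test, y_train, y_test = train_test_split(
    glove_vectors, fmri_activations, test_size=0.2, random_state=0)

# One ridge model predicts all voxels jointly; alpha would normally be
# chosen by cross-validation rather than fixed.
model = Ridge(alpha=1.0)
model.fit(X_train, y_train)
predicted = model.predict(X_test)  # predicted activations for held-out words
print(predicted.shape)             # (n_test_words, n_voxels)

Evaluating the predicted activations against the measured ones for held-out words (e.g., with a pairwise matching test) is how such encoding models are typically scored.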
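The second idea, adding cognitive embeddings "as queries in the attention layer," can be sketched roughly as below, assuming PyTorch. The module, its names, and the single-head formulation are hypothetical; the thesis's actual BERT modification is not specified here and may differ.

# Sketch: bias the attention queries with per-token cognitive embeddings.
import math
import torch
import torch.nn as nn

class CognitiveAttention(nn.Module):
    def __init__(self, hidden_dim, cog_dim):
        super().__init__()
        self.q_proj = nn.Linear(hidden_dim, hidden_dim)
        self.k_proj = nn.Linear(hidden_dim, hidden_dim)
        self.v_proj = nn.Linear(hidden_dim, hidden_dim)
        # Project cognitive embeddings into the query space.
        self.cog_proj = nn.Linear(cog_dim, hidden_dim)

    def forward(self, hidden, cog_emb):
        # hidden:  (batch, seq_len, hidden_dim) token representations
        # cog_emb: (batch, seq_len, cog_dim) per-token cognitive embeddings
        q = self.q_proj(hidden) + self.cog_proj(cog_emb)  # inject cognitive bias
        k, v = self.k_proj(hidden), self.v_proj(hidden)
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        return torch.softmax(scores, dim=-1) @ v

attn = CognitiveAttention(hidden_dim=768, cog_dim=64)
out = attn(torch.randn(2, 10, 768), torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 768])

Because the cognitive term enters only the queries, keys and values still come from the token representations alone; the cognitive signal reshapes where each token attends rather than what it attends to.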
Appears in Collections: Diploma Theses

Files in This Item:
File: Thesis_ECE_NTUA_Nikolaos_Loukas.pdf (1.95 MB, Adobe PDF)


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.