Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18462
Full metadata record
dc.contributor.author: Γκότση, Πολυτίμη-Άννα
dc.date.accessioned: 2022-10-12T13:07:58Z
dc.date.available: 2022-10-12T13:07:58Z
dc.date.issued: 2022-09-27
dc.identifier.uri: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18462
dc.description.abstract: With the vast amount of conversational data made publicly available daily on platforms such as Twitter, Reddit, and Facebook, automatic analysis of this data for mining opinions and understanding human behavior is in high demand. Additionally, a full grasp of the emotional state of the human interlocutor is necessary for developing empathetic agents that can interact with humans in a manner natural to the latter. These and other potential applications have led to growing research interest in Emotion Recognition in Conversation (ERC), the task of determining the emotion expressed by each utterance in a given dialogue. Many previous approaches in the field use pre-trained language models such as BERT and RoBERTa as part of their proposed architecture, adapting them to the task with traditional fine-tuning. Despite its effectiveness, however, fine-tuning is expensive in terms of computational and storage resources, and it can often lead to overfitting. An alternative, more lightweight adaptation method proposed in recent years to mitigate these issues is prompt-based learning, which keeps the language model's pre-trained parameters frozen and adds a small set of new parameters, called a prompt, at the model's input. Because the method is still very new, however, limited work is available, especially on optimizing it for a specific task. In our work, we study prompt-based learning as an adaptation method for the task of ERC. We follow two approaches and perform extensive experiments on both. In our first approach, we study the applicability of prompt-based learning in comparison to fine-tuning and set a baseline for prompt-based learning for Emotion Recognition in Conversation.
We experiment with a simple baseline model as well as models using popular methods from related work for integrating speaker-specific information. We conclude that prompt-based learning can indeed contribute to the adaptation of our pre-trained language model, even yielding performance comparable to fine-tuning on one of the two datasets we experiment on, with its performance depending on the dataset, prompt size, architecture, and training method. In our second approach, we propose a method for integrating additional information, useful for recognizing emotion in a conversation, directly through the prompts, without further changes to the pre-trained language model's architecture and input. We experiment with adding speaker-specific and topic-specific information and, in many cases, observe an increase in performance over our baseline. Our method can easily be extended to other types of information besides speaker identity and topic, following the same logic.
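The abstract's core mechanism, keeping the pre-trained language model's parameters frozen while training only a small prompt added at the model's input, can be illustrated with a minimal sketch. This is a generic soft-prompt-tuning toy in PyTorch, assuming a stand-in Transformer encoder in place of BERT; all sizes, names, and the classification head are illustrative assumptions, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn

class PromptTunedClassifier(nn.Module):
    """Toy sketch of prompt-based learning: a frozen stand-in encoder
    plus a small trainable soft prompt prepended to the input embeddings.
    Hypothetical sizes and names, for illustration only."""
    def __init__(self, vocab_size=100, d_model=32, prompt_len=5, n_emotions=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Freeze all "pre-trained" parameters (embeddings + encoder).
        for p in self.parameters():
            p.requires_grad = False
        # The only new trainable parameters: the soft prompt and a task head.
        self.prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
        self.head = nn.Linear(d_model, n_emotions)

    def forward(self, token_ids):
        x = self.embed(token_ids)                          # (B, T, D)
        p = self.prompt.unsqueeze(0).expand(x.size(0), -1, -1)
        h = self.encoder(torch.cat([p, x], dim=1))         # prepend prompt
        return self.head(h[:, 0])                          # classify from first position

model = PromptTunedClassifier()
trainable = {n for n, p in model.named_parameters() if p.requires_grad}
logits = model(torch.randint(0, 100, (2, 6)))              # batch of 2 utterances
```

Only `prompt`, `head.weight`, and `head.bias` receive gradient updates; the frozen backbone is shared across tasks, which is the storage saving the abstract refers to. Integrating speaker- or topic-specific information through the prompt, as in the thesis's second approach, would amount to selecting or conditioning the prompt parameters on that information, with no change to the backbone.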
dc.language: en
dc.subject: Machine Learning
dc.subject: Deep Learning
dc.subject: Natural Language Processing
dc.subject: Emotion Recognition in Conversation
dc.subject: Prompt-Based Learning
dc.subject: Transformers
dc.subject: Pre-trained Language Models
dc.subject: BERT
dc.subject: Information-Specific Prompts
dc.title: Emotion Recognition in Conversation Using Prompt-Based Learning
dc.description.pages: 151
dc.contributor.supervisor: Ποταμιάνος Αλέξανδρος (Alexandros Potamianos)
dc.department: Τομέας Σημάτων, Ελέγχου και Ρομποτικής (Division of Signals, Control and Robotics)
Appears in Collections:Διπλωματικές Εργασίες - Theses

Files in This Item:
Polytimi-Anna_Gkotsi_thesis.pdf (2.1 MB, Adobe PDF)


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.