Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18462
Title: Emotion Recognition in Conversation Using Prompt-Based Learning
Authors: Γκότση, Πολυτίμη-Άννα
Ποταμιάνος Αλέξανδρος
Keywords: Machine Learning
Deep Learning
Natural Language Processing
Emotion Recognition in Conversation
Prompt-Based Learning
Transformers
Pre-trained Language Models
BERT
Information-Specific Prompts
Issue Date: 27-Sep-2022
Abstract: With the vast amount of conversational data made publicly available daily on platforms such as Twitter, Reddit, and Facebook, automatic analysis of this data for mining opinions and understanding human behavior is in high demand. Additionally, a full grasp of the emotional state of the human interlocutor is necessary for the development of empathetic agents that can interact with humans in a manner natural to the latter. These and other potential applications have led to increasing research interest in the task of Emotion Recognition in Conversation (ERC), which aims to determine the emotion expressed by each utterance in a given dialogue. Many previous approaches in the field utilize pre-trained language models such as BERT and RoBERTa as part of their proposed architecture, adapting them to the specific task through traditional fine-tuning. However, despite its effectiveness, fine-tuning is very expensive in terms of computational and storage resources, and it can often lead to overfitting. An alternative, more lightweight method for adapting pre-trained language models to downstream tasks, proposed in recent years to mitigate these issues, is prompt-based learning, which keeps the language model's pre-trained parameters frozen and adds a small set of new parameters, called a prompt, at the model's input level. Nevertheless, because the method is still very new, limited work is available, especially on optimizing it for a specific task. In our work, we study prompt-based learning as an adaptation method for the task of ERC. We follow two approaches and perform extensive experiments on both. In our first approach, we study the applicability of prompt-based learning in comparison to fine-tuning and set a baseline for prompt-based learning for Emotion Recognition in Conversation.
We experiment with a simple baseline model as well as models utilizing popular methods previously used in related work for integrating speaker-specific information. We conclude that prompt-based learning can indeed contribute to the adaptation of our pre-trained language model, even yielding performance comparable to fine-tuning on one of the two datasets we experiment on, with its performance depending on the dataset, prompt size, architecture, and training method. In our second approach, we propose a method for integrating additional information useful for recognizing emotion in a conversation directly through the prompts, without further changes to the pre-trained language model's architecture and input. We experiment with adding speaker-specific and topic-specific information and observe an increase in performance in many cases compared to our baseline. Our method can easily be extended to other types of information beyond speaker identity and topic, following the same logic.
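The core mechanism the abstract describes — keeping the pre-trained model's parameters frozen and training only a small prompt prepended at the input level — can be illustrated with a minimal sketch. This is not the thesis's actual architecture: the tiny transformer encoder below is a stand-in for BERT, and all sizes, names, and the mean-pooled classification head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PromptTunedClassifier(nn.Module):
    """Soft prompt tuning sketch: frozen encoder, trainable prompt."""

    def __init__(self, vocab_size=1000, hidden=64, prompt_len=8, n_emotions=6):
        super().__init__()
        # Stand-in for a pre-trained language model such as BERT.
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Freeze all "pre-trained" parameters.
        for p in list(self.embed.parameters()) + list(self.encoder.parameters()):
            p.requires_grad = False
        # The new parameters added at the input level: the soft prompt.
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)
        # Small classification head mapping to emotion labels.
        self.head = nn.Linear(hidden, n_emotions)

    def forward(self, token_ids):
        b = token_ids.size(0)
        x = self.embed(token_ids)                        # (b, seq, hidden)
        prompt = self.prompt.unsqueeze(0).expand(b, -1, -1)
        x = torch.cat([prompt, x], dim=1)                # prepend the prompt
        h = self.encoder(x)
        return self.head(h.mean(dim=1))                  # emotion logits

model = PromptTunedClassifier()
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
```

During training, the optimizer receives only the prompt (and here the small head), so storage and compute costs are a fraction of full fine-tuning — the property the abstract highlights as the method's main advantage.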
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18462
Appears in Collections: Diploma Theses (Διπλωματικές Εργασίες)

Files in This Item:
File: Polytimi-Anna_Gkotsi_thesis.pdf
Size: 2.1 MB
Format: Adobe PDF


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.