Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18710
Full metadata record
DC Field | Value | Language
dc.contributor.author | Karavangelis, Athanasios | -
dc.date.accessioned | 2023-07-03T12:47:44Z | -
dc.date.available | 2023-07-03T12:47:44Z | -
dc.date.issued | 2023-07-03 | -
dc.identifier.uri | http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18710 | -
dc.description.abstract | Amidst the exponential growth of machine learning (ML) and its profound impact on critical domains, the interpretability of models is paramount. Explainable AI (XAI), which has seen rapid progress in recent years, bridges this model-human gap by adding transparency to machine learning processes. In this work, we focus on counterfactual explanations, a method that provides insight into the decision-making process of machine learning models by exploring alternative scenarios and hypothetical transformations. Specifically, we are concerned with the generation of textual counterfactual explanations and the evaluation of counterfactual editors, which leverage natural language processing (NLP) models and tasks to generate perturbations of text sentences. Our approach involves experimenting with multiple counterfactual editors, models, and generation methods from the recent literature in order to understand their inner mechanisms and make their decisions comprehensible. To this end, we present a counterfactual editing system that generates counterfactual, contrastive edits by combining counterfactual editors with a predictor and then selecting the most minimal edit that flips the predictor's original prediction. Moreover, we employ counterfactual generation methods used in current academic publications and introduce a novel method that uses part-of-speech tags to constrain the generation of counterfactual edits. We also explore multiple evaluation techniques and metrics that allow us to draw conclusions covering numerous aspects of counterfactual generation. In summary, our experiments have yielded valuable insights: we unveil hidden characteristics and patterns of counterfactual editors, explain their results, and explore various aspects of counterfactual generation. Our experiments also showcase performance enhancements in counterfactual generation methods through a systematic exploration of their structural components and methodologies. Therefore, the contributions of this thesis, namely the utilization and introduction of novel methods in the field of counterfactual generation and a comprehensive analysis of the evaluation of counterfactual editors, prove to be a promising avenue for future research. | en_US
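The editor-predictor pipeline described in the abstract (generate candidate edits, keep those that flip the predictor's label, select the most minimal one) can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the token-overlap minimality measure, the `select_minimal_flip` helper, and the toy keyword predictor are all illustrative assumptions standing in for a real counterfactual editor and NLP classifier.

```python
from difflib import SequenceMatcher

def edit_fraction(original, candidate):
    """Fraction of tokens changed between two texts (0.0 = identical).
    A stand-in for whatever minimality metric a real editor would use."""
    a, b = original.split(), candidate.split()
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def select_minimal_flip(original, candidates, predict):
    """Among candidate edits, keep those that flip the predictor's label
    on the original text, then return the one changing the fewest tokens.
    Returns None if no candidate flips the prediction."""
    base_label = predict(original)
    flips = [c for c in candidates if predict(c) != base_label]
    if not flips:
        return None
    return min(flips, key=lambda c: edit_fraction(original, c))

# Toy sentiment predictor (a real system would use a trained classifier).
def toy_predict(text):
    return "pos" if "great" in text or "good" in text else "neg"

original = "the movie was great"
candidates = [
    "the movie was terrible",         # flips label, 1 token changed
    "the film was awful and boring",  # flips label, many tokens changed
    "the movie was good",             # does not flip the label
]
print(select_minimal_flip(original, candidates, toy_predict))
# → the movie was terrible
```

In practice the candidate list would come from a counterfactual editor (e.g. a masked-language-model fill-in), and the minimality metric and predictor would be task-specific; only the flip-then-minimize selection logic is the point here.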
dc.language | en | en_US
dc.subject | Explainable AI | en_US
dc.subject | Counterfactual Explanations | en_US
dc.subject | Text Counterfactuals | en_US
dc.subject | Machine Learning Models | en_US
dc.subject | Text Generation | en_US
dc.subject | Multi-metrics Evaluation | en_US
dc.title | Exploring Text Counterfactual Explanations: A Multi-Metric Evaluation Approach for Counterfactual Editors | en_US
dc.description.pages | 150 | en_US
dc.contributor.supervisor | Στάμου Γιώργος | en_US
dc.department | Division of Information Technology and Computers | en_US
Appears in Collections: Diploma Theses

Files in This Item:
File | Description | Size | Format
Diploma_Thesis_Counterfactuals_Karavangelis_Athanasios.pdf | | 6.02 MB | Adobe PDF


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.