Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18500
Full metadata record
DC Field | Value | Language
dc.contributor.author | Dimitriou, Angeliki | -
dc.date.accessioned | 2022-10-27T13:47:56Z | -
dc.date.available | 2022-10-27T13:47:56Z | -
dc.date.issued | 2022-10-19 | -
dc.identifier.uri | http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18500 | -
dc.description.abstract | Counterfactual explanations provide reasoning in the form of the changes that must be made for a model to reach a different decision. When the model in question is a black-box classifier and the input consists of images, answering how an instance should be minimally modified so as to be classified differently requires finding the most similar image in the other category. A semantically meaningful way to do that, while simultaneously attending to the interactions between depicted objects, is to compare the images' corresponding scene graphs, i.e., graphs that describe the object instances in a scene and how they relate to each other. The problem of graph similarity, or error-tolerant graph matching, has been tackled over the years by measures such as Graph Edit Distance (GED) and methods such as graph kernels. In this thesis, we propose using the recently thriving deep learning models that operate specifically on graph-structured data, called Graph Neural Networks (GNNs). We present a GNN framework that takes graph pairs as input and embeds each counterpart in a space that maps more similar graphs closer together, based on the metric used as a supervision signal during training. We train this model on a small subset of graph pairs labeled with their GED and extract graph embeddings that can be compared to one another using simple metrics such as cosine similarity. Rankings of similar graphs are thus produced for each instance in the dataset, and the best match can be determined. During experimentation, we utilize several different convolutional GNN variants and draw important conclusions about their effectiveness and expressivity. The GNN models are compared to each other and to graph kernel methods, and are evaluated both quantitatively, using an approximate GED algorithm as the ground truth, and qualitatively, by inspecting the corresponding images. Our models outperform the previously used kernel methods in both cases and produce embeddings that are beneficial for creating counterfactual explanations and potentially applicable to many other tasks. | en_US
dc.language | en | en_US
dc.subject | Graph Neural Networks | en_US
dc.subject | Error-tolerant Graph Matching | en_US
dc.subject | Graph Similarity | en_US
dc.subject | Graph Retrieval | en_US
dc.subject | Scene Graphs | en_US
dc.subject | Counterfactual Explanations | en_US
dc.title | Scene Graph Retrieval for Counterfactual Explanations Using Graph Neural Networks | en_US
dc.description.pages | 107 | en_US
dc.contributor.supervisor | Stamou Giorgos | en_US
dc.department | Division of Computer Science | en_US
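The abstract above describes embedding each scene graph with a convolutional GNN and ranking candidates by cosine similarity of the resulting vectors. The following is a minimal illustrative sketch of that idea (not the thesis code): a toy two-layer sum-aggregation graph convolution with randomly initialized, untrained weights, followed by mean-pooling and a cosine comparison. All names (`embed`, `W1`, `W2`) and dimensions are hypothetical; in the thesis the weights would be learned with GED values as the supervision signal.

```python
# Toy GNN embedding + cosine-similarity comparison (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # hypothetical layer-1 weights (4-dim node features)
W2 = rng.normal(size=(8, 8))   # hypothetical layer-2 weights

def embed(adj: np.ndarray, feats: np.ndarray) -> np.ndarray:
    """Two rounds of neighborhood aggregation, then mean-pool to a graph vector."""
    a = adj + np.eye(len(adj))       # add self-loops so each node keeps its own state
    h = np.tanh(a @ feats @ W1)      # layer 1: aggregate neighbors, then transform
    h = np.tanh(a @ h @ W2)          # layer 2
    return h.mean(axis=0)            # graph-level readout (fixed size for any graph)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Two toy "scene graphs": adjacency matrices plus 4-dim node features.
g1 = (np.array([[0, 1], [1, 0]]), rng.normal(size=(2, 4)))
g2 = (np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]), rng.normal(size=(3, 4)))

s = cosine(embed(*g1), embed(*g2))   # similarity score in [-1, 1]
```

With trained weights, scoring one query graph against every graph in the dataset this way yields the similarity rankings the abstract mentions.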
Appears in Collections: Diploma Theses

Files in This Item:
File | Description | Size | Format
scene_graph_retrieval_gnn.pdf | | 26.64 MB | Adobe PDF (View/Open)

Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.