Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18940
Title: Knowledge Graph Based Explanation and Evaluation of Machine Learning Systems
Authors: Ντερβάκος, Έντμοντ-Γρηγόρης
Στάμου, Γιώργος
Keywords: Knowledge Graphs
Γράφοι Γνώσης
Explainability
Εξηγησιμότητα
Interpretability
Ερμηνευσιμότητα
Evaluation
Αξιολόγηση
Issue Date: 8-Nov-2023
Abstract: Artificial intelligence (AI) has progressed explosively in recent years. Driven by the advent of deep learning, AI is being used in a variety of applications, across multiple scientific fields, in industry as well as in the arts. Despite spectacular results, various ethical issues have arisen that prevent the utilization of deep learning in applications that critically affect people’s lives, such as applications in medicine. The main source of these ethical issues is the opacity of deep learning, as the models generally do not provide explanations for the decisions they make, and their use presupposes users’ trust. This lack of transparency also gives rise to additional problems hindering the development of AI systems, such as the difficulty of detecting, and consequently fixing, bugs and mistakes in deep learning-based systems. These issues have led to the emergence of the field of eXplainable AI (XAI), which is the overarching context of this dissertation. This field of research has produced a wide range of approaches, with different algorithms that produce different types of explanations, in different theoretical contexts and for different types of data. In this dissertation we explored systems and technologies of formal knowledge representation as a tool for explaining the operation of opaque deep learning systems. Specifically, we developed a theoretical framework and algorithms for explaining such systems based on semantic descriptions of data, expressed using specific terminology that is described in underlying ontological knowledge. The proposed framework is domain- and model-agnostic, and was applied to image classification, symbolic music generation and classification, and audio classification systems.
It was compared with existing explainability and evaluation methods, and emerged as a promising approach that can provide high-level information to users that other approaches cannot, thanks to the grounding of the explanations in structured, formally represented knowledge. Our novel idea of utilizing knowledge graphs for explainability in this way opens new paths for researching hybrid AI systems that combine low-level sub-symbolic information, as in deep learning systems, with high-level symbolic information that is structured and more understandable to humans, such as knowledge graphs.
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18940
Appears in Collections: Διδακτορικές Διατριβές - Ph.D. Theses

Files in This Item:
File: Eddie-thesis-artemis.pdf
Size: 11.07 MB
Format: Adobe PDF


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.