Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18940
Title: Knowledge Graph Based Explanation and Evaluation of Machine Learning Systems
Authors: Ντερβάκος, Έντμοντ-Γρηγόρης
Στάμου Γιώργος
Keywords: Knowledge Graphs
Explainability
Interpretability
Evaluation
Issue Date: 8-Nov-2023
Abstract: Artificial intelligence (AI) has progressed explosively in recent years. Driven by the advent of deep learning, AI is being used in a variety of applications across multiple scientific fields, in industry as well as in the arts. Despite spectacular results, various ethical issues have arisen that prevent the use of deep learning in applications that critically affect people’s lives, such as applications in medicine. The main source of these ethical issues is the opacity of deep learning: the models generally do not provide explanations for the decisions they make, so their use presupposes users’ trust. This lack of transparency also gives rise to further problems that hinder the development of AI systems, such as the difficulty of detecting, and consequently fixing, bugs and mistakes in deep-learning-based systems. These issues have led to the emergence of the field of eXplainable AI (XAI), which is the overarching context of this dissertation. This field of research has produced a wide range of approaches, with different algorithms that produce different types of explanations, in different theoretical contexts and for different types of data. In this dissertation we explored systems and technologies of formal knowledge representation as a tool for explaining the operation of opaque deep learning systems. Specifically, we developed a theoretical framework and algorithms for explaining such systems based on semantic descriptions of data, expressed using specific terminology that is described in underlying ontological knowledge. The proposed framework is domain and model agnostic, and was applied to image classification, symbolic music generation and classification, and audio classification systems. It was compared with existing explainability and evaluation methods and emerged as a promising approach that can provide users with high-level information that other approaches cannot, thanks to the grounding of the explanations in structured, formally represented knowledge. Our novel idea of utilizing knowledge graphs for explainability in this way opens new paths for research into hybrid AI systems that combine low-level sub-symbolic information, as processed by deep learning systems, with high-level symbolic information that is structured and more understandable to humans, such as knowledge graphs.
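To make the idea in the abstract concrete, the snippet below is a minimal, hypothetical sketch (in Python) of concept-based explanation over semantic descriptions: each data item is assumed to already carry a set of concepts drawn from an underlying ontology, and concepts are ranked by how strongly they co-occur with a class predicted by an opaque model. The names (`item_concepts`, `predictions`, `concept_explanation`), the toy data, and the simple support-minus-contrast score are illustrative assumptions and do not reproduce the dissertation's actual algorithms.

```python
# Hypothetical sketch: explaining an opaque classifier via semantic
# descriptions of its inputs. All names and data below are illustrative
# assumptions, not the dissertation's actual framework or algorithms.

# Semantic descriptions of data items, expressed with terms assumed to come
# from an underlying ontology (here flattened to simple concept sets).
item_concepts = {
    "img_01": {"Stethoscope", "Person", "WhiteCoat"},
    "img_02": {"Stethoscope", "WhiteCoat"},
    "img_03": {"Guitar", "Person"},
    "img_04": {"Guitar", "Stage"},
}

# Predictions of the opaque model we want to explain (assumed given).
predictions = {
    "img_01": "doctor",
    "img_02": "doctor",
    "img_03": "musician",
    "img_04": "musician",
}


def concept_explanation(target_class, item_concepts, predictions):
    """Rank concepts by how strongly they co-occur with a predicted class."""
    in_class = [i for i, c in predictions.items() if c == target_class]
    out_class = [i for i, c in predictions.items() if c != target_class]
    all_concepts = set().union(*item_concepts.values())
    scores = {}
    for concept in all_concepts:
        support = sum(concept in item_concepts[i] for i in in_class) / max(len(in_class), 1)
        contrast = sum(concept in item_concepts[i] for i in out_class) / max(len(out_class), 1)
        # High score: the concept is common among items the model assigns
        # to the target class and rare among the rest.
        scores[concept] = support - contrast
    return sorted(scores.items(), key=lambda kv: -kv[1])


# Example: which concepts characterize items the model labels "doctor"?
print(concept_explanation("doctor", item_concepts, predictions))
# e.g. [('Stethoscope', 1.0), ('WhiteCoat', 1.0), ('Person', 0.0), ...]
```

In the dissertation's setting the semantic descriptions would be part of a knowledge graph and interpreted against the terminology of the underlying ontology, rather than treated as flat tags as in this toy example.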
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18940
Appears in Collections: Doctoral Dissertations - Ph.D. Theses

Files in this item:
File: Eddie-thesis-artemis.pdf (11.07 MB, Adobe PDF)


All items in this site are protected by copyright.