Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/17793
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chochlakis, Georgios | -
dc.date.accessioned | 2020-11-20T21:08:25Z | -
dc.date.available | 2020-11-20T21:08:25Z | -
dc.date.issued | 2020-11-18 | -
dc.identifier.uri | http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/17793 | -
dc.description.abstract | In this diploma thesis, we are concerned with tasks in the domain of Artificial Intelligence. We use Machine Learning and, in particular, Artificial Neural Networks, to address Zero-shot Learning: the task of evaluating models on classification problems whose patterns belong to categories never seen during training, with no supporting examples of these novel categories provided. A closely related task is Few-shot Learning, in which a small support set of samples from the test categories is provided so that an algorithm can adjust its parameters or extract the necessary knowledge. Contemporary approaches to Zero-shot Learning are based on Generative Networks. The basic algorithm is as follows: first, a Generative Network is trained on the samples and auxiliary descriptions provided for training. Then, using the descriptions of the categories to be classified at test time, the trained network generates synthetic examples of those categories. Lastly, a simple classifier is trained on these synthetic samples. In this work, we propose a novel framework for Zero-shot Learning that augments existing algorithms of this form by including the test-time classifier in the training of the Generative Network, exploiting the classifier's classification loss as a training signal for the generator. Such a classifier must not depend on the Generative Network's samples for its own training and must be flexible w.r.t. its label space. These properties are also essential in Few-shot Learning, so we leverage a Few-shot Learning algorithm for this role. During both training and testing, samples generated by the Generative Network are treated as the classifier's support set, based on which it classifies real samples. We empirically observe performance gains over the plain Zero-shot Learning algorithms. Moreover, since some of these algorithms previously achieved state-of-the-art performance on Zero-shot Learning benchmarks, our framework now attains state-of-the-art results in several cases. We also show that using the Few-shot learner only during training or only during testing still improves the accuracy of the Zero-shot learner, and that the benefits of the two uses appear to be additive. | en_US
dc.language | en | en_US
dc.subject | Τεχνητή Νοημοσύνη (Artificial Intelligence) | en_US
dc.subject | Μηχανική Μάθηση (Machine Learning) | en_US
dc.subject | Βαθιά Μάθηση (Deep Learning) | en_US
dc.subject | Μηδενική Υποστήριξη Δεδομένων (Zero-shot Learning) | en_US
dc.subject | Σύνολο Υποστήριξης (Support Set) | en_US
dc.subject | Παραγωγικά Δίκτυα (Generative Networks) | en_US
dc.subject | Artificial Intelligence | en_US
dc.subject | Machine Learning | en_US
dc.subject | Deep Learning | en_US
dc.subject | Zero-shot Learning | en_US
dc.subject | Few-shot Learning | en_US
dc.subject | Generative Networks | en_US
dc.title | Using Artificial Neural Networks for Zero-shot Learning | en_US
dc.description.pages | 95 | en_US
dc.contributor.supervisor | Ποταμιάνος Αλέξανδρος (Potamianos, Alexandros) | en_US
dc.department | Τομέας Σημάτων, Ελέγχου και Ρομποτικής (Division of Signals, Control and Robotics) | en_US
Appears in Collections:Διπλωματικές Εργασίες - Theses
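The three-step pipeline described in the abstract (train a generator on seen classes, synthesize features for unseen classes from their descriptions, classify real samples against the synthetic support set) can be illustrated with a minimal NumPy toy. The class names, dimensions, and the linear stand-in for the Generative Network are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: classes are described by 4-d attribute vectors,
# and real features are a (noisy) linear function of those attributes.
attrs = {c: rng.normal(size=4) for c in ["cat", "dog", "bird"]}
true_W = rng.normal(size=(4, 2))               # unknown attribute-to-feature map

def real_features(c, n=20):
    """Real samples of class c (the model never sees 'bird' during training)."""
    return attrs[c] @ true_W + 0.1 * rng.normal(size=(n, 2))

# Step 1 -- "Generative Network": here just a least-squares linear map fit on
# the seen classes' descriptions and mean features.
seen = ["cat", "dog"]
A = np.vstack([attrs[c] for c in seen])                   # (2, 4) descriptions
X = np.vstack([real_features(c).mean(0) for c in seen])   # (2, 2) mean features
W_hat, *_ = np.linalg.lstsq(A, X, rcond=None)             # (4, 2) fitted map

def generate(c, n=20):
    """Step 2 -- synthesize features for a class from its description alone."""
    return attrs[c] @ W_hat + 0.05 * rng.normal(size=(n, 2))

# Step 3 -- Few-shot-style classifier: generated samples act as the support
# set, and real test samples are assigned to the nearest class prototype.
# (The thesis's framework would additionally feed this classifier's loss back
# into the generator's training; that feedback loop is omitted in this sketch.)
test_classes = ["cat", "dog", "bird"]          # "bird" is a zero-shot class
protos = {c: generate(c).mean(0) for c in test_classes}

def classify(x):
    return min(protos, key=lambda c: np.linalg.norm(x - protos[c]))
```

The "generator" here is deliberately trivial; the point is the data flow the abstract describes: descriptions → synthetic features → support set → nearest-prototype classification of real samples.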

Files in This Item:
File | Size | Format
el15133_ntua_undergrad_thesis.pdf | 2.72 MB | Adobe PDF


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.