Title: Using Artificial Neural Networks for Zero-shot Learning
Authors: Chochlakis, Georgios
Potamianos, Alexandros
Keywords: Artificial Intelligence
Machine Learning
Deep Learning
Zero-shot Learning
Few-shot Learning
Support Set
Generative Networks
Issue Date: 18-Nov-2020
Abstract: In this diploma thesis, we are concerned with tasks in the domain of Artificial Intelligence. We use Machine Learning and, in particular, Artificial Neural Networks, to address Zero-shot Learning: the task of evaluating models on classification problems in which the test patterns belong to no category seen during training, and no supporting examples of these novel categories are provided. A closely related task is Few-shot Learning, in which a small support set of samples from the test categories is provided so that an algorithm can adjust its parameters or extract the necessary knowledge.

Contemporary approaches to Zero-shot Learning are based on Generative Networks. The basic algorithm is as follows: first, a Generative Network is trained on the samples and auxiliary descriptions provided for training; then, the trained network generates synthetic examples of the categories to be classified at test time, using their respective descriptions; finally, a simple classifier is trained on these synthetic samples.

In this work, we propose a novel framework for Zero-shot Learning that augments existing algorithms of this form by including the classifier used at test time in the training of the Generative Network, exploiting the classifier's classification loss as a training signal for the generator. Such a classifier must not depend on the Generative Network's samples for its own training and must be flexible with respect to its label space. These properties are also essential in Few-shot Learning, so we leverage a Few-shot learning algorithm for this role. During both training and testing, samples generated by the Generative Network are treated as the classifier's support set, based on which it classifies real samples. We empirically observe gains in performance compared to simple Zero-shot Learning algorithms.
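The generate-then-classify pipeline described above can be sketched in a few lines. The sketch below is a toy illustration, not the thesis implementation: the "generator" is a hypothetical linear map from a class description to feature space plus Gaussian noise, and the Few-shot-style classifier is a nearest-prototype rule (as in Prototypical Networks) whose support set consists entirely of generated samples, mirroring the proposed framework's use of synthetic support sets for unseen classes. All names (`generate_support`, `classify`, the class descriptions) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_support(W, description, n=20, noise=0.05):
    # Toy stand-in for a trained Generative Network: map the class
    # description into feature space and add Gaussian noise to get
    # n synthetic feature vectors for an unseen class.
    base = description @ W                      # (feat_dim,)
    return base + noise * rng.standard_normal((n, base.shape[0]))

def classify(query, support):
    # Few-shot-style classifier: compute one prototype per class from
    # the (generated) support set, then assign the query to the class
    # with the nearest prototype.
    protos = {c: s.mean(axis=0) for c, s in support.items()}
    return min(protos, key=lambda c: np.linalg.norm(query - protos[c]))

feat_dim = 4
W = np.eye(feat_dim)                            # toy generator weights
descriptions = {
    "zebra": np.array([1.0, 0.0, 0.0, 0.0]),    # hypothetical class descriptions
    "whale": np.array([0.0, 0.0, 0.0, 1.0]),
}

# Step 2 of the basic algorithm: synthesize examples of unseen classes.
support = {c: generate_support(W, d) for c, d in descriptions.items()}

# Step 3: classify a real test feature against the synthetic support set.
query = np.array([0.9, 0.05, 0.0, 0.05])
print(classify(query, support))                 # -> zebra
```

In the proposed framework, the classifier above would additionally be evaluated during generator training, with its classification loss backpropagated into the Generative Network; since the prototype classifier has no trainable parameters tied to a fixed label space, it satisfies the flexibility requirement stated in the abstract.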
In addition, given that some of the augmented algorithms had already achieved state-of-the-art performance on Zero-shot Learning benchmarks, our framework attains state-of-the-art results in several cases. We also show that using the Few-shot learner only during training or only during testing still improves the accuracy of the Zero-shot learner, and that the benefits of the two settings appear to be additive.
Appears in Collections: Diploma Theses (Διπλωματικές Εργασίες)

Files in This Item:
File: el15133_ntua_undergrad_thesis.pdf
Size: 2.72 MB
Format: Adobe PDF

Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.