|Title:||Camouflaged Object Detection and Segmentation|
|Keywords:||Camouflaged Object, Image Segmentation, Transformer Encoder, Convolutional Encoder, Fuse Features, Self-Attention|
|Abstract:||Camouflaged object detection and segmentation is a branch of computer vision that aims to find objects that are difficult for the human eye to detect. This task is the opposite of salient object detection, where the regions to be detected are distinct, easily recognizable, and clearly delineated from the background. Camouflaged objects, by contrast, exhibit a high similarity to their surroundings, which greatly increases the difficulty of the task. In this thesis we introduce a new architecture that combines the powerful capability of Transformer encoders to extract global features with the established strength of convolutional encoders in capturing local features. When detecting a camouflaged object, it is particularly difficult to recover the fine details near its edges. Motivated by this difficulty, we introduce a novel method for fusing the features extracted by the two encoders, producing representations that are rich both in fine detail and in semantic content. We evaluate our model on common benchmark datasets with standard evaluation metrics and present our findings. The results are quite encouraging: our model achieves excellent performance and compares favorably with the state of the art in the literature. Finally, we study how the performance of the model changes as we modify either the algorithm or its parameters, and we point out possible applications of the model in medicine and other fields.|
|Appears in Collections:||Διπλωματικές Εργασίες (Diploma Theses) - Theses|
Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.