Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18457
Title: Semantic Segmentation with Deep Convolutional Neural Networks
Authors: Αλεξανδρόπουλος, Σταμάτης
Μαραγκός, Πέτρος
Keywords: Computer Vision
Semantic Segmentation
Deep Convolutional Neural Networks
Autonomous Driving
HRNet
Seed Pixels
Issue Date: 10-Oct-2022
Abstract: Semantic segmentation is one of the fundamental topics of computer vision. Specifically, it is the process of assigning a category to each pixel in an image. There are a number of applications in a variety of fields, such as Autonomous Driving, Robotics, and Medical Image Processing, where pixel-level labeling is critical. Deep Convolutional Neural Networks (DCNNs) have recently demonstrated state-of-the-art performance in high-level recognition tasks. As a result, such models can now be used in the above-mentioned cutting-edge applications. Most related works concentrate on architectural changes to the networks used, in order to better combine global context aggregation with local detail preservation, and rely on a simple loss computed on individual pixels. Designing more complex losses that account for the structure contained in semantic labelings has received substantially less attention. The goal of this thesis is to investigate such priors for semantic segmentation and to use them in the supervision of state-of-the-art networks, so as to obtain results that better reflect the regularity of genuine segmentations. Based on knowledge about the high regularity of real scenes, we propose a method for improving class predictions by learning to selectively exploit information from coplanar pixels. In particular, we introduce a prior asserting that for each pixel there exists a seed pixel which shares the same prediction with it. Accordingly, we design a network with two heads. The first head generates pixel-level class predictions, whereas the second generates a dense offset vector field that identifies seed pixel positions. The class predictions of the seed pixels are then used to predict classes at each point. To account for possible deviations from precise local planarity, the resulting prediction is adaptively fused with the initial prediction from the first head using a learnt confidence map.
The entire architecture is implemented on top of HRNetV2, a state-of-the-art model on the Cityscapes dataset. The offset vector-based HRNetV2 was trained on both the Cityscapes and ACDC datasets. We assess our method through extensive qualitative and quantitative experiments and ablation studies, and compare it with recent state-of-the-art methods, demonstrating its superiority and advantages. In summary, we achieve better results than the initial model.
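The seed-pixel fusion step described in the abstract can be sketched as follows. This is a minimal, illustrative NumPy implementation, not the thesis code: the function name `fuse_with_seed_pixels`, the nearest-neighbour rounding of offsets, and the array layout are all assumptions made for clarity. It shows how per-pixel class scores from the first head, the offset field from the second head, and the learnt confidence map combine into the final prediction.

```python
import numpy as np

def fuse_with_seed_pixels(logits, offsets, confidence):
    """Fuse each pixel's class scores with those of its seed pixel.

    Illustrative sketch (not the thesis implementation):
      logits:     (H, W, C) initial class scores from the first head
      offsets:    (H, W, 2) dense offset field (dy, dx) pointing from each
                  pixel to its seed pixel (second head's output)
      confidence: (H, W)    learnt fusion weight in [0, 1]
    """
    H, W, _ = logits.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Seed coordinates: round offsets to the nearest pixel and clip to the
    # image bounds (the thesis may instead use bilinear sampling).
    sy = np.clip(ys + np.round(offsets[..., 0]).astype(int), 0, H - 1)
    sx = np.clip(xs + np.round(offsets[..., 1]).astype(int), 0, W - 1)
    seed_logits = logits[sy, sx]        # (H, W, C) scores at seed positions
    a = confidence[..., None]           # broadcast the weight over classes
    # Adaptive fusion: trust the seed prediction where confidence is high,
    # fall back to the initial prediction where it is low.
    return a * seed_logits + (1.0 - a) * logits
```

With confidence equal to 1 everywhere the output is exactly the seed pixels' scores; with confidence 0 the initial prediction is returned unchanged, which is how the model can handle deviations from local planarity.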
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18457
Appears in Collections:Διπλωματικές Εργασίες - Theses

Files in This Item:
File: Diploma_Thesis_Alexandropoulos_03117060.pdf
Size: 17.58 MB
Format: Adobe PDF
Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.