Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18000
Title: Deep Learning Based Sign Language Recognition
Authors: Παρέλλη, Μαρία
Μαραγκός, Πέτρος
Keywords: Sign Language
Graphs
3D Pose
ST-GCN
3D Mesh
CNN
Issue Date: 24-Jun-2021
Abstract: Sign Language constitutes the primary means of communication for the deaf and hard-of-hearing. Sign Language Recognition is a complex task which lies at the intersection of computer vision and language modeling. Manual and non-manual cues such as facial expression, hand shape, and body orientation occur in parallel and play a meaningful role in the articulation of a sign. In this thesis, we study this problem extensively by leveraging recent deep learning approaches. In the first part, we focus on 3D hand and body pose estimation and report quantitative and qualitative results. In the second part, we explore the task of continuous sign language recognition and how expressive 3D skeletal representations and parameterizations of the human body can be exploited in conjunction with graph convolutions to solve the task effectively. We also compare our results with successful architectures such as transformers and LSTM attention encoder-decoders, and we report competitive performance on the Phoenix 2014-T dataset.
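Note: For readers unfamiliar with graph convolutions over skeleton sequences, the sketch below illustrates the general idea behind an ST-GCN-style block (a spatial graph convolution over skeleton joints followed by a temporal convolution over frames). It is a minimal illustrative example, not the thesis implementation; the joint count, channel sizes, and identity adjacency matrix are placeholder assumptions.

import torch
import torch.nn as nn


class STGCNBlock(nn.Module):
    """Toy spatio-temporal graph convolution block (illustrative only)."""

    def __init__(self, in_channels, out_channels, num_joints, kernel_t=9):
        super().__init__()
        # Normalized adjacency of the skeleton graph; identity is a placeholder
        # standing in for the real joint-connectivity matrix.
        self.register_buffer("A", torch.eye(num_joints))
        # 1x1 convolution carries the learnable weights of the spatial graph convolution.
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        # Temporal convolution aggregates information across neighboring frames.
        self.temporal = nn.Conv2d(
            out_channels, out_channels,
            kernel_size=(kernel_t, 1), padding=(kernel_t // 2, 0),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        x = self.spatial(x)
        # Mix features of neighboring joints according to the adjacency matrix.
        x = torch.einsum("nctv,vw->nctw", x, self.A)
        x = self.relu(x)
        x = self.temporal(x)
        return self.relu(x)


if __name__ == "__main__":
    # Example: 3D joint coordinates (x, y, z) for 27 joints over 100 frames.
    block = STGCNBlock(in_channels=3, out_channels=64, num_joints=27)
    poses = torch.randn(2, 3, 100, 27)   # (batch, coords, frames, joints)
    print(block(poses).shape)            # torch.Size([2, 64, 100, 27])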
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18000
Appears in Collections: Διπλωματικές Εργασίες - Theses

Files in This Item:
File: Thesis_parelli.pdf (9.32 MB, Adobe PDF)


Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.