Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18000
Full metadata record
DC Field | Value | Language
dc.contributor.author | Παρέλλη, Μαρία | -
dc.date.accessioned | 2021-07-13T17:44:46Z | -
dc.date.available | 2021-07-13T17:44:46Z | -
dc.date.issued | 2021-06-24 | -
dc.identifier.uri | http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/18000 | -
dc.description.abstract | Sign Language constitutes the primary means of communication for the deaf and hard-of-hearing. Sign Language Recognition is a complex task that lies at the intersection of computer vision and language modeling. Manual and non-manual cues, such as facial expression, hand shape, and body orientation, occur in parallel and play a meaningful role in the articulation of a sign. In this thesis, we study this problem extensively by leveraging recent deep learning approaches. In the first section, we focus on 3D hand and body pose estimation and report quantitative and qualitative results. In the second section, we explore the task of continuous sign language recognition and show how expressive 3D skeletons and parameterizations of the human body can be exploited in conjunction with graph convolutions to solve our task effectively. We also compare our results with successful architectures such as transformers and LSTM attention encoder-decoders, and report competitive performance on the Phoenix 2014-T dataset. | en_US
dc.language | en | en_US
dc.subject | Sign Language | en_US
dc.subject | Graphs | en_US
dc.subject | 3D Pose | en_US
dc.subject | ST-GCN | en_US
dc.subject | 3D Mesh | en_US
dc.subject | CNN | en_US
dc.title | Deep Learning Based Sign Language Recognition | en_US
dc.description.pages | 126 | en_US
dc.contributor.supervisor | Μαραγκός Πέτρος | en_US
dc.department | Division of Signals, Control and Robotics | en_US
Appears in Collections: Diploma Theses (Διπλωματικές Εργασίες)

Files in This Item:
File | Description | Size | Format
Thesis_parelli.pdf | - | 9.32 MB | Adobe PDF

Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.