User-independent recognition of Arabic sign language for facilitating communication with the deaf community

This paper presents a solution for user-independent recognition of isolated Arabic Sign Language gestures. The video-based gestures are preprocessed to segment out the hands of the signer based on color segmentation of the colored gloves. The prediction errors of consecutive segmented images are the...

Full description

Saved in:
Bibliographic Details
Main Author: Shanableh, Tamer (author)
Other Authors: Assaleh, Khaled (author)
Format: article
Published: 2011
Subjects:
Online Access: http://hdl.handle.net/11073/8830
_version_ 1864513437997989888
author Shanableh, Tamer
author2 Assaleh, Khaled
author2_role author
author_facet Shanableh, Tamer
Assaleh, Khaled
author_role author
dc.creator.none.fl_str_mv Shanableh, Tamer
Assaleh, Khaled
dc.date.none.fl_str_mv 2011
2017-05-04T05:48:35Z
2017-05-04T05:48:35Z
dc.format.none.fl_str_mv application/pdf
dc.identifier.none.fl_str_mv Shanableh, T. (2011). User-independent recognition of Arabic sign language for facilitating communication with the deaf community. Digital Signal Processing, 21(4), 535-542. doi:10.1016/j.dsp.2011.01.015
1095-4333
http://hdl.handle.net/11073/8830
10.1016/j.dsp.2011.01.015
dc.language.none.fl_str_mv en_US
dc.publisher.none.fl_str_mv Elsevier
dc.relation.none.fl_str_mv http://doi.org/10.1016/j.dsp.2011.01.015
dc.subject.none.fl_str_mv Digital video/image processing
Sign language recognition
Motion analysis
Feature extraction
Pattern classification
dc.title.none.fl_str_mv User-independent recognition of Arabic sign language for facilitating communication with the deaf community
dc.type.none.fl_str_mv Postprint
Peer-Reviewed
info:eu-repo/semantics/publishedVersion
info:eu-repo/semantics/article
description This paper presents a solution for user-independent recognition of isolated Arabic Sign Language gestures. The video-based gestures are preprocessed to segment out the hands of the signer based on color segmentation of the colored gloves. The prediction errors of consecutive segmented images are then accumulated into two images according to the directionality of the motion. Different accumulation weights are employed to further help preserve the directionality of the projected motion. Normally, a gesture is represented by hand movements; however, additional user-dependent head and body movements might be present. In the user-independent mode we seek to filter out such user-dependent information. This is realized by encapsulating the movements of the segmented hands in a bounding box. The encapsulated images of the projected motion are then transformed into the frequency domain using the Discrete Cosine Transform (DCT). Feature vectors are formed by applying zonal coding to the DCT coefficients with varying cutoff values. Classification techniques such as k-nearest neighbors (KNN) and polynomial classifiers are used to assess the validity of the proposed user-independent feature extraction schemes. An average classification rate of 87% is reported.
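The abstract's feature-extraction step (2-D DCT followed by zonal coding with a cutoff) can be sketched as below. This is a minimal illustration, not the authors' implementation: the function name, the triangular zone shape, the image size, and the cutoff value of 10 are all assumptions for demonstration.

```python
import numpy as np
from scipy.fft import dctn


def zonal_dct_features(image, cutoff):
    """Transform an image to the frequency domain with a 2-D DCT and keep
    only the low-frequency coefficients in a triangular zone near the
    top-left corner (row index + column index < cutoff), flattened into
    a feature vector. Name and zone shape are illustrative assumptions."""
    coeffs = dctn(image, norm="ortho")      # 2-D DCT-II of the whole image
    rows, cols = np.indices(coeffs.shape)
    zone = (rows + cols) < cutoff           # triangular low-frequency zone
    return coeffs[zone]                     # length cutoff*(cutoff+1)/2 coefficients


# Example: a synthetic 64x64 accumulated-motion image, cutoff of 10
motion_image = np.random.default_rng(0).random((64, 64))
features = zonal_dct_features(motion_image, cutoff=10)
print(features.shape)  # (55,) -- 10*(10+1)/2 retained coefficients
```

Varying the cutoff, as the paper does, trades feature-vector length against how much high-frequency detail the classifier sees; the resulting vectors would then be fed to a classifier such as KNN.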
format article
id aus_e95e4999cfb4903800a5f23216ed674e
identifier_str_mv Shanableh, T. (2011). User-independent recognition of Arabic sign language for facilitating communication with the deaf community. Digital Signal Processing, 21(4), 535-542. doi:10.1016/j.dsp.2011.01.015
1095-4333
10.1016/j.dsp.2011.01.015
language_invalid_str_mv en_US
network_acronym_str aus
network_name_str aus
oai_identifier_str oai:repository.aus.edu:11073/8830
publishDate 2011
publisher.none.fl_str_mv Elsevier
spellingShingle User-independent recognition of Arabic sign language for facilitating communication with the deaf community
Shanableh, Tamer
Digital video/image processing
Sign language recognition
Motion analysis
Feature extraction
Pattern classification
status_str publishedVersion
title User-independent recognition of Arabic sign language for facilitating communication with the deaf community
title_full User-independent recognition of Arabic sign language for facilitating communication with the deaf community
title_fullStr User-independent recognition of Arabic sign language for facilitating communication with the deaf community
title_full_unstemmed User-independent recognition of Arabic sign language for facilitating communication with the deaf community
title_short User-independent recognition of Arabic sign language for facilitating communication with the deaf community
title_sort User-independent recognition of Arabic sign language for facilitating communication with the deaf community
topic Digital video/image processing
Sign language recognition
Motion analysis
Feature extraction
Pattern classification
url http://hdl.handle.net/11073/8830