Pakistan sign language to Urdu translator using Kinect

Saad Ahmed, Hasnain Shafiq, Yamna Raheel, Noor Chishti, Syed Muhammad Asad

Abstract


The lack of a standardized sign language and the inability to communicate with the hearing community through sign language are the two major issues confronting Pakistan's deaf community. In this research, we propose an approach that helps address the latter: using the proposed framework, deaf users can communicate with hearing people. The purpose of this work is to reduce the communication barriers faced by hearing-impaired people in Pakistan. To accomplish this, a Kinect-based Pakistan sign language (PSL) to Urdu translator is developed. The system's dynamic sign language segment works in three phases: acquiring key points from the dataset, training a long short-term memory (LSTM) model, and making real-time predictions on key-point sequences through OpenCV integrated with the Kinect device. The system's static sign language segment likewise works in three phases: acquiring an image-based dataset, training an object detection model from the Model Garden, and making real-time predictions through OpenCV integrated with the Kinect device. The system also allows a hearing user to speak Urdu into the Kinect microphone. The proposed sign language translator detects the PSL signs performed in front of the Kinect device and produces their translations in Urdu.
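As a rough illustration of the dynamic segment's first phase, per-frame skeletal key points captured from the Kinect can be flattened and stacked into fixed-length sequences suitable as LSTM input. The sketch below is a minimal assumption-laden example, not the authors' code: the sequence length of 30 frames and the helper names are hypothetical, while the 25 tracked joints correspond to the Kinect v2 skeleton.

```python
import numpy as np

SEQUENCE_LENGTH = 30  # frames per sign (assumed window size)
NUM_JOINTS = 25       # Kinect v2 tracks 25 skeletal joints
COORDS = 3            # x, y, z per joint

def frame_to_keypoints(frame_joints):
    """Flatten one frame's (NUM_JOINTS, COORDS) joint array into a feature vector."""
    return np.asarray(frame_joints, dtype=np.float32).reshape(-1)

def frames_to_sequence(frames):
    """Stack per-frame keypoint vectors into one (SEQUENCE_LENGTH, features) sequence."""
    seq = np.stack([frame_to_keypoints(f) for f in frames])
    assert seq.shape == (SEQUENCE_LENGTH, NUM_JOINTS * COORDS)
    return seq

# Simulated capture: 30 frames of 25 joints with (x, y, z) coordinates each.
frames = [np.random.rand(NUM_JOINTS, COORDS) for _ in range(SEQUENCE_LENGTH)]
sequence = frames_to_sequence(frames)
print(sequence.shape)  # (30, 75)
```

Sequences of this shape, one per recorded sign, would then form the training batches for the LSTM model described above.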

Keywords


Kinect; Long short-term memory; Model garden; Object detection; OpenCV; Sign language translator


DOI: https://doi.org/10.11591/csit.v3i3.p186-193


Computer Science and Information Technologies
ISSN: 2722-323X, e-ISSN: 2722-3221

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.