Researchers Use Kinect as a Digital Assistant for Sign-Language Users
Researchers from Microsoft have collaborated with colleagues from the Institute of Computing Technology at the Chinese Academy of Sciences (CAS) to explore how Kinect's body-tracking abilities can be applied to the problem of sign-language recognition.
Sign language is the primary language for many deaf and hard-of-hearing people. But it is not currently possible for these people to interact with computers using their native language.
Because not everyone understands sign language and human sign-language interpreters are not always available, researchers in recent years have devoted considerable effort to the challenges of sign-language recognition.
Then along came Kinect. Results have been encouraging: the sensor enables people whose primary language is sign language to interact with their computers more naturally, much as speech recognition does for spoken language.
"From our point of view," says CAS Professor Xilin Chen, "the most significant contribution is that the project demonstrates the possibility of sign-language recognition with readily available, low-cost 3-D and 2-D sensors."
Kinect, with its ability to provide depth and color data simultaneously, makes it easier to track hand and body movements more accurately and more quickly.
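To make the role of the depth data concrete, here is a minimal sketch, assuming a depth frame already captured as a NumPy array and a tracked hand-joint position. Keeping only pixels within a depth band around the joint isolates the hand from the body and background, something that is much harder with color alone. The function name, window size, and tolerance are illustrative assumptions, not part of the Kinect SDK.

```python
import numpy as np

def hand_region_mask(depth_frame, hand_px, hand_depth_mm, window=60, tolerance_mm=80):
    """Flag pixels likely belonging to the hand (illustrative sketch).

    depth_frame   : 2-D array of per-pixel depths in millimetres (e.g. 480x640).
    hand_px       : (row, col) of the tracked hand joint in the depth image.
    hand_depth_mm : depth reported for that joint.
    window        : half-size of the search box around the joint, in pixels.
    tolerance_mm  : keep pixels within this depth band around the hand.
    """
    r, c = hand_px
    r0, r1 = max(r - window, 0), min(r + window, depth_frame.shape[0])
    c0, c1 = max(c - window, 0), min(c + window, depth_frame.shape[1])
    patch = depth_frame[r0:r1, c0:c1]
    # Depth thresholding separates the hand from the body and background,
    # which is hard to do reliably with colour information alone.
    return np.abs(patch - hand_depth_mm) < tolerance_mm

# Toy usage with synthetic data (a real system would read frames from the sensor).
depth = np.full((480, 640), 2000, dtype=np.int32)   # background ~2 m away
depth[200:260, 300:350] = 900                        # hand ~0.9 m away
mask = hand_region_mask(depth, hand_px=(230, 325), hand_depth_mm=900)
print(mask.sum(), "pixels flagged as hand")
```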
In this project, hand tracking feeds a process of 3-D motion-trajectory alignment and matching for individual words in sign language. The trajectories are generated via hand tracking by the Kinect for Windows software and then normalized, and matching scores are computed to identify the most likely candidates when a signed word is analyzed.
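As a rough illustration of that pipeline, the sketch below resamples a 3-D hand trajectory to a fixed length, normalizes it for position and scale, and ranks vocabulary templates by a simple matching score. The toy vocabulary, the point-to-point distance, and the parameter choices are assumptions for illustration, not the project's actual algorithm.

```python
import numpy as np

def normalize_trajectory(points, n_samples=32):
    """Resample a 3-D hand trajectory to a fixed length and normalize it for
    position and scale, so signs can be compared regardless of where the
    signer stands or how large their movements are."""
    points = np.asarray(points, dtype=float)          # shape (T, 3)
    t_old = np.linspace(0.0, 1.0, len(points))
    t_new = np.linspace(0.0, 1.0, n_samples)
    resampled = np.column_stack(
        [np.interp(t_new, t_old, points[:, d]) for d in range(3)]
    )
    resampled -= resampled.mean(axis=0)               # remove position offset
    scale = np.linalg.norm(resampled, axis=1).max()
    return resampled / scale if scale > 0 else resampled

def matching_score(query, template):
    """Lower score = better match; here just the mean point-to-point distance."""
    return float(np.linalg.norm(query - template, axis=1).mean())

def recognize(query_points, vocabulary):
    """Score a signed word against every stored template and rank candidates."""
    q = normalize_trajectory(query_points)
    scores = {word: matching_score(q, normalize_trajectory(t))
              for word, t in vocabulary.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])

# Toy vocabulary of two made-up "signs" as 3-D point lists.
vocab = {
    "hello":  [(0, 0, 1), (1, 1, 1), (2, 0, 1)],
    "thanks": [(0, 0, 1), (0, 1, 2), (0, 2, 1)],
}
observed = [(0.1, 0.0, 1.0), (1.1, 0.9, 1.0), (2.0, 0.1, 1.1)]
print(recognize(observed, vocab))   # best candidate first
```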
The algorithm for this 3-D trajectory matching, in turn, has enabled the construction of a system for sign-language recognition and translation, consisting of two modes. The first, Translation Mode, translates sign language into text or speech. The technology currently supports American Sign Language but has potential for all varieties of sign language.
The second, Communications Mode, enables communication between a hearing person and a deaf or hard-of-hearing person through an avatar. Guided by text input from a keyboard, the avatar displays the corresponding sign-language sentence. The deaf or hard-of-hearing person responds using sign language, and the system converts that response into text.
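A hedged sketch of how those two modes might be wired together is shown below. The SignLanguageSystem class, its recognizer and avatar components, and their method names are illustrative stand-ins rather than the project's actual design; the recognize function from the earlier trajectory-matching sketch could play the role of the recognizer.

```python
from dataclasses import dataclass

@dataclass
class SignLanguageSystem:
    """Illustrative wiring of the two modes described above; the recognizer
    and avatar here are stand-ins, not the project's actual components."""
    recognizer: object   # something with .recognize(trajectory) -> ranked (word, score) pairs
    avatar: object       # something with .play(words) to animate the signing avatar

    def translate(self, trajectory):
        """Translation Mode: signed input -> best-matching word as text."""
        candidates = self.recognizer.recognize(trajectory)
        return candidates[0][0] if candidates else None

    def communicate(self, typed_text=None, trajectory=None):
        """Communications Mode: typed text is signed by the avatar;
        signed input is converted back to text for the hearing user."""
        if typed_text is not None:
            self.avatar.play(typed_text.split())
        if trajectory is not None:
            return self.translate(trajectory)
```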
"One unique contribution of this project is that it is a joint effort between software researchers and the deaf and hard of hearing," Zhou says. "A group of teachers and students from Beijing Union University joined this project, and this enabled our algorithms to be conducted on real-world data."
And while the research is valuable in the realm of visual information processing, it is also intended to provide practical assistance to people who communicate primarily in sign language.
"We believe that IT should be used to improve daily life for all persons," says Guobin Wu, a research program manager from Microsoft Research Asia. "While it is still a research project, we ultimately hope this work can provide a daily interaction tool to bridge the gap between the hearing and the deaf and hard of hearing in the near future."