Source:
http://thenextweb.com/microsoft/2013/10/30/microsoft-research-uses-kinect-translate-spoken-sign-languages-real-time/
YouTube video:
http://www.youtube.com/watch?v=HnkQyUo3134
“The street finds its own uses for things,” William Gibson wrote three decades ago. The acclaimed science-fiction author was musing on humanity’s penchant for taking new technology and appropriating (or misappropriating) it toward uses its creators never foresaw. Few products have demonstrated Gibson’s observation with such revolutionary zeal as Kinect. Originally developed by Microsoft as a motion-sensing user interface for its Xbox 360 gaming console, Kinect had barely hit the streets in late 2010 before seasoned engineers and bedroom hackers alike began exploring its more practical applications. It has since been adapted to serve as everything from an intuitive interactive art tool to a medical image manipulation device that lets surgeons examine MRIs and X-rays in the operating room, and researchers are exploring its potential to enhance the lives of children with autism and learning disabilities.
Now there is a new use for Kinect, this time from Microsoft Research itself: translating between spoken language and sign language, in real time.
The Kinect Sign Language Translator, a project spearheaded by Microsoft researchers in China, pairs the Kinect sensor with dedicated pattern-recognition software to analyze hand and body gestures as a person signs in American Sign Language, Chinese Sign Language, or any other sign language in its database. The system automatically translates the movements into audible words, on-screen text, or the signing of a digital avatar that renders the meaning in a selected sign language. Not only can the system react to audible speech (one of Kinect’s original capabilities) and render it as sign language, it can also distinguish between disparate sign language systems and act as a proxy between them.
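The article doesn’t describe the recognition method in detail. As a purely illustrative sketch (none of this is Microsoft’s code; every name here is hypothetical), one simple way to map a recorded joint trajectory to a word is to time-normalize it and match it against stored sign templates by nearest-neighbor distance:

```python
import math

# A "gesture" is a time series of skeleton frames; each frame is a flat
# tuple of joint coordinates (here a single joint's x, y, z for brevity).

def resample(gesture, n=30):
    """Pick n evenly spaced frames so gestures of different speeds align."""
    if len(gesture) == 1:
        return gesture * n
    step = (len(gesture) - 1) / (n - 1)
    return [gesture[round(i * step)] for i in range(n)]

def distance(a, b):
    """Summed per-frame Euclidean distance between two gestures."""
    return sum(math.dist(fa, fb) for fa, fb in zip(resample(a), resample(b)))

class SignClassifier:
    """Nearest-neighbor matcher over stored sign templates."""
    def __init__(self, templates):
        self.templates = templates  # word -> template gesture

    def classify(self, gesture):
        return min(self.templates, key=lambda w: distance(gesture, self.templates[w]))

# Toy demo: a side-to-side wave vs. a downward sweep of one joint.
classifier = SignClassifier({
    "hello": [(0.0, 0.0, 1.0), (0.2, 0.0, 1.0), (0.0, 0.0, 1.0), (0.2, 0.0, 1.0)],
    "thanks": [(0.0, 0.5, 1.0), (0.0, 0.3, 1.0), (0.0, 0.1, 1.0)],
})
observed = [(0.0, 0.45, 1.0), (0.0, 0.25, 1.0), (0.0, 0.05, 1.0)]
print(classifier.classify(observed))  # -> "thanks"
```

A real system would track a full skeleton, segment continuous signing into individual gestures, and rely on far more robust statistical models, but the pipeline shape (capture, normalize, match, emit a word) would be similar.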
Guobin Wu, the program manager of the Kinect Sign Language Translator project, notes that more than 360 million people around the world are hard of hearing. By bringing together multiple disciplines, including machine learning, linguistics, special education, non-verbal communication, motion sensing, and database querying, what began as a video-game accessory could in the years ahead become a “game changer” for many who live with hearing loss. Wu and his team acknowledge that the system is still very much a prototype: for each word the system “knows”, it takes five people signing it to teach the software the requisite recognition patterns, and of the roughly 4,000 words used in Chinese Sign Language, the system currently recognizes only 300.
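That five-signers-per-word figure suggests an enrollment step in which several people’s recordings are combined into a single reference per word. The sketch below (again hypothetical, continuing the toy representation above) averages time-aligned recordings into one template; actual training would involve far more sophisticated modeling:

```python
def build_template(recordings):
    """Average aligned recordings from several signers into one template.
    Assumes the recordings have already been time-normalized to the same
    frame count (e.g. with the resample() helper in the earlier sketch)."""
    frames, dims = len(recordings[0]), len(recordings[0][0])
    return [
        tuple(sum(rec[t][d] for rec in recordings) / len(recordings)
              for d in range(dims))
        for t in range(frames)
    ]

# Five toy "signers" performing the same one-joint sign, slightly offset:
recordings = [
    [(0.0, 0.5 + e, 1.0), (0.0, 0.3 + e, 1.0), (0.0, 0.1 + e, 1.0)]
    for e in (-0.02, -0.01, 0.0, 0.01, 0.02)
]
print(build_template(recordings))
# -> roughly [(0.0, 0.5, 1.0), (0.0, 0.3, 1.0), (0.0, 0.1, 1.0)]
```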
However, after only six months of research and design, the system already holds considerable promise. Microsoft Research continues to refine it, and one day it could fulfill the dream that team member Dandan Yin, herself deaf, recalled when she was first recruited for the project: “When I was a child, my dream was to create a machine to help people who can’t hear.” Thanks to Kinect, Yin’s dream is a grand step closer to becoming reality.