Though early Kinect patents showed its potential for sign language translation, Microsoft quashed any notion early on that this would become an official feature. That hasn't stopped Redmond from continuing to develop the idea, however. Microsoft Research Asia recently showed off software that lets the Kinect read nearly every American Sign Language gesture via hand tracking, even at conversational speeds. In addition to converting signs to text or speech, the software can also let a hearing person input text and "sign" it using an on-screen avatar. All of this is still confined to the lab for now, but the researchers hope it will one day open up new lines of communication between the hearing and the deaf -- a patent development we could actually get behind. See it in action in the video after the break.

http://www.engadget.com/2013/07/18/m...nguage-reader/