Recent technological advancements by Google engineers could mean that even those of us who haven't spent the considerable time and effort needed to master sign language may soon be able to converse in it in real time. The number of signs in sign language is considerably smaller than the number of words in most spoken languages, which might suggest that translating it would be easier for Google Translate to manage than, say, French to Urdu or English to Mandarin. That assumption would be very mistaken.
In order to translate sign language in real time, technology has to be able to track hand movements, and that has proven a particularly tricky challenge, not least because fingers can be obscured and gestures are often approximate rather than made with the kind of consistency a computer needs to interpret them accurately. But a new technique that focuses on the palms of the hands has proven a breakthrough in getting around those obstacles.
A 3D skeleton map of a moving hand is created by first filming the hand and then mapping out 21 coordinates: points at the base of the palm, where each finger meets the palm, and at the fingertips and joints. These points were annotated on pictures of 30,000 hands, allowing an AI algorithm to quickly identify them in footage of a new hand. The map also uses several specific hand signs as a foundation, including a 'thumbs up', a fist, 'OK', 'rock' and 'Spiderman'. The mapped points were identified in these foundation signs, teaching the algorithm how they are positioned when the hand is in each of these core poses.
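To give a flavour of the approach, the sketch below shows how a simple sign might be recognised from the positions of those mapped points. It is an illustrative toy, not Google's actual code: it assumes 21 (x, y) landmarks in image space (where y increases downward), with indexing conventions and thresholds chosen purely for the example.

```python
# Illustrative sketch (not Google's actual algorithm): recognising a
# 'thumbs up' from 21 (x, y) hand landmarks of the kind described above.
# Assumed indexing: 0 = wrist, 4 = thumb tip, 8/12/16/20 = index/middle/
# ring/little fingertips, 5/9/13/17 = knuckles where those fingers meet
# the palm. Coordinates are normalised to [0, 1], y increasing downward.

def classify_thumbs_up(landmarks):
    """Return True if the landmarks look like a 'thumbs up':
    thumb tip clearly above the wrist, other fingertips curled in
    (i.e. close to their own knuckles)."""
    wrist = landmarks[0]
    thumb_tip = landmarks[4]
    fingertips = [landmarks[i] for i in (8, 12, 16, 20)]
    knuckles = [landmarks[i] for i in (5, 9, 13, 17)]

    # Thumb tip well above the wrist (smaller y means higher in the image).
    thumb_up = thumb_tip[1] < wrist[1] - 0.2

    # Each remaining fingertip sits near its knuckle, i.e. the finger is curled.
    fingers_curled = all(
        abs(tip[0] - kn[0]) < 0.1 and abs(tip[1] - kn[1]) < 0.1
        for tip, kn in zip(fingertips, knuckles)
    )
    return thumb_up and fingers_curled
```

A real system would compare the landmark layout against each of the foundation signs in this way (or, more likely, feed the landmarks to a learned classifier), rather than hand-coding one rule per gesture.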
Using that training, the algorithm developed by the Google engineers can now follow and interpret even very quick, successive hand movements without an optimal angle (seen from the side, for example) or ideal lighting. While it is conceivable that the new technology could one day be incorporated into Google Translate, for now the research has been released as open-source code. That allows other developers to augment, refine or adapt the work the Google team has already carried out, and potentially to use it in different applications.
Google is keen to point out that the research is still at an early stage and that other, more developed hand-tracking solutions exist. But it believes the relative simplicity of the technology it has employed, which allows it to run on a mobile phone, makes what its engineers have produced a very strong base. It intends to continue working on the project, adding to the number of signs the software can accurately interpret.
Another limitation of the technology is that sign language relies on facial expressions and body movements for grammar, which the current system does not incorporate, so it cannot yet provide a complete translation. These additional elements could conceivably be added in the future. Another obvious potential application of the technology is the control of devices and appliances through hand gestures.
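Gesture-based device control of the kind mentioned above could be as simple as mapping recognised sign labels to commands. The sketch below is hypothetical; the gesture names (matching the foundation signs) and the commands are invented for illustration, with the classifier's output assumed as input.

```python
# Hypothetical sketch of gesture-driven device control: the output of a
# hand-sign classifier is mapped to appliance commands. All names here
# are invented for illustration.

GESTURE_COMMANDS = {
    "thumbs_up": "volume_up",
    "fist": "pause",
    "ok": "confirm",
    "rock": "next_track",
}

def dispatch(gesture):
    """Map a recognised gesture label to a device command.
    Returns None for gestures with no assigned action."""
    return GESTURE_COMMANDS.get(gesture)
```

In practice such a layer would also need debouncing (ignoring a gesture held across many video frames) so that one sign triggers one command.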
Jesal Vishnuram welcomed the development, while pointing out that the technology would need to be combined with additional functionalities to become a genuine real-time sign language translator. He said:
“From a deaf person’s perspective, it would be more beneficial for software to be developed which could provide automated translations of text or audio into British Sign Language to help everyday conversations and reduce isolation in the hearing world.”
Google is not the only company working on translating sign language. Microsoft has created prototype software that runs on desktop computers and translates sign language into another language, with subtitles. It has also developed a technology that can translate words into sign language, applying it to university lectures. A Brazilian social entrepreneur has developed Hand Talk, an app that translates speech into Brazilian sign language.