Nowadays, getting your Alexa to play music, tell you the weather, or order Tide Pods is only a few voice commands away.

But for a global population roughly 25% larger than that of the US, technological innovation has been much slower to arrive.

The deaf and hard-of-hearing community has seen a comparatively slow rollout of innovation relative to its hearing peers. Luckily for this population, change seems to be on the way.

The problem: a lack of data and interfaces

Building computing solutions for deaf people is no easy task.

For starters, it is still a non-trivial task to get computers to understand sign language. Like spoken and written languages, sign languages have their own grammars, idioms, and dialects, and the subtleties of everyday use can fly under the radar without robust sources of data.

That leads to the second problem: data is harder to come by. A spoken-language corpus can consist of a billion words from as many as 1,000 distinct speakers; an equivalent dataset in a sign language might have fewer than 100,000 signs from only ten people. This is in part because sign languages depend on facial expressions, which means recordings must capture signers' faces, raising privacy concerns that limit how much data can be collected and shared.

And currently, the best solutions for a deaf person to interface with a computer involve motion-tracking gloves and multiple cameras. Although progress is being made, this setup remains a burden.
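To give a rough sense of what gloveless, single-camera tracking involves, here is a minimal sketch using Google's open-source MediaPipe library (my choice of library for illustration; none of the companies mentioned here have said they use it). It pulls 21 hand landmarks per hand from an ordinary webcam feed. Turning that stream of coordinates into recognized signs is the much harder problem that the data shortage described above makes difficult.

```python
# Illustrative sketch, NOT any company's actual pipeline: single-camera
# hand tracking with MediaPipe. Extracting landmarks is only step one;
# a real sign-language recognizer would still need a model trained on
# a large signing corpus.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(
    static_image_mode=False,       # treat input as a video stream
    max_num_hands=2,
    min_detection_confidence=0.5,
)

cap = cv2.VideoCapture(0)          # the single webcam
for _ in range(300):               # a few seconds of frames; a real app would loop until exit
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            # 21 (x, y, z) points per hand: the raw signal a
            # recognition model would consume downstream
            print(len(hand.landmark), "landmarks, first:",
                  (hand.landmark[0].x, hand.landmark[0].y, hand.landmark[0].z))

cap.release()
hands.close()
```

Even this toy example hints at why the problem is hard: the output is just geometry, with no notion of grammar, idiom, or the facial expressions that carry meaning in sign languages.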

Who is working on this?

One company dedicated to creating interfaces for the deaf is SignAll, a Hungarian firm developing software that can recognize American Sign Language (ASL). Their current solution still relies on the gloves mentioned earlier and on multiple cameras, and it runs slower than natural conversation, but the company hopes to bring the technology to real-time, gloveless, single-camera use cases.

Additionally, both SignAll and Microsoft are building their own corpora of sign-language data to supply the reliable information these solutions require. SignAll's CEO, Dr. Zsolt Robotka, emphasizes prioritizing the translation of sign language into text and speech, a step toward the company's larger goal: widespread two-way communication between humans and computers in sign language.