The Voice Interface in a Deaf World

Over the holidays, many people will add voice-controlled assistants to their living rooms. Some will add an assistant to every room of their house, including their wall clock and microwave. I’m a huge fan of these devices and their potential for accessibility. I’m less concerned with how voice search is changing eCommerce (even though my day job depends on it), and I don’t really care about ads triggering people’s devices. The realization I had recently was that I can’t actually use these devices, and that this will shape how I use technology in the coming years.

The expansion of the voice-driven interface isn’t surprising. Well before Siri was announced, numerous companies had been trying to add voice controls to PCs, and the PC interface has been evolving rapidly all along: text-based to GUI to touch, with voice the natural next step. The companies tout the various ways these assistants help in your daily life. I doubt the usefulness of those scenarios, like barking orders for groceries or changing your Spotify playlist. And all of this assumes I could actually use them.

Left Behind

I’ve talked at length on my podcast about living with and coming to terms with my hearing loss. Since I was 10 it’s been known I would be traveling this path, hoping my hearing would hold out long enough to experience as much as possible before everything fades away. Now that I’m in my mid-30s, I’ve embraced my future and am happy I can still hear what I can.

But my hearing loss may also be excluding me from the next generation of tech, which I hold so dear. Accessibility issues are nothing new in tech. Voice-controlled solutions could help millions of people with motor disabilities or blindness interact with their phone, PC, or assistant, and that is why I’m so excited about this technology. Voice-activated devices that connect to lights, doorbells, even those microwaves are crucial to making the world a better place for people often left behind by technology’s advancements.

The standard Echo or Google Home is a screen-free device that responds only with voice. Being virtually deaf, a voice-controlled interface offers me no feedback to know the device has understood my order. Smart displays like the Google Home Hub or the Amazon Echo Show, which actually have a screen attached, are an opportunity to aid deaf users. As IoT expands and these devices control more things inside your home, there absolutely have to be analog overrides for people with disabilities like mine, or some way to interact with the devices without audio cues: a screen attached to the device, or an app that shows the commands and their confirmation.

A confirmation app could be simple, displaying commands much the way Siri does on the phone: what the device heard and the action it’s taking. This functionality already exists in the Alexa app on Fire tablets and in the Google app on a mobile device. But in a household of mixed hearing abilities, where an Echo or Google Home is shared, a push notification to a phone saying the device has been triggered, and what it heard, would be even more useful.
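
To make that concrete, here is a minimal sketch of the confirmation flow I have in mind, in Python. Everything here is hypothetical: the push endpoint is a stand-in for whatever notification relay a phone app would use, and the on_command hook assumes the assistant exposed its recognized commands to developers, which today’s devices largely do not.

```python
import json
import urllib.request

# Stand-in push endpoint; a real app would use a phone notification
# service. This URL is illustrative, not a real API.
PUSH_URL = "https://push.example.com/notify"

def notify_phone(heard: str, action: str) -> None:
    """Push what the assistant heard, and what it will do, to a phone."""
    payload = json.dumps({
        "title": "Assistant triggered",
        "body": f"Heard: {heard!r} / Action: {action}",
    }).encode("utf-8")
    req = urllib.request.Request(
        PUSH_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Hypothetical hook: if the assistant reported each recognized command,
# confirming it visually would become a one-liner.
def on_command(heard: str, action: str) -> None:
    notify_phone(heard, action)
```

The point is how little plumbing this would take: the device already knows what it heard and what it decided to do; it just never shows me.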

A Subtitle Life

My realization that voice-controlled interfaces are passing me by was just the first; many more tech “advancements” will do the same. As a podcast creator and aspiring YouTuber, I would love to offer quality subtitles to my viewers. Subtitles matter so much in my own life that I want to make sure my content is accessible to all.

Over the weekend I attempted to create subtitles for a video I had recently made. After two hours of watching and editing in the app Aegisub, I had subtitled five minutes of a 30-minute video. Aegisub itself works fantastically, but creating subtitles by hand is immensely laborious.

Many YouTubers and online content creators do not create subtitles for their work simply because there is no easy way to do so, yet without them many people cannot consume the content at all. The problem is even more apparent in our world of live streaming. I would love to watch more Twitch streams, but the creators’ commentary is mostly unintelligible to me. Periscope streams of events are useless to me, and this is not an easy problem to solve.

The Future Is Bright

The future of A.I. and machine learning could produce real-world subtitles that are easy to create, cheap, and accurate. Current technologies are pitiful, to say the least. When I priced having a person create subs for my content, it was cost-prohibitive: anywhere from $50 to $150 per video.
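
The mechanical half of the job is already easy to automate; the hard part is accurate, timestamped speech-to-text. Once a model hands back timestamped segments, turning them into a standard .srt file takes only a few lines of Python. The segments below are invented for illustration:

```python
# Sketch: turn timestamped transcript segments into an SRT subtitle file.
# The segments are made up; in practice they would come from a
# speech-to-text model as (start_seconds, end_seconds, text).
segments = [
    (0.0, 2.5, "Hey everyone, welcome back to the channel."),
    (2.5, 6.0, "Today I want to talk about accessible subtitles."),
]

def srt_timestamp(seconds: float) -> str:
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    total_ms = round(seconds * 1000)
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02}:{minutes:02}:{secs:02},{ms:03}"

with open("video.srt", "w", encoding="utf-8") as f:
    for i, (start, end, text) in enumerate(segments, start=1):
        f.write(f"{i}\n")
        f.write(f"{srt_timestamp(start)} --> {srt_timestamp(end)}\n")
        f.write(f"{text}\n\n")
```

If the recognition were accurate enough, my two hours in Aegisub would shrink to a quick review pass.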

Recently, subtitles were announced for Google Slides, and official subtitle support was announced for Microsoft PowerPoint and Skype. Skype is getting a lot of headlines since it will also do near real-time translation across multiple languages. Not to undervalue that advancement; breaking the language barrier is huge. But the everyday feature I’d love to see used universally is subtitles in PowerPoint and Slides.

Most people are terrible public speakers. They talk with their backs to the audience, pace around the room, and raise and lower their voices based on the feedback they get from the front row. I would use PowerPoint subtitles every day at work in the countless meetings I attend. I’d finally know what was being talked about, rather than sending an email afterward to ask for clarification because I couldn’t hear 70% of the presentation.

As A.I. and machine learning get better and hit scale across so many platforms, the future is full of life-changing accessibility advancements in tech.
