I absolutely love using voice control in my car. Although I may be what marketing folks describe as an “early adopter” when it comes to voice control technology, I love it because it works, or at least it does most of the time. The ability to pick a song or podcast, dictate texts, and add a stop along my route with my voice vastly improves my driving experience, and it’s safer to boot. However, I find myself thinking: why can’t I do more? Why can’t I open or close the windows, adjust my seats, or open the trunk of my car when my arms are full of groceries? Why doesn’t my car recognize my voice like my phone recognizes my face? Why does my voice control only extend to areas with cell service? The answers lie in the past, present, and future of voice control in vehicles.
There was a time when an intelligent car you could talk to was only seen in movies or on television (remember Knight Rider?). Speech recognition didn’t make its appearance in vehicles until 2005, via Honda’s Acura brand. Drivers could activate speech recognition with the push of a button to control the temperature, make calls, use the DVD entertainment system, navigate, and access AcuraLink, a service that provided weather information and traffic reports. However, this system, developed with IBM, used a phrase-spotting recognizer rather than a natural language recognizer, meaning users were limited to prescribed commands. To make matters worse, it didn’t work well with noise or accents, and with limited mobile internet access, its preloaded databases would quickly become outdated.
Ford and Lexus went on to debut what is best described as “say what you see” voice control for navigating complex menus, which many users found more tedious than turning knobs. According to the National Highway Traffic Safety Administration, by 2012 most vehicles had a voice control system installed as original equipment or enabled voice interactions for drivers by connecting the vehicle with portable devices.
These days, some car manufacturers have slowed the development of on-board voice control systems in favor of enabling mobile voice assistants via CarPlay and Android Auto. Integrating voice assistants like Alexa (remember this Buick commercial?) is another way that car manufacturers are leveraging the speech recognition of today’s voice assistants. However limitless these assistants may seem, they still stop short of replacing basic car controls. Sure, I can turn on the lights in my home as I approach with a voice command, but can I adjust the seats or close all the windows in my car as I park?
The heavily publicized Fiat 500 “Hey Google” integration boasts features like checking fuel levels remotely, locking or unlocking the doors from your office, and checking your Fiat’s location. What the nearly 20-minute promotional video failed to communicate was how the Google integration improved the driving experience. Of course, there was also no mention of whether all these “features” would come at the price of sharing data with Google.
One exception to this pattern is Tesla, which boasts a “natural language processor that helps interpret your request and translate it to an action for your car,” allowing drivers to control virtually every option contained within the touchscreen, from launching apps and controlling the temperature to locking the doors and even running the windshield wipers. Unfortunately, if you happen to be a loyal Apple Music user or a fan of the CarPlay interface, you are out of luck: the Tesla infotainment/command center supports only Tesla’s proprietary assistant, which works with specific apps, e.g., Spotify and Netflix.
Today it appears the battle of the assistants has found its way from the home to the car, and in some vehicles there can only be one.
With companies like Qualcomm promoting enhanced Bluetooth and Wi-Fi 6 connectivity for vehicles with onboard Wi-Fi, staying connected may soon be a challenge of the past. Furthermore, Apple recently announced an embedded Siri assistant, so by extension one should be able to use voice commands with CarPlay, the Siri-enabled infotainment system, both in and out of cell range.
In a multiple-assistant world, it would be nice to switch between intelligent assistants or use them concurrently when appropriate. Ideally, I’d hop into my car, which I’d unlock with my unique biometric wake word, and my seats, mirrors, and temperature would automatically adjust to my profile. My car could then ask me if I’d like to connect my phone, activating Siri via CarPlay. Commands specific to the operation or domain of my car would be engaged hands-free by the utterance of my unique wake word, e.g., “Hey, Bright Rider”; once the engine is on, the car would be in always-listening mode, with no need to tap a button to engage voice commands. Over time, my Bright Rider would adapt to my voice, reducing error rates to nearly zero, and if I ever ran into car trouble, I could even ask my car to listen and self-diagnose with its embedded SoundID.
The truth is, the future isn’t so far off: Sensory already deploys all the technologies needed to deliver the experience I’ve described. Hopefully car manufacturers will realize that they need not limit the in-vehicle voice control experience to infotainment systems managed by the likes of Apple with CarPlay and Google with Android Auto. The technology to enable a native vehicle control system that is updatable, intelligent, and plays well with others is already here; I just wish it were in my car.