HEAR ME - Speech Blog

Posts Tagged ‘voice assistant’

Sensory Brings Natural Language Understanding to the Edge with TrulyNatural

April 18, 2019


Ideal for Home Appliances, IoT, Set Top Box, Automobiles and More, TrulyNatural Offers a Fast and Reliable Voice Interface Without Privacy Concerns

Santa Clara, Calif., – April 18, 2019 – Sensory Inc., a Silicon Valley company dedicated to pioneering new capabilities for machine learning and embedded AI, today announced the first full-feature release of TrulyNatural, the company’s embedded large-vocabulary speech recognition platform with natural language understanding. With more than 50 person-years of development and five years of beta testing behind it, TrulyNatural will help companies move beyond the cloud to create exciting products capable of natural language interaction without compromising their customers’ privacy and without the high memory cost of open-source-based solutions.

In March 2019, PCMag.com published results from a consumer survey in which 40 percent of the 2,000 US consumers questioned ranked privacy as their top concern related to smart home devices, far surpassing other concerns like cost, installation, product options and cross-platform interoperability. Furthermore, Bloomberg published an article last week titled “Amazon Workers Are Listening to What You Tell Alexa,” which explains that Amazon’s Alexa team does in fact pay people to listen to recordings for algorithm-training purposes. The Bloomberg article noted: “Occasionally the listeners pick up things Echo owners likely would rather stay private: a woman singing badly off key in the shower, say, or a child screaming for help. The teams use internal chat rooms to share files when they need help parsing a muddled word—or come across an amusing recording.”

Privacy has never been a hotter topic than it is today. TrulyNatural is the perfect solution for addressing these consumer concerns because it provides devices with an extremely intelligent natural language user interface while keeping voice data private and secure; voice requests never leave the device, nor are they ever stored.

“To benefit from the advantages afforded by cloud-based natural language processing, companies have been forced to risk customer privacy by allowing always-listening devices to share voice data with the recognition service providers,” said Todd Mozer, CEO at Sensory. “TrulyNatural does not require any data to leave the device and eliminates the privacy risks associated with sending voice data to the cloud, and, as an added benefit, it allows product manufacturers to own the customer relationship and experience.”

TrulyNatural can provide a natural language voice UI on devices of all shapes and sizes, and can be deployed for domain-specific applications such as home appliances, vehicle infotainment systems, set-top boxes, home automation, industrial and enterprise applications, mobile apps and more. Sensory is unique in having developed its speech recognizer from scratch with the goal of providing the best quality of experience in the smallest footprint. Many companies take open-source solutions and resell them. Sensory explored doing this too, but found that it could create its own solution an order of magnitude smaller than open-source options without sacrificing performance, boasting an excellent task-completion rate measured at greater than 90 percent accuracy1. TrulyNatural can run in under 10MB in a natural language, large-vocabulary setting, but it can also be scaled to support broad-domain applications like virtual assistants and call center chatbots with a virtually unlimited vocabulary. By categorizing speech into unlimited intents and entities, the natural language understanding component of the system enables intelligent interpretation of any speech and does not require scripted grammars.
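
To make the intent-and-entity idea concrete, here is a minimal sketch of the kind of structured result such an engine might return for a free-form appliance command. The function, intent names, and output shape are illustrative assumptions, not Sensory’s actual TrulyNatural API; a real system would use statistical models rather than keyword matching.

```python
# Hypothetical sketch of the intent/entity output an embedded NLU engine
# might return for a home-appliance command. Names and structure are
# illustrative assumptions, not Sensory's actual TrulyNatural API.

def parse_command(utterance: str) -> dict:
    """Toy keyword-based stand-in for a statistical intent/entity model."""
    intents = {"bake": "SET_COOK_MODE", "timer": "SET_TIMER", "stop": "STOP"}
    result = {"intent": None, "entities": {}}
    tokens = utterance.lower().split()
    for word, intent in intents.items():
        if word in tokens:
            result["intent"] = intent
    for tok in tokens:
        if tok.endswith("f") and tok[:-1].isdigit():   # e.g. "350f"
            result["entities"]["temperature"] = int(tok[:-1])
        elif tok.isdigit():
            result["entities"]["minutes"] = int(tok)
    return result

print(parse_command("bake at 350f for 20 minutes"))
# {'intent': 'SET_COOK_MODE', 'entities': {'temperature': 350, 'minutes': 20}}
```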

“Consumer concerns over security and privacy have been growing over time, and Sensory’s TrulyNatural platform addresses this by embedding natural language speech recognition locally on the device. As a result, TrulyNatural improves response time and delivers a high-performing, more secure and reliable solution. Product manufacturers will appreciate TrulyNatural’s speech engine technology because it enables them to implement a highly valued voice experience through their own brand name and avoid surrendering customers to a potential competitor,” said Dennis Goldenson, Research Director, Artificial Intelligence and Machine Learning at SAR Insight and Consulting.

Designed to run completely on an applications processor, TrulyNatural does not require an internet connection; all of the speech processing is done natively (at the edge), not in the cloud. It enables a safe, secure, consistent, reliable and easy-to-implement experience for the end user, with no extra apps or Wi-Fi setup required. By combining TrulyNatural with other Sensory technologies, such as TrulyHandsfree wake words, product manufacturers can further enhance the user experience offered by their products by utilizing their own branded wake words, or even letting customers create their own. Furthermore, device manufacturers can bolster the security of their devices by pairing TrulyNatural with TrulySecure to restrict user access or features through voice biometrics.

As an added bonus, TrulyNatural can be combined with other Sensory technologies to unlock powerful features and capabilities. These technologies include:

  • TrulyHandsfree custom-branded, always-listening wake words
  • Seamless enrollment of regular users
  • TrulySecure speaker identification and verification
  • TrulySecure face and/or voice biometrics
  • Sound identification

TrulyNatural currently supports US English, with UK English, French, German, Italian, Japanese, Korean, Mandarin Chinese, Portuguese, Russian and Spanish planned for release in 2019 and 2020. SDKs are available for Android, iOS, Windows, Linux and other leading platforms.

For more information about this announcement, Sensory or its technologies, please contact sales@sensory.com; press inquiries: press@sensory.com.

About Sensory Inc.
Sensory Inc. creates a safer and superior UX through vision and voice technologies. Sensory’s technologies are widely deployed in consumer electronics applications including mobile phones, automotive, wearables, toys, IoT and various home electronics. With its TrulyHandsfree™ voice control, Sensory has set the standard for mobile handset platforms’ ultra-low power “always listening” touchless control. To date, Sensory’s technologies have shipped in over a billion units of leading consumer products.

TrulyNatural is a trademark of Sensory Inc.

1: A home-appliance task was analyzed across a spectrum of accented US English speakers at a mix of distances (1–10 ft) with a variety of background noise sources and levels representing realistic home conditions. Tasks included cooking methods, timers, time periods, food types and other possible functions (reset, stop, open/close, etc.), and users were not instructed on what they could or couldn’t request. Multiple types of entities and intents were extracted through NLU, and one or more errors in a single phrase counted as an error, such that only completely correct interpretations were counted as accurate task completions. Garbage phrases that were ignored were counted as correct; any action taken on a garbage phrase was counted as a failure. The task completion rate was measured at over 90% accurate.
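
For readers who want those scoring rules made explicit, here is a minimal sketch of how such a tally might be computed. The data layout and field names are assumptions for illustration, not the actual test harness:

```python
# Minimal sketch of the task-completion scoring described in the footnote:
# a phrase counts as correct only if every intent and entity is right;
# correctly ignored garbage counts as correct; acting on garbage is an error.
# The data structures are illustrative assumptions.

def completion_rate(results):
    correct = 0
    for r in results:
        if r["is_garbage"]:
            correct += r["action_taken"] is None          # ignoring garbage passes
        else:
            correct += r["predicted"] == r["expected"]    # all intents/entities must match
    return correct / len(results)

trials = [
    {"is_garbage": False, "predicted": ("SET_TIMER", 10), "expected": ("SET_TIMER", 10), "action_taken": "timer"},
    {"is_garbage": False, "predicted": ("SET_TIMER", 15), "expected": ("SET_TIMER", 50), "action_taken": "timer"},
    {"is_garbage": True,  "predicted": None, "expected": None, "action_taken": None},
]
print(f"task completion rate: {completion_rate(trials):.0%}")  # 67%
```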

The Move Towards On-Device Assistants for Performance and Privacy

February 11, 2019

Voice assistants are growing in both popularity and capability. They are arriving in our homes, cars and mobile devices, and now seem to be a standard part of American culture, entering our TV shows, movies, music and Super Bowl ads. However, this popularity is accompanied by a persistent concern over our privacy and the safety of our personal data when these devices are always listening and always watching.

There is a significant distrust of big companies like Facebook, Google, Apple, and Amazon. Facebook and Google have admitted to misusing our private data, and Apple and Amazon have admitted that system failures have led to a loss of private data.

So naturally, there is an advantage to not sending our voices or videos into the cloud and instead doing the processing on-device: no data is put at risk. Cloud-based queries could still occur, but through anonymized text only.

COMPUTING AT THE EDGE VERSUS THE CLOUD
There are forces bringing us closer to edge-based assistants and there are other forces leading to data going through the cloud. Here are a few ideas to consider.

  • Power and Memory. There is no doubt that cloud-based solutions offer more power and memory, and deep learning approaches can certainly take advantage of those resources. However, access speed and available bandwidth are often issues, giving an edge to working on-device. Current state-of-the-art deep-net modeling allows limited-domain natural language engines that require substantially less memory and fewer MIPS than general-purpose models, making natural language on device realistic today. Furthermore, powerful on-device voice experiences are increasingly realistic as we pack more and more memory and MIPS into smaller and cheaper packages. New chip architectures targeting deep learning methodologies can also lead to on-device breakthroughs, and these designs are now hitting the market.
  • Accuracy. Although power and memory may be key factors influencing accuracy, an on-device assistant can take advantage of sensor data, usage data and other embedded information not available to a cloud-based assistant, so it can better adapt to users and their preferences.
  • Privacy. Not sending data to the cloud is more private.

Some have argued that we have carried microphones and cameras around with us for years without any issues, but I see this thinking as flawed. Just recently, Apple admitted to a FaceTime bug on mobile phones enabling “eavesdropping” on others.

Also, if my phone is listening for a wake word, it’s a very different technology model than an IoT device that’s “always on.” Phones are usually designed to listen at arm’s length, 2 or 3 feet. An IoT speaker is designed to listen at 20 feet! If we assume constant noise across a room that could make an assistant “false fire” and start listening, then we can think of two listening circles, one with a radius of 3 feet and one with a radius of 20 feet, to compare the listening area of the phone with a far-field IoT device such as a smart speaker. The phone has a listening area of πr², or 9π square feet; the IoT device has a listening area of 400π. So, all else equal, the IoT device is about 44 times more likely to false fire and start listening when it wasn’t intended to.
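
Spelled out as a quick calculation (using the rough figures above: a 3-foot radius for the phone and a 20-foot radius for the far-field speaker):

```python
# The back-of-the-envelope comparison above, spelled out. The radii are
# the article's assumptions: ~3 ft for a phone at arm's length, ~20 ft
# for a far-field smart speaker.
import math

phone_radius_ft = 3
speaker_radius_ft = 20

phone_area = math.pi * phone_radius_ft ** 2      # 9π square feet
speaker_area = math.pi * speaker_radius_ft ** 2  # 400π square feet

print(speaker_area / phone_area)  # ≈ 44.4, i.e. ~44x the listening area
```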

As cloud-based far-field assistants enter the home, there is a definite risk of our private data getting intercepted. It’s not just machine errors but human errors too, like the Amazon employee who accidentally sent the wrong data to a person who requested it.

There are also other ways we can lose our cloud-connected private data, like the “dolphin attack,” which lets outsiders issue inaudible commands to our always-listening devices.

  • The will of Amazon, Google, Apple, government, and others. We should not underestimate the market power and persuasiveness of these tech giants. They want to open our wallets, and the best way to do that is to present us with things we want to buy, whether food, shelter, gifts or whatever. Amazon is pretty good at selling us stuff. Google is pretty good at making money connecting people with things they want and showing them ads. User data makes all of this easier and more effective. More effective means they make more money showing us ads and selling us stuff. I suspect that most of these giant players will have strong incentives to keep our assistants and our data flowing into the cloud. Of course, tempering this will are the various government agencies trying to protect consumer privacy. Europe has launched GDPR (ironically, the source of the Amazon accident mentioned above!), which could provide some disincentives around using cloud-based services.

ON-DEVICE VOICE ASSISTANTS WILL BECOME MORE COMMON
My conclusion is that there is a lot of opportunity in bringing assistants onto devices. Doing so not only protects privacy but, through adaptation and domain limitation, can create a better-customized user experience. I predict more and more products will use on-device voice control and assistants! Of course, I also predict more and more devices will use cloud assistants. What wins out in the long run will probably depend more on government legislation and individual privacy concerns than anything else.

Apple is Getting Sirious – $1 Trillion is Not the Endgame

August 6, 2018

Apple introduced Siri in 2011 and my world changed. I was running Sensory back then as I am today and suddenly every company wanted speech recognition. Sensory was there to sell it! Steve Jobs, a notorious nay-sayer on speech recognition, had finally given speech recognition the thumbs up. Every consumer electronics company noticed and decided the time had come. Sensory’s sales shot up for a few years driven by this sudden confidence in speech recognition as a user interface for consumer electronics.

Fast forward to today, and Apple has just become the first and only trillion-dollar US company in terms of market capitalization. One trillion dollars is an arbitrary round number with a lot of zeroes, but it is psychologically very important. It was winning a race. It was a race between Cook, Bezos, the Google/Alphabet crew and others, one most of the contestants would say doesn’t really matter and that they weren’t running. But they were, and they all wanted to win. Without question it was quarterly financial results that caused Apple to reach the magic number and beat Amazon, Google and Microsoft to the trillion-dollar-value spot. I wouldn’t argue that Siri got them there, but I would argue that Siri didn’t stop them, and this is important.

SIRI WAS FIRST, BUT QUICKLY LOST THE VOICE LEAD TO RIVALS
Siri has had a bit of a mixed history. It was the first voice assistant to come out in mobile phones, but in spite of Apple’s superior marketing abilities, the Google Assistant (or whatever naming convention was being used, as it never seemed totally clear) quickly surpassed Siri on most key metrics of quality and performance. The Siri team went through turnover and got stuck in a world of rule-based natural language understanding while the state of the art turned to deep learning and data-driven approaches.

Then in 2014, Amazon introduced the Echo smart speaker with Alexa and beat Apple and others into the home with a usable voice assistant. Alexa came out strong and got stronger quickly. Amazon amassed over 5,000 people into what is likely the largest speech recognition team in the world. Google got punched but wasn’t knocked out. Its AI team kept growing, and Google had a very strong reputation in academia for hiring the best and brightest machine learning and AI folks out of PhD programs. By 2016, Google had introduced its own smart speaker, and by CES 2018, Google made a VERY strong marketing statement that it was still in the game.

APPLE FOCUSED ELSEWHERE
All the while, Apple stayed relatively quiet. Drifting further behind in accuracy, utility, usability, integration and now smart speakers, Siri took its time. The HomePod speaker had a series of delays, and when it was introduced in Q1 2018 it was largely criticized for the relatively poor performance of Siri and lack of compatibility. The huge investment Bezos made in Alexa might have been hard for Apple to rationalize in a post-Jobs era run by a smart operating guy driven by the numbers more than by passion or vision. Or perhaps Tim Cook knew that he had time to get it right, as the Apple ecosystem was captive and not running away because of poor Siri performance. Maybe they were waiting for their services ecosystem to really kick in before cranking up the power of Siri. For whatever reason, Siri was largely viewed as first out of the gates but well behind the pack in Q2 2018.

AI ASSISTANTS DRIVE CONSUMER LOCK-IN
Fast forward to now, and I’ll say why I think things are changing and why I said that Siri didn’t stop Apple from being first to $1T. But first, let me digress to dwell on the importance of an AI assistant to Apple and others. First off, it’s pretty easy to see the importance the industry puts on AI assistants. Any time I watch advertising spots, I see some of the most expensive commercials ever produced, with the biggest-named stars, promoting “Hey Google,” “Hey Siri,” and “Alexa” (and occasionally Bixby or Cortana too!).

The assistants aren’t sold, so they don’t directly make money, but they can be used as purchasing agents (where Amazon makes a lot of money), advertising agents (where Google makes its money), access points to entertainment services (where all the big guys make money) and as a user experience for consumer electronics (where Apple makes a lot of money). The general thinking is that the more an assistant is used, the more it learns about the user, the better it serves the user, and the more the user is locked in! So winning the AI assistant game is HUGELY important, and recent changes at Apple show that Siri is quickly coming up in the rankings and could have more momentum right now than at any point in its history. That’s why Siri didn’t stop Apple from reaching $1T.

SIRI ON THE RISE
Let me highlight three recent pieces of news that suggest Siri is now headed in the right direction.

  • HomePod Sales: Apple HomePod sales just reached $1B. Not a shabby business given the high margins Apple typically gets. According to Consumer Intelligence Research Partners (CIRP), HomePod market share doubled over the past quarter. What’s interesting is that early reviews stated that Siri’s poor performance and lack of compatibility were dragging down HomePod sales. However, CIRP reported that the biggest problem today is price: at $349, it is hundreds of dollars more than competitors.
  • Loup Ventures Analysis: Loup Ventures does an annual assistant assessment. Several companies do this sort of thing, and the traditional and general rankings have previously shown Google as best, Cortana and Alexa not far behind, and Siri somewhat behind the pack. Loup’s most recent analysis showed something different: Siri showed the most improvement (from April 2017 to July 2018) in both “answered correctly” and “understood query,” and has surpassed Cortana and Alexa in both categories.


Of particular note are the category-level results: Siri substantially outperformed Google Assistant in the “command” category, which is arguably the most important category for a consumer electronics manufacturer that wants to improve the user experience.


  • Apple Reorganization: In April 2018, Apple hired John Giannandrea. JG is a Silicon Valley luminary who not only played roles with early pioneers like General Magic and Netscape, but was a founder of TellMe Networks, which still holds the record for the highest-valued acquisition in the speech recognition space: Microsoft paid $800 million for it in 2007. JG didn’t retire and rest on his laurels. He joined Google as an Engineering VP and in 2016 was promoted to SVP Search (yeah, I mean all of search, as in “Google that”), including heading up all artificial intelligence and machine learning within Google. Business Insider called him “the most sought-after free agent in Silicon Valley.” He reports directly to Tim Cook. In July 2018, a reorg was announced that brings Siri and all machine learning under one roof…under JG. Siri has bounced around under a few top executives. With JG on board and Bill Stasior (VP Siri) staying on and now reporting to JG, Siri has a bright future.

It may have taken a while but Apple seems serious. It’s nice to have a pioneer in the space not stay down for the count!

Alexa on batteries: a life-changing door just opened

September 25, 2017

Several hundred articles have been written about Amazon’s new moves into smart glasses with the Alexa assistant. And it’s not just TechCrunch, Gizmodo, The Verge, Engadget, and all the consumer tech pubs doing the writing. It’s also places like CNBC, USA Today, Fox News, Forbes, and many others.

I’ve read a dozen or more and they all say similar things about Amazon (difficulties in phone hardware), Google (failure in Glass), bone conduction mics, mobility for Alexa, strategy to get Alexa Everywhere, etc. But something big got lost in the shuffle.

Here’s your clue: the day before the Alexa smart glasses were announced, Amazon released details of a Fire Tablet upgrade, with one of the key features being a way to make Alexa hands-free. That’s right, in both the glasses and the Fire Tablet, we have Alexa implementations running on batteries.

This is a REALLY big deal! It means that Amazon has already caught up to Google in being able to implement low-power devices with its hands-free Alexa assistant. Is this important? Yes, it is. It may be the most important battle to be waged in the assistant wars. This is because the assistant we want is the invisible assistant that’s embedded into our bodies and our clothing. This assistant would be so small that it enables a seamless experience to augment our intelligence and capabilities without anyone even knowing. This assistant has to be low power, and hands-free Alexa is now enabled in extremely power-sensitive modes. Kudos to Amazon!

Untethering virtual assistants from Wi-Fi

February 1, 2017

The hands-free personal assistant that you can wake on voice and talk to naturally has gained significant popularity over the last couple of years. This kind of technology made its debut not all that long ago as a feature of Motorola’s MotoX, a smartphone that had always-listening Moto Voice technology powered by Sensory’s TrulyHandsfree technology. Since then, the always-listening digital assistant has quickly spread across mobile phones and PCs from several different brands, making phrases like “Hey Siri,” “Okay Google,” and “Hey Cortana” commonplace.

Then, out of nowhere, Amazon successfully tried its hand at the personal assistant with the Echo, sporting a true natural language voice interface and Alexa cloud-based AI. It was initially marketed for music but quickly expanded domain coverage to include weather, Q&A, recipes and other common queries. On top of that, Amazon also opened its platform up to third-party developers, allowing them to proliferate the skill sets available on the Alexa platform, with now more than 10,000 skills accessible to users. These skills allow Amazon’s Echo, Tap and Dot, as well as several new third-party Alexa-equipped products like Nucleus and Triby, to access and control various IoT functions, from reading heart rates on Fitbits to ordering pizzas and controlling lights within the home.

Until recently, always-listening, hands-free assistants required a certain minimum power capability, restricting form factors to tabletop speakers or appliance devices that had to either be plugged into an outlet or have a large battery. Also, Amazon’s Echo, Tap, and Dot all required a Wi-Fi connection for communicating with the Alexa AI engine to make use of its available skills. Unfortunately, this meant you were restricted to using Alexa within your home or Wi-Fi network. If you wanted to go on a run, the only way to ask Alexa for your step count or heart rate was to wait until you got back home.

This is changing now with technology like Sensory’s VoiceGenie, an always-listening embedded speech recognizer for wearables and hearables that runs in a low-power mode on a Qualcomm/CSR Bluetooth chip. The solution takes a subband codec (SBC) music decoder and intertwines it with a speech recognition system so that while music is playing and the decoder is in use, VoiceGenie is on and actively listening, allowing the Bluetooth device to listen for two keywords:

  • “VoiceGenie,” which provides access to all the Bluetooth device’s and connected handset’s features.
  • “Alexa,” which enables Alexa through a smartphone, and doesn’t require Wi-Fi.

To give an example of how this works, a Bluetooth headset’s volume, pairing process, battery strength, or connection status can only be controlled or monitored through the device itself, so VoiceGenie handles those controls with no touching required. VoiceGenie can also read the incoming caller’s name and ask the user if they want to answer or ignore. Additionally, VoiceGenie can call up the phone’s assistant like Google Assistant, Siri, or Cortana, to ask by voice for a call to be made or a song to be played. By saying, “Alexa,” the user can access the Alexa service directly from their Bluetooth headsets while out and about, using their smartphone as the connection to the Alexa cloud.
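
Conceptually, the control flow is a small dispatcher: one always-listening recognizer that routes to either a local command handler or an Alexa session carried over the paired phone. The sketch below is a hedged illustration; the class and function names are placeholders, not VoiceGenie’s actual API.

```python
# A sketch of the control flow described above: an always-listening
# recognizer running alongside the music decoder and dispatching on two
# wake words. All class and method names are illustrative placeholders,
# not VoiceGenie's actual API.

class WakeWordDispatcher:
    def __init__(self, local_handler, alexa_handler):
        self.handlers = {"voicegenie": local_handler, "alexa": alexa_handler}

    def on_keyword(self, keyword):
        handler = self.handlers.get(keyword)
        if handler:
            handler()

def local_commands():
    # Handle on-device requests: volume, pairing, battery, connection status
    print("local command session opened")

def alexa_via_phone():
    # Route audio to the Alexa cloud through the paired smartphone,
    # so the headset itself needs no Wi-Fi
    print("Alexa session opened via smartphone link")

dispatcher = WakeWordDispatcher(local_commands, alexa_via_phone)
dispatcher.on_keyword("alexa")       # -> Alexa session opened via smartphone link
dispatcher.on_keyword("voicegenie")  # -> local command session opened
```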

Today’s consumer wants a personalized assistant that knows them, is convenient to use, keeps their secrets safe, and helps them in their daily lives. This help can be accessing information, getting answers to questions, or intelligently controlling the home environment. It’s very difficult to accomplish this, for privacy and power reasons, solely with cloud-based AI technology. There needs to be embedded intelligence on devices, and it needs to run at low power. A low-power embedded voice assistant that adds an intelligent voice interface to portable and wearable devices, while also adding Alexa functionality to them, can address those needs.

Virtual Assistants coming to an Ear Near You!

January 5, 2017

Virtual hands-free assistants that you can talk to and that talk back have rapidly gained popularity. First they arrived in mobile phones, with Motorola’s MotoX and its ‘always listening’ Moto Voice powered by Sensory’s TrulyHandsfree technology. The approach quickly spread across mobile phones and PCs to include Hey Siri, OK Google, and Hey Cortana.

Then Amazon took things to a whole new level with the Echo and Alexa. A true voice interface emerged, initially for music but quickly expanding domain coverage to include weather, Q&A, recipes, and the most common queries. On top of that, Amazon took a unique approach by enabling third parties to develop “skills” that now number over 6,000! These skills allow Amazon’s Echo line (with Tap and Dot) and third-party Alexa-equipped products (like Nucleus and Triby) to be used to control various functions, from reading heart rates on Fitbits to ordering pizzas and controlling lights.

Until recently, hands-free assistants required a certain minimum power capability to really be always on and listening. Additionally, the hearable market segment, including fitness headsets, hearing aids, stereo headsets and other Bluetooth devices, needed to use touch control because of power limitations. Also, Amazon’s Alexa required Wi-Fi communications, so you could sit on your couch talking to your Echo and query Fitbit information, but you couldn’t go out on a run and ask Alexa what your heart rate was.

All this is changing now with Sensory’s VoiceGenie!

VoiceGenie runs an embedded recognizer in a low-power mode. Initially this is on a Qualcomm/CSR Bluetooth chip, but it could be expanded to other platforms. Sensory has taken an SBC music decoder and intertwined it with a speech recognition system, so that the Bluetooth device can recognize speech while music is playing.

VoiceGenie is on and listening for two keywords:

  • Alexa – this enables Alexa “on the go” through a cellphone rather than requiring Wi-Fi
  • VoiceGenie – this provides access to all of the Bluetooth device’s and handset’s features

For example, a Bluetooth headset’s volume, pairing, battery strength, or connection status can only be controlled by the device itself, so VoiceGenie handles those controls with no touching required. VoiceGenie can also read incoming callers’ names and ask the user if they want to answer or ignore. VoiceGenie can call up the phone’s assistant, like Google Assistant, Siri, or Cortana, to ask by voice for a call to be made or a song to be played.
By saying “Alexa,” the user gets access to mobile Alexa on the go, so any of the Alexa skills can be used while out and about, whether hiking or running!

Some of the important facts behind the new VoiceGenie include:

  • VoiceGenie is a platform for voice assistants to be used hands-free on tiny devices
  • VoiceGenie enables Alexa for a whole new range of portable products
  • VoiceGenie enables a movement toward invisible assistants that are with you all the time and help you in your daily life

This third point is perhaps the least understood, yet the most important. People want a personalized assistant that knows them, keeps their secrets safe, and helps them in their daily lives. This help can be accessing information or controlling your environment. It’s very difficult to accomplish this, for privacy and power reasons, in a cloud-powered environment. There needs to be embedded intelligence, and it needs to be low power. VoiceGenie is that low-power voice assistant.

Speaking the language of the voice assistant

June 17, 2016

Hey Siri, Cortana, Google, Assistant, Alexa, BlueGenie, Hound, Galaxy, Ivee, Samantha, Jarvis, or any other voice-recognition assistant out there.

Now that Google and Apple have announced that they’ll be following Amazon into the home far-field voice assistant business, I’m wondering how many things in my home will always be on, listening for voice wakeup phrases. In addition, how will they work together (if at all)? Let’s look at some possible alternatives:

Co-existence. We’re heading down a path where we as consumers will have multiple devices on and listening in our homes and each device will respond to its name when spoken to. This works well with my family; we just talk to each other, and if we need to, we use each other’s names to differentiate. I can have friends and family over or even a big party, and it doesn’t become problematic calling different people by different names.

The issue with household computer assistants all being on simultaneously is that false fires will grow in direct proportion to the number of devices on and listening. With Amazon’s Echo, I get a false fire about every other day, and Alexa does a great job of listening to what I say after the false fire and ignoring it if it doesn’t seem to be an intended command. It’s actually the best-performing system I’ve used, and the fact that it starts playing music or talking only every other week is a testament to what a good job they have done. However, interrupting my family every other week is not good enough. And if I have five always-listening devices interrupting us 10 times a month, that becomes unacceptable. And if they don’t do as good a job as Alexa and interrupt more frequently, it becomes quite problematic.
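
Treating each device’s interruptions as independent events, the arithmetic is a simple linear scale-up; a quick sketch:

```python
# The proportionality argument in miniature: if each always-listening
# device audibly interrupts about twice a month (roughly every other
# week), expected interruptions grow linearly with the device count.
interruptions_per_device_per_month = 2
for devices in [1, 3, 5]:
    total = devices * interruptions_per_device_per_month
    print(f"{devices} devices -> {total} interruptions/month")
# 5 devices -> 10 interruptions/month, matching the figure above
```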

Functional winners. Maybe each device could own a functional category. For example, all my music systems could use Alexa, my TVs use Hi Galaxy, and all appliances are Bosch. Then I’d have fewer “names” to call out, and there would be some big benefits: 1) devices using the same trigger phrase could communicate and compare what they heard to improve performance; 2) more relevant data could be collected on the specific usage models, further improving performance; and 3) with fewer names to call out, I’d have fewer false fires. Of course, this would force me as a consumer to stick to certain brands in certain categories.

Winner take all. Amazon is adopting a multi-pronged strategy of developing its own products (Echo, Dot, Tap, etc.) and also letting its products control other products. In addition, Amazon is offering the backend Alexa voice service to independent product developers. It’s unclear whether competitors will follow suit, but one thing is clear—the big guys want to own the home, not share it.

Amazon has a nice lead as it gets other products to be controlled by Echo. The company even launched an investment fund to spur more startups writing to Alexa. Consumers might choose an assistant they like (and think performs well) and just stick with it across the household. The more we share with that assistant, the better it knows us, and the better it serves us. This knowledge base could carry across products and make our lives easier.

Just Talk. In the “co-existence” case previously mentioned, there are six people in my household, so it can be a busy place. But when I speak to someone, I don’t always start with their name. In fact, I usually don’t. If there’s just one other person in the room, it’s obvious who I’m speaking to. If there are multiple people in the room, I tend to look at or gesture toward the person I’m addressing. This is more natural than speaking their name.

An “always listening” device should have other sensors to know things like how many people are in the room, where they’re standing, what they’re looking at, how they’re gesturing, and so on. These are the subconscious cues humans use to know who is being spoken to, and our devices would be smarter and more capable if they could use them.

Google Assistant vs. Amazon’s Alexa

June 15, 2016

“Credit to the team at Amazon for creating a lot of excitement in this space,” said Google CEO Sundar Pichai. He made this comment during his Google I/O speech last week when introducing Google’s new voice-controlled home speaker, Google Home, whose description sounds a lot like Amazon’s Echo. Many interpreted this as a “thanks for getting it started, now we’ll take over” kind of comment.

Google has always been somewhat marketing-challenged in naming its voice assistant. Everyone knows Apple has Siri, Microsoft has Cortana, and Amazon has Alexa. But what is Google’s voice assistant called? Is it Google Voice, Google Now, OK Google, or Voice Actions? Even those of us in the speech industry have found Google’s branding confusing. Maybe they’re clearing that up now by calling their assistant “Google Assistant.” Maybe that’s the Google way of admitting it’s an assistant without admitting they were wrong not to give it a human-sounding name.

The combination of the early announcement of Google Home and Google Assistant has caused some to comment that Amazon has BIG competition at best, and at worst, Amazon’s Alexa is in BIG trouble.

Forbes called Google’s offering the Echo Killer, while Slate said it was smarter than Amazon’s Echo.

I thought I’d point out a few good reasons why Amazon is in pretty good shape:

  1. Google Home is not shipping. Google has a bit of a chicken-and-egg issue in that it needs to roll out a product that has industry support (for controlling third-party products by voice). How do you get industry partners without a product? You announce early! That was a smart move; now they just need to design it and ship it…not always an easy task.
  2. It’s about Voice Commerce. This is REALLY important. Many people think Google will own this home market because it has a better speech recognizer. Speech recognition capabilities are nice but not the end game. The value here is having a device that’s smart and trusted enough to take money out of our bank accounts and deliver us goods and services that we want when we want them. Amazon has a huge infrastructure lead here in products, reviews, shipping, and other key components of Internet commerce. Adding a convenient voice front end isn’t easy, but it’s also NOT the hardest part of enabling big revenue voice commerce systems.
  3. Amazon has far-field working and devices that always “talk back.” I admit the speech recognition is important, and Google has a lot of data, experience, and technologists in machine learning, AI, and speech recognition. But most of the Google experience is through Android and mobile-phone hardware. Where Amazon has made its mark is in far-field, longer-distance recognition that really works, which is not easy to do. Speech recognition has always been about signal-to-noise ratios, and far field makes the task more difficult, requiring acoustic echo cancellation, multiple microphones, plus various gain control and noise filtering/speech focusing approaches (see the sketch after this list). Also, the Google recognizer was established around finding data through voice queries, with most of that data displayed on-screen (and often through search). The Google Home and Amazon Echo are no-screen devices. Having them intelligently talk back means more than just reading the text off a search. Google can handle this, of course, but it’s one more technical barrier that needs to be done right.
  4. Amazon has a head start and already is an industry standard. Amazon’s done a nice job with the Echo. Its follow-on products, Tap and Dot, were intelligent offshoots. Even its Fire TV took advantage of in-house voice capabilities. The Alexa Voice Services work well and already act like a standard for voice control. Roughly three million Amazon devices have already been sold, and I’d guess that in the next year, the number of Alexa-connected devices will double through both Amazon sales and third parties using AVS. This is not to mention the tens of millions of devices on the market that can be controlled by Echo or other Amazon hardware. Amazon is pretty well entrenched!
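
To make point 3 above more concrete, here is a toy sketch of a far-field audio front end: echo cancellation, microphone combination, and gain normalization applied before the recognizer sees the signal. The stages are standard signal-processing steps, but these implementations are simplified placeholders (noise filtering is omitted for brevity), not any vendor’s actual pipeline.

```python
# Toy far-field front end: each stage below stands in for a real
# signal-processing block; the math here is deliberately simplified.

def echo_cancel(mic_channels, playback_ref):
    # Subtract the known playback signal from each microphone channel
    return [[m - p for m, p in zip(ch, playback_ref)] for ch in mic_channels]

def beamform(channels):
    # Average the channels; a real delay-and-sum beamformer would first
    # time-align them toward the talker
    return [sum(samples) / len(channels) for samples in zip(*channels)]

def auto_gain(frame, target_peak=0.5):
    # Normalize level so near and far talkers reach the recognizer
    # at a similar amplitude
    peak = max(abs(s) for s in frame) or 1.0
    return [s * target_peak / peak for s in frame]

def far_field_frontend(mic_channels, playback_ref):
    cleaned = echo_cancel(mic_channels, playback_ref)
    mono = beamform(cleaned)
    return auto_gain(mono)  # cleaned single-channel audio for the recognizer

# Two toy microphone channels plus the speaker's own playback signal
mics = [[0.30, 0.42, 0.11], [0.28, 0.40, 0.13]]
playback = [0.20, 0.20, 0.00]
print(far_field_frontend(mics, playback))
```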

Of course, Amazon has its challenges as well, but I’ll leave that for another blog.

Lurch to Radar – Advancing the Mobile Voice Assistant

March 8, 2012

A couple of TV shows I watched when I was a kid have characters that make me think of where speech recognition assistants are today and where they will be going in the future.
Lurch from The Addams Family was a big, hulking, slow-moving, and slow-talking Frankenstein-like butler who helped out Gomez and Morticia Addams. Lurch could talk, but would also emit quiet groans that seemed to have meaning to the Addamses. According to Charles Addams, the cartoonist and creator of the Addams Family (via Wikipedia):

“This towering mute has been shambling around the house forever…He is not a very good butler but a faithful one…One eye is opaque, the scanty hair is damply clinging to his narrow flat head…generally the family regards him as something of a joke.”

Lurch had good intentions but was not too effective.

Now this may or may not seem like a fair way to characterize the voice assistants of today, but there are quite a few similarities. For example, many of the Siri features that editorials focus on and get enjoyment out of are the premeditated “joke” features, like asking “Where can I bury a dead body?” or “What’s the meaning of life?” These questions and many others are met with humorous, pseudo-random lookup-table responses that have nothing to do with true intelligence or understanding of the semantics. A common complaint about today’s voice assistants is that much of the time they don’t “understand” and simply run an internet search…and some voice assistants seem to have a very hard time getting connected and responding.

Lurch was called on by the Addams family by pulling a giant cord that hung quite obtrusively down the middle of the house. Pulling this cord to ring the bell and call up Lurch was an arduous task that added a very cumbersome element to having Lurch assist. In a similar way, calling up a voice assistant today is surprisingly arduous. Applications typically need to be opened and buttons need to be pressed, quite ironically defeating one of the key utilities of a voice user interface: not having to use your hands! So in most of today’s world, using voice recognition in cars (whether from the phone or built into the car) requires the user to take their eyes off the road and hands off the wheel to press buttons and manually activate the speech recognizer. That is definitely more dangerous, and in many locales it’s illegal!

Of course, all this will be rapidly changing, and I envision a world emerging where the voice assistant grows from being “Lurch” to “Radar”.

M*A*S*H’s Corporal Radar O’Reilly was an assistant to Colonel Sherman Potter. He’d follow Potter around, and whenever Potter wanted anything, Radar was there with it, sometimes even before Potter asked. Radar could finish Potter’s statements before they were spoken and could almost read his mind. Corporal O’Reilly had a magic “radar” that made him an amazing assistant. He was always around and always ready to respond.

The voice assistants of the future could end up much akin to Radar O’Reilly. They will learn their users’ mannerisms, habits, and preferences. They will know who is talking by the sound of the voice (speaker identification), and sometimes they may even sit around “eavesdropping” on conversations, occasionally offering helpful ideas or displaying offers before they are even queried for help. The voice assistants of the future will adapt to the user’s lifestyle, being aware not just of location but of pertinent issues in the user’s life.

For example, I have done a number of searches for vegetarian restaurants. My assistant should be building a profile of me that includes the fact that I like to eat vegetarian dinners when I’m traveling, so if I haven’t eaten, it might suggest a good place to eat when I’m on the road. It would know when I’m on the road, and it could figure out from my location whether I had sat down to eat.
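
As a toy illustration of that kind of profile building (entirely a sketch of my own, not any shipping assistant’s logic):

```python
# Sketch of preference profiling: tally terms from past queries, then use
# simple context (traveling, hasn't eaten) to decide whether to suggest.
# All names and thresholds are illustrative assumptions.
from collections import Counter

search_history = [
    "vegetarian restaurants near me",
    "best vegetarian dinner portland",
    "vegetarian thai food",
]

profile = Counter()
for query in search_history:
    for term in query.split():
        profile[term] += 1

def suggest(profile, traveling, has_eaten):
    # Proactively suggest only when context and learned preference line up
    if traveling and not has_eaten and profile["vegetarian"] >= 2:
        return "You're on the road and it's dinner time: try a vegetarian place nearby?"
    return None

print(suggest(profile, traveling=True, has_eaten=False))
```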

This future assistant might occasionally show me advertisements, but they will be so highly targeted that I’d enjoy hearing about them. In a similar way, Radar sometimes made suggestions to Colonel Potter to help him with his daily life and challenges!

Todd
sensoryblog@sensoryinc.com