HEAR ME - Speech Blog

Archive for the ‘always listening’ Category

Revisiting Wake Word Accuracy and Privacy

June 11, 2019

I used to blog a lot about wake words and voice triggers. Sensory pioneered this technology for voice assistants, and we evangelized the importance of not hitting buttons to speak to a voice recognizer. Then everybody caught on, the technology went into mainstream use (think Alexa, OK Google, Hey Siri, etc.), and I stopped blogging about it. But I want to reopen the conversation…partly to talk about how important a GREAT wake word is to the consumer experience, and partly to congratulate my team on a recent comparison test that shows how Sensory continues to have the most accurate embedded wake word solutions.

Competitive Test Results. The comparison test was done by Vocalize.ai. Vocalize is an independent test house for voice enabled products. For a while, Sensory would contract out to them for independent testing of our latest technology updates. We have always tested in-house but found that our in-house simulations didn’t always sync up with our customers’ experience. Working with Vocalize allowed us to move from our in-house simulations to more real-world product testing. We liked Vocalize so much that we acquired them. So, now we “contract in” to them but keep their data and testing methodology and reporting uninfluenced by Sensory.

Vocalize compared two Sensory TrulyHandsfree wake word models (1MB size and 250KB size) with two external wake words (Amazon’s and Kitt.ai’s Snowboy), all using “Alexa” as the trigger. The results are replicable and show that Sensory’s TrulyHandsfree remains the superior solution on the market. TrulyHandsfree was better (lower) on BOTH false accepts AND false rejects. And in many cases our technology was better by a long shot! If you would like to see the full report and more details on the evaluation methods, please send an email request to either Vocalize (dev@vocalize.ai) or Sensory (sales@sensory.com).

It’s Not Easy. There are over 20 companies today that offer on-device wake words. Probably half of these have no experience in a commercially shipping product and never will; a lot of companies just won’t be taken seriously. The other half can talk a good talk, and in the right environment they can even give a working demo. But this technology is complex: really easy to do badly and really hard to do great. Some demos are carefully planned with the right noise in the right environment with the right person talking. Sensory has been focused on low-power embedded speech for 25 years; we have 65 of the brightest minds working on the toughest challenges in embedded AI. There’s a reason that companies like Amazon, Google, Microsoft and Samsung have turned to Sensory for our TrulyHandsfree technology. Our stuff works, and they understand how difficult it is to make this kind of technology work on-device! We are happy to provide APKs so you can do your own testing and judge for yourself! OK, enough of the sales pitch…some interesting stuff lies ahead…

It’s Really Important. Getting a wake word to work well is more important than most people realize. It’s like the front door to your house: it might be a small part of the house, but if it isn’t letting the homeowners in, that’s horrible, and if it’s letting strangers in by accident, that’s even worse. The name a company gives its wake word is usually the company brand name; imagine the sentiment created when I say a brand name and it doesn’t work. Recently I was at a tradeshow that had a Mercedes booth. There were big signs that said “Hey Mercedes”…I walked up to the demo area and said “Hey Mercedes,” but nothing happened…the woman working there informed me that they couldn’t demo it on the show floor because it was really too noisy. I quickly pulled out my mobile phone and showed her that I could use dozens of wake words and command sets without an error in that same environment. Mercedes has spent over 100 years building one of the best quality brand reputations in the car industry. I wonder what will happen to that reputation if their wake word doesn’t respond in noise? Even worse is when devices accidentally go off. If you have family members that listen to music above volume 7, then you already know the shock that a false alarm causes!

It’s about Privacy. Amazon, like Google and a few others, seems to have a pretty good wake word, but if you go into your Alexa settings you can see all of the voice data that’s been collected, and a lot of it was collected when you weren’t intentionally talking to Alexa! You can see this performance issue in the Vocalize test report. Sensory substantially outperformed Amazon on false rejects, which happen when a person tries to speak to Alexa and she doesn’t respond. The difference is most apparent in babble noise, where Sensory falsely rejected 3% and Amazon falsely rejected 10% on comparably sized models (250KB). However, the false-accept difference is nothing short of AMAZING. Amazon false accepted 13 times in 24 hours of random noise. In this same time period, Sensory false accepted ZERO times (on comparably sized 250KB models). How is this possible, you may be wondering? Amazon “fixes” its mistakes in the cloud. Even though the device falsely accepts quite frequently, their (larger and more sophisticated) models in the cloud collect the error. Was that a Freudian slip? They correct the error…AND they COLLECT the error. In effect, they are disregarding privacy to save device cost and collect more data.
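
For readers who want to sanity-check these kinds of numbers themselves, here is a minimal sketch (in Python, and emphatically not Vocalize’s actual test harness) of how the two headline metrics are typically computed; the counts are illustrative, taken from the figures quoted above.

```python
# Minimal sketch of wake word benchmark scoring (illustrative numbers only).

def false_reject_rate(missed: int, attempts: int) -> float:
    """Fraction of genuine 'Alexa' utterances the engine failed to wake on."""
    return missed / attempts

def false_accepts_per_24h(accepts: int, hours_of_noise: float) -> float:
    """False wakes normalized to a 24-hour listening day."""
    return accepts * 24.0 / hours_of_noise

# Babble-noise false rejects (250KB-class models, per the figures above,
# assuming 100 spoken attempts for illustration)
print(f"Sensory FR: {false_reject_rate(3, 100):.0%}")    # 3%
print(f"Amazon  FR: {false_reject_rate(10, 100):.0%}")   # 10%

# False accepts over 24 hours of random noise
print(f"Sensory FA/24h: {false_accepts_per_24h(0, 24):.0f}")    # 0
print(f"Amazon  FA/24h: {false_accepts_per_24h(13, 24):.0f}")   # 13
```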

As the voice revolution continues to grow, you can bet that privacy will continue to be a hot topic. What you now understand is that wake word quality has a direct impact on both the user experience and PRIVACY! While most developers and product engineers in the CE industry are aware of wake words and the difficulty of making them work well on-device, they don’t often consider that competing wake word technologies aren’t created equal; the test results from Vocalize prove it! Sensory is more accurate AND allows more privacy!

Voice assistant battles, part three: The challenges

August 13, 2018

It’s not easy to be a retailer today when more and more people are turning to Amazon for shopping. And why not shop online? Ordering is convenient with features such as ratings, delivery is fast and cheap, and returns are easy and free, if you are a Prime member! In April 2018 Bezos reported that there were more than 100 million Prime members in the world, and that the majority of US households are Prime members. Walmart and Google have partnered in an ecommerce play to compete with Amazon, but Walmart is just dancing with the devil: Google will use the partnership to gather data and invest more in its internal ecommerce and shopping experiences. Walmart isn’t relaxing, though, and is aggressively pursuing ecommerce and AI initiatives through acquisitions and its Store #8, which acts as an incubator for AI companies and internal initiatives. Question: why does Facebook have a Building 8 and Walmart a Store 8 for skunkworks projects?

It’s not just the retailers that are under pressure, though. If you make consumer electronics, it’s getting more challenging too. Google controls the Android eco-system and is pumping a lot of money into centralizing and hiring around its hardware development efforts. Google is competing against the mobile phones of Samsung, Huawei, LG, Oppo, Vivo, and other users of its Android OS. And Amazon is happy to sell other people’s hardware online (OK, not Google’s, but others’), but they take a nice commission on those sales, and if it’s a hit product they find ways to make more money through Amazon’s in-house brands and warehousing, and potentially even by making the product themselves. The Alexa fund has financed companies that created Alexa-based hardware products that Amazon ended up competing against with in-house developments, and when Amazon sells Alexa products it doesn’t need to make a big profit (as described in part one). And Apple… well, they have a history of extracting money from anyone that wants to play in their eco-system too. This is business, and there’s a very good reason that Google, Amazon, Apple, and other giants are giants. They know how to make money on everything they do. They are tough to compete with. The “free” stuff consumers get (and we do get a lot!) isn’t really free. We are trading our data and personal information for it.

So retailers have it tough (and assistants will make it even tougher), service providers have it tough (and assistants with service offerings make it even tougher), and consumer electronics companies have it tough. But the toughest situation is for the speaker companies. The market for speakers is exploding, driven by the demand for “smart” speakers. Markets and Markets research puts the current smart speaker market at over $2.6B, growing at over 34% a year. Seems like that would be a sweet market to be in, but a lot of that growth is eating away at the traditional speaker market. So a speaker company is faced with a few alternatives:

  1. Partner with voice assistants within the eco-system of their biggest competitors (Google, Apple, Amazon, etc.). This hands all the data collected to their competitors and puts them at the mercy of their competitors’ systems.
  2. Develop and support an in-house solution, which could cost WAY too much to maintain, or
  3. Use a 3rd-party solution, which is likely to cost a lot more and underperform compared to the big guys that are pumping billions of dollars each year into enhancing their AI offerings.

Many are choosing option 1, only to find that their sales are poor because of better-quality, lower-priced offerings from Google and Amazon. Sonos, a leader in high-quality Wi-Fi speakers, has chosen option 1 with a twist: it is trying to support Google, Amazon, and Apple. Its recent IPO filing highlights the challenges well:

“Our current agreement with Amazon allows Amazon to disable the Alexa integration in our Sonos One and Sonos Beam products with limited notice. As such, it is possible that Amazon, which sells products that compete with ours, may on limited notice disable the integration, which would cause our Sonos One or Sonos Beam products to lose their voice-enabled functionality. Amazon could also begin charging us for this integration which would harm our operating results.”

They further highlighted that their lack of service integrations could be a challenge should Google, Amazon, or others offer discounting (which is already happening): “Many of these partners may subsidize these prices and seek to monetize their customers through the sale of additional services rather than the speakers themselves,” the company said. “Our business model, by contrast, is dependent on the sale of our speakers. Should we be forced to lower the price of our products in order to compete on a price basis, our operating results could be harmed.” Looking at Sonos’s financials, you can see their margins already starting to erode.

Some companies have attempted option 2 by building in-house assistants on open-source speech recognizers like Kaldi. This might save the cost of deploying third-party solutions, but it requires substantial in-house effort and is ultimately fraught with the same challenge as option 3: it’s really hard to compete against companies approaching a trillion-dollar market capitalization when those companies see AI and voice assistants as strategically important and invest accordingly.

Retailers, consumer OEMs, and service providers all have a big challenge. I run a small company called Sensory. We develop AI technologies, and companies like Google, Amazon, Samsung, Microsoft, Apple, Alibaba, Tencent, Baidu, etc. are our customers AND our biggest competitors. My strategy? Move fast, innovate, and move on. I can’t compete head to head with these companies, but when I come out with solutions that they need BEFORE they have them in-house, I get a 1-3 year window to sell to them before they switch to an in-house replacement. That’s not bad for a small company like Sensory. A bigger company like Sonos or Comcast could deploy the same general strategy, setting up fast-moving innovation pieces that allow it to stay ahead of the game. This appears to be exactly the strategy Walmart is taking with Store 8 so as not to be left behind! Without doubt, it’s very tough competing in a world of giants that have no boundaries in their pursuits and ambitions!

Apple is Getting Sirious – $1 Trillion is Not the Endgame

August 6, 2018

Apple introduced Siri in 2011 and my world changed. I was running Sensory back then as I am today and suddenly every company wanted speech recognition. Sensory was there to sell it! Steve Jobs, a notorious nay-sayer on speech recognition, had finally given speech recognition the thumbs up. Every consumer electronics company noticed and decided the time had come. Sensory’s sales shot up for a few years driven by this sudden confidence in speech recognition as a user interface for consumer electronics.

Fast forward to today and Apple has just become the first and only trillion-dollar US company in terms of market capitalization. One trillion dollars is an arbitrary round number with a lot of zeroes, but it is psychologically very important: reaching it first meant winning a race. It was a race between Cook, Bezos, the Google/Alphabet crew, and others, one that most of the contestants would say doesn’t really matter and that they weren’t even running. But they were, and they all wanted to win. Without question it was quarterly financial results that caused Apple to reach the magic number and beat Amazon, Google, and Microsoft to the trillion-dollar value spot. I wouldn’t argue that Siri got them there, but I would argue that Siri didn’t stop them, and this is important.

SIRI WAS FIRST, BUT QUICKLY LOST THE VOICE LEAD TO RIVALS
Siri has had a bit of a mixed history. It was the first voice assistant to come out in mobile phones, but in spite of Apple’s superior marketing abilities, the Google Assistant (or whatever naming convention was being used, as it never seemed totally clear) quickly surpassed Siri on most key metrics of quality and performance. The Siri team went through turnover and got stuck in a world of rule-based natural language understanding when the state of the art turned to deep learning and data-driven approaches.

Then in 2014 Amazon introduced the Echo smart speaker with Alexa and beat Apple and others into the home with a usable voice assistant. Alexa came out strong and got stronger quickly. Amazon amassed over 5,000 people into what is likely the largest speech recognition team in the world. Google got punched but wasn’t knocked out. Its AI team kept growing, and Google had a very strong reputation in academia for hiring the best and brightest machine learning and AI folks out of PhD programs. By 2016, Google had introduced its own smart speaker, and by CES 2018, Google made a VERY strong marketing statement that it was still in the game.

APPLE FOCUSED ELSEWHERE
All the while Apple stayed relatively quiet. Drifting further behind in accuracy, utility, usability, integration, and now smart speakers, Siri took its time. The HomePod speaker had a series of delays, and when introduced in Q1 2018 it was largely criticized because of the relatively poor performance of Siri and lack of compatibility. The huge investment Bezos made in Alexa might have been hard for Apple to rationalize in a post-Jobs era run by a smart operating guy driven by the numbers more than by a passion or vision. Or perhaps Tim Cook knew that he had time to get it right, as the Apple eco-system was captive and not running away because of poor Siri performance. Maybe they were waiting for their services ecosystem to really kick in before cranking up the power of Siri. For whatever reason, Siri was largely viewed as first out of the gates but well behind the pack in Q2 2018.

AI ASSISTANTS DRIVE CONSUMER LOCK-IN
Fast forward to now and I’ll say why I think things are changing and why I said that Siri didn’t stop Apple from being first to $1T. But first, let me digress to dwell on the importance of an AI assistant to Apple and others. It’s pretty easy to see the importance the industry puts on AI assistants: any time I watch advertising spots, I see some of the most expensive commercials ever produced, with the biggest-named stars, promoting “Hey Google,” “Hey Siri,” and “Alexa” (and occasionally Bixby or Cortana too!).

The assistants themselves aren’t sold, so they don’t directly make money, but they can be used as purchasing agents (where Amazon makes a lot of money), advertising agents (where Google makes its money), gateways to entertainment services (where all the big guys make money), and as a user experience for consumer electronics (where Apple makes a lot of money). The general thinking is that the more an assistant is used, the more it learns about the user, the better it serves the user, and the more the user is locked in! So winning the AI assistant game is HUGELY important, and recent changes at Apple show that Siri is quickly coming up in the rankings and could have more momentum right now than at any point in its history. That’s why Siri didn’t stop Apple from reaching $1T.

SIRI ON THE RISE
Let me highlight three recent pieces of news that suggest Siri is now headed in the right direction.

  • HomePod Sales: Apple HomePod sales just reached $1B. Not a shabby business given the high margins Apple typically gets. According to Consumer Intelligence Research Partners (CIRP), HomePod market share doubled over the past quarter. What’s interesting is that early reviews stated that Siri’s poor performance and lack of compatibility were dragging down HomePod sales. However, CIRP reported the biggest problem today is price: at $349, the HomePod is hundreds of dollars more than competitors.
  • Loup Ventures analysis: Loup Ventures does an annual assistant assessment. Several companies do this sort of thing, and the general rankings have previously shown Google as best, Cortana and Alexa not far behind, and Siri somewhat behind the pack. Loup’s most recent analysis showed something different: Siri had the most improvement (from April 2017 to July 2018) in both “answered correctly” and “understood query,” and has surpassed Cortana and Alexa in both categories.


Of particular note are the category-level results: Siri substantially outperformed Google Assistant in the “command” category, which is arguably the most important category for a consumer electronics manufacturer that wants to improve the user experience.

  • Apple Reorganization: In April 2018 Apple hired John Giannandrea. JG is a Silicon Valley luminary who not only played roles at early pioneers like General Magic and Netscape, but was a founder of TellMe Networks, which still holds the record for the highest-valued acquisition in the speech recognition space: Microsoft paid $800 million for it in 2007. JG didn’t retire and rest on his laurels. He joined Google as an Engineering VP and in 2016 was promoted to SVP Search (yeah, I mean all of search, as in “Google that”), including heading up all artificial intelligence and machine learning within Google. Business Insider called him “the most sought after free agent in Silicon Valley.” He reports directly to Tim Cook. In July 2018, a reorg was announced that brings Siri and all machine learning under one roof…under JG. Siri has bounced around under a few top executives. With JG on board and Bill Stasior (VP Siri) staying on and now reporting to JG, Siri has a bright future.

It may have taken a while but Apple seems serious. It’s nice to have a pioneer in the space not stay down for the count!

Voice assistant battles, part two: The strategic importance

August 6, 2018

Here’s the basic motivation that I see in creating voice assistants: build a cross-platform user experience that makes it easy for consumers to interact, control, and request things through their assistant. This eases adoption and brings more power to consumers, who will use the products more and in doing so create more data for the cloud providers. This “data” will include all sorts of preferences, requests, searches, and purchases, and will allow the assistants to learn more and more about their users. The more the assistant knows about any given user, the BETTER it can help that user by providing services such as entertainment and assisting with purchases (e.g., offering special deals on things the consumer might want). Let’s look at each of these in a little more detail:

1. Owning the cross-platform user experience and collecting user data to make a better voice assistant.
For thousands of years, consumers interacted with products by touch. Squeezing, pressing, turning, and switching were the standard means of control. The dawn of electronics didn’t really change this; mechanical touch systems were simply augmented with electrical touch mechanisms. Devices got smarter and gained more capabilities, but the means to access those capabilities grew more confusing, with more complicated interfaces and a more difficult user experience. As new sensory technologies began to be deployed (such as gesture, voice, and pressure sensors), companies like Apple emerged as consumer electronics leaders because of their ability to package consumer electronics in a more user-friendly manner. With the arrival of Siri on the iPhone and Alexa in the home, voice-first user experiences are driving the ease of use and naturalness of interacting with consumer products. Today we find companies like Google and Amazon investing heavily in their hardware businesses and using their assistants as a means to improve and control the user experience.

Owning the user experience on a single device is not good enough. The goal of each of these voice assistants is to be your personal assistant across devices. On your phone, in your home, in your car, wherever you may go. This is why we see Alexa and Google and Siri all battling for, as an example, a position in automotive. Your assistant wants to be the place you turn for consistent help. In doing so it can learn more about your behaviors…where you go, what you buy, what you are interested in, who you talk to, and what your history is. This isn’t just scary big brother stuff. It’s quite practical. If you have multiple assistants for different things, they may each think of you and know you differently, thereby having a less complete picture. It’s really best for the consumer to have one assistant that knows you best.

For example, let’s take the simple case of finding food when I’m hungry. I might say “I’m hungry.” The assistant’s response would be much more helpful the more it knows about me. Does it know I’m a vegetarian? Does it know where I’m located, or whether I am walking or driving? Maybe it knows I’m home and what’s in my refrigerator, and can suggest a recipe…does it know my food and taste preferences? How about cost preferences? Does it have a history of what I have eaten recently, and know how much variety I’d like? Maybe it should tell me something like “Your wife is at Whole Foods, would you like me to text her a request or call her for you?” It’s easy to see how these voice assistants could be quite helpful the more they know about you. But with multiple assistants in different products and locations, the picture wouldn’t be as complete. In this example, one assistant might know I’m home but NOT know what’s in my fridge. Or it might know what’s in the fridge and know I’m home but NOT know my wife is currently shopping at Whole Foods, etc.

The more I use my assistant across more devices in more situations and over more time, the more data it could gather and the better it should get at servicing my needs and assisting me! It’s easy to see that once it knows me well and is helping me with this knowledge it will get VERY sticky and become difficult to get me to switch to a new assistant that doesn’t know me as well.

2. Entertainment and other service package sales.
Alexa came onto the scene in 2014 with one very special domain: music. Amazon chose to do one thing really well, and that was to make a speaker that could accept voice commands for playing songs, albums, bands, and radio. Not long after that, Alexa added new domains and moved into new platforms like Fire TV and the Fire stick controller. It’s no coincidence that an Amazon Music service and Amazon TV services both exist, and you can wrap even more services into an Amazon Prime membership. When assistants don’t support Spotify well, there are a lot of complaints, and it’s no surprise that Spotify has been reported to be developing its own assistant and speaker. In fact, Comcast has its own voice-control remotes. There’s a very close tie between voice assistants and the services they bring. Apple is restrictive in what Siri will allow you to listen to; they want to keep you within their eco-system, where they make more money. (Maybe it’s this locked-in eco-system that has given Apple a more relaxed schedule for improving Siri?) Amazon and Google are really not that different; although they may have different means of leading us to the services they want us to use, they can still influence our choices for media. Spotify has over 70M subscribers (20M paying), over $5 billion in revenues, and recently went public at about a $30B market cap…and Apple Music just overtook Spotify in terms of paying subscribers. Music streaming has turned the music industry into a growth business again. The market for video services is even bigger, and Amazon is one of the top content producers of video! Your assistant will have a lot of influence on the services you choose and how accessible they are. This is one reason why voice assistant providers might be willing to lose money getting assistants out to the market: they can make more money on services. The battle of voice assistants is really a battle over who controls your media and your purchases!

3. Selling and recommending products to consumers
The biggest business in the world is selling products. It’s helped make Amazon, Google, and Apple the giants they are today. Google makes its money on advertising, which is an indirect form of selling products. What if your assistant knew what you needed whenever you needed it? It would uproot the entire advertising industry. Amazon has the ability to pull this off: they have the world’s largest online store, they know our purchase histories, they have an awesome rating system that really works, and they have Alexa listening everywhere, willing to take our orders. Because assistants use a voice interface, there will be a much more serial approach to making recommendations and selling things. For example, if I do a text search on a device for nearby vegan restaurants, I see a map with a whole lot of choices and a long list of options. Typically these options could include sidebars of advertising or “sponsored” restaurants first in the listing, but I’m still supplied a long list. If I do a voice search on a smart speaker with no display, it will be awkward to give me more than a few results…and I’ll bet the results we hear will become the “sponsored” restaurants and products.

It would be really obnoxious if Alexa or Siri or Cortana or Google Assistant suddenly suggested I buy something I wasn’t interested in, but what if it knew what I needed? For example, it could track vitamin usage and ask if I want more before they run out, or it could know how frequently I wear out my shoes and recommend a sale on my brand in my size when I really need them. The more my assistant knows me, the better it can “advertise” and sell to me in a way that’s NOT obnoxious but really helpful. And of course it makes extra money in the process!

Voice Assistant Battles, part one

July 25, 2018

I have spoken at a lot of “voice” oriented shows over the years, and it has been disappointing that there hasn’t been more discussion about the competition in the industry and what is driving the huge investments we see today. Because companies like Amazon and Google participate in and sponsor these shows, there is a tendency to avoid the more controversial aspects of the industry. I wrote this blog to share some of my thoughts on what is driving the competition, why the voice assistant space is so strategically important to companies, and some of the challenges resulting from the voice assistant battles.

In September of 2017 it was widely reported that Amazon had over 5,000 employees working on Alexa, with more than 1,000 more to be hired. To use a nice round and conservative number, let’s assume an average Alexa employee’s fully weighted cost to Amazon is $200K. With about 6,000 employees on the Alexa team today, that would mean a $1.2 billion annual investment. Of course, some of this is recouped by the Echos and Dots bringing in profits, but when you consider that Dots sell for $30-$50 and Echos at $80-$100, it’s hard to imagine a high enough profit to justify the investment through hardware sales. For example, if Amazon can sell 30 million Alexa devices and make an average of $30 per unit profit, that only covers 75% of the conservative $1.2 billion investment.
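
Here’s that back-of-envelope arithmetic spelled out as a quick sketch (the figures are the same assumptions used above, not reported financials):

```python
# Back-of-envelope estimate of Alexa investment vs. hardware profit
# (all numbers are the illustrative assumptions from the paragraph above).

employees = 6_000             # reported Alexa team size, rounded
cost_per_employee = 200_000   # assumed fully weighted annual cost, USD
annual_investment = employees * cost_per_employee
print(f"Estimated annual investment: ${annual_investment / 1e9:.1f}B")  # $1.2B

units_sold = 30_000_000       # hypothetical annual device sales
profit_per_unit = 30          # assumed average profit per device, USD
hardware_profit = units_sold * profit_per_unit
print(f"Hardware profit: ${hardware_profit / 1e9:.1f}B")                # $0.9B

coverage = hardware_profit / annual_investment
print(f"Hardware profit covers {coverage:.0%} of the investment")       # 75%
```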

Other evidence supporting the huge investments being made in voice assistants is the battle in advertising. Probably the most talked about thing at 2018’s CES show was the enormous position Google took in advertising the Google Assistant. In fact, if you watch any of the most expensive advertising slots on TV (SuperBowl, NBA finals, World Cup, etc.) you will see a preponderance of advertisements with known actors and athletes saying “Hey Google,” “Alexa,” or, “Hey Siri.” (Being in the wakeword business, I particularly like the Kevin Durant “Yo Google” ad!)

And it’s not just the US giants that are investing big into assistants: Docomo, Baidu, Tencent, Alibaba, Naver, and other large international players are developing their own or working with 3rd party assistants.

So what is driving this huge investment companies are making? It’s a multitude of factors including:

  1. Owning the cross platform user experience and collecting user data
  2. Entertainment and other service package sales
  3. Selling and recommending products to consumers

In my next blog, I’ll discuss these three factors in more detail, and in a final blog on this topic I will discuss the challenges being faced by consumer OEMs and service providers that must play in the voice assistant game to not lose out to service and hardware competition from Apple, Amazon, Google, and others.

I Nailed It!

August 30, 2017

A few days ago I wrote a blog that talked about assistants and wake words and I said:

“We’ll start seeing products that combine multiple assistants into one product. This could create some strange and interesting bedfellows.”

Interesting that this was just announced:

http://fortune.com/2017/08/30/amazon-alexa-microsoft-cortana-siri/

Here’s another prediction for you…

All assistants will start knowing who is talking to them. They will hear your voice, look at your face, and know who you are. They will bring you the things you want (e.g., play my favorite songs), and only allow you to conduct the transactions you are qualified for (e.g., order more black licorice). Today some training is required, but in the near future they will just learn who is who, much like a newborn quickly learns the family members without any formal training.
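
To make the “no formal training” idea concrete, here is a minimal sketch of passive speaker enrollment (entirely hypothetical, not any shipping assistant’s method): each utterance is mapped to a voice embedding, matched against known profiles, and an unmatched voice simply becomes a new profile that improves with every match.

```python
# Hypothetical passive speaker identification. A real system would compute
# embeddings with a trained speaker model; here they are just numpy vectors.
import numpy as np

profiles: dict[str, np.ndarray] = {}  # label -> running average embedding
MATCH_THRESHOLD = 0.8

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(embedding: np.ndarray) -> str:
    """Match an utterance to a known voice, or enroll it as a new speaker."""
    best, score = None, -1.0
    for name, profile in profiles.items():
        s = cosine(embedding, profile)
        if s > score:
            best, score = name, s
    if best is not None and score >= MATCH_THRESHOLD:
        # Passive adaptation: fold the new sample into the stored profile.
        profiles[best] = 0.9 * profiles[best] + 0.1 * embedding
        return best
    label = f"speaker_{len(profiles) + 1}"
    profiles[label] = embedding
    return label
```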

Here’s what’s next for always listening devices

August 28, 2017

Ten years ago, I tried to explain to friends and family that my company Sensory was working on a solution that would allow IoT devices to always be “on” and listening for a key wake up word without “false firing” and doing it at ultra-low power and with very little processing power. Generally, the response was “Huh?”

Today, I say, “Just like Hey Siri, OK Google, Alexa, Hey Cortana, and so on.” Now everybody gets it, and the technology is mainstream. In fact, next year Sensory will have technology embedded in IoT devices that listens for all those things (and more). But that’s not good enough.

Here are some of the things that will be appearing over the next 10 (or more) years to make always listening better and different:

  1. Assistants that see. I hate it when I say OK Google to my Home and my phone responds. Or worse, when a device false fires and I’ve left the volume up really loud. Many of these devices will be getting vision in the coming years (Amazon’s Echo Look already has it), and their ability to see which device I’m talking to will make it easier for the correct device to respond.
  2. No wake words. In a room with multiple people, we sometimes direct questions by saying the name of the person we want to talk to first. But we don’t do this when we are having a dialog back and forth, and we certainly don’t do it if there’s just one person in the room. Our Assistants should respond to questions without having their names said.
  3. Multiple assistants on single devices. Why can’t I have a device that I can shop on with Alexa, search with Google, or control my appliances with Bixby? Amazon should be fine with that, but Google wouldn’t be. Certain cloud assistants will allow it and others won’t, and we’ll start seeing products that combine multiple assistants into one product. This could create some strange and interesting bedfellows.
  4. Portable assistants. I unplug my Echo and move it from room to room when I’m listening to music. I already have two Echos and one Home (and a few other Alexa devices), and I don’t want to buy one for every room. Why can’t I throw a Google Home in my backpack for music while biking? What about an always-on wearable assistant? This will require ultra-low-power wake words that perform great.
  5. Privacy controls. The intelligent assistants’ capabilities are directly proportional to the privacy we’re willing to give up. The better they know us, the better they can get us what we want. Today, we just sign our privacy away. In the future, there will likely be settings that we can control.
  6. Embedded always on assistants. Power consumption should be low enough that assistants can be embedded into our bodies for augmented intelligence, memory, and of course medical checkups. Within 20 years, our bodies will become enhanced with sensors (microphones, cameras, etc.), memory, and processors that augment our personal capabilities and are directly wired to our brains.

Staying Ahead with Advanced AI on Devices

June 8, 2017

Since the beginning, Sensory has been a pioneer in advancing AI technologies for consumer electronics. Not only did Sensory implement the first commercially successful speech recognition chip, but we were also the first to bring biometrics to low-cost chips and speech recognition to Bluetooth devices. Perhaps what I am most proud of, though, is that more than a decade ago Sensory introduced its TrulyHandsfree technology and showed the world that wakeup words could really work in real devices, getting around the false-accept, false-reject, and power-consumption issues that had plagued the industry. No longer did speech recognition devices require button presses…and it caught on quickly!

Let me go on boasting because I think Sensory has a few more claims to fame… Do you think Apple developed the first “Hey Siri” wake word? Did Google develop the first “OK Google” wake word? What about “Hey Cortana”? I believe Sensory developed these initial wake words, some as demos and some shipped in real products (like the Motorola MotoX smartphone and certain glasses). Even third-party Alexa and Cortana products today are running Sensory technology to wake up the Alexa cloud service.

Sensory’s roots are in neural nets and machine learning. I know everyone does that today, but it was quite out of favor when Sensory used machine learning to create a neural net speech recognition system in the 1990s and 2000s. Today everyone and their brother is doing deep learning (yeah, that’s tongue in cheek, because my brother is doing it too: http://www.cs.colorado.edu/~mozer/index.php). And a lot of these deep learning companies are huge multi-billion-dollar businesses or extremely well-funded startups.

So, can Sensory stay ahead and continue pioneering innovation in AI now that everyone is using machine learning and doing AI? Of course, the answer is yes!

Sensory is now doing computer vision with convolutional neural nets. We are coming out with deep learning noise models to improve speech recognition performance and accuracy, and are working on small TTS systems using deep learning approaches that help them sound lifelike. And of course, we have efforts in biometrics and natural language that also use deep learning.

We are starting to combine a lot of technologies together to show that embedded systems can be quite powerful. And because we have been around longer and thought through most of these implementations years before others, we have a nice portfolio of over 3 dozen patents covering these embedded AI implementations. Hand in hand with Sensory’s improvements in AI software, companies like ARM, NVidia, Intel, Qualcomm and others are investing and improving upon neural net chips that can perform parallel processing for specialized AI functions, so the world will continue seeing better and better AI offerings on “the edge”.

Curious about the kind of on-device AI we can create by combining a bunch of our technologies together? So were we! That’s why we created a demo that showcases Sensory’s natural language speech recognition, chatbot, text-to-speech, avatar lip-sync, and animation technologies. It’s our goal to integrate biometrics and computer vision into this demo in the months ahead.

Let me know what you think of that! If you are a potential customer and we sign an NDA, we would be happy to send you an APK of this demo so you can try it yourself! For more information about this exciting demo, please check out the formal announcement we made: http://www.prnewswire.com/news-releases/sensory-brings-chatbot-and-avatar-technology-to-consumer-devices-and-apps-300470592.html

Untethering virtual assistants from Wi-Fi

February 1, 2017

The hands-free personal assistant that you can wake on voice and talk to naturally has significantly gained popularity over the last couple of years. This kind of technology made its debut not all that long ago as a feature of Motorola’s MotoX, a smartphone that had always-listening Moto Voice technology powered by Sensory’s TrulyHandsfree technology. Since then, the always-listening digital assistant has quickly spread across mobile phones and PCs from several different brands, making phrases like “Hey Siri,” “Okay Google,” and “Hey Cortana” commonplace.

Then, out of nowhere, Amazon successfully tried its hand at the personal assistant with the Echo, sporting a true natural language voice interface and Alexa cloud-based AI. It was initially marketed for music but quickly expanded domain coverage to include weather, Q&A, recipes, and other common queries. On top of that, Amazon also opened its platform up to third-party developers, allowing them to proliferate the skill sets available on the Alexa platform, with now more than 10,000 skills accessible to users. These skills allow Amazon’s Echo, Tap, and Dot, as well as several new third-party Alexa-equipped products like Nucleus and Triby, to access and control various IoT functions, from reading heart rates on Fitbits to ordering pizzas and controlling lights within the home.

Until recently, always-listening, hands-free assistants required a certain minimum power capability, restricting form factors to tabletop speakers or appliance devices that had to either be plugged into an outlet or have a large battery. Also, Amazon’s Echo, Tap, and Dot all required a Wi-Fi connection for communicating with the Alexa AI engine to make use of its available skills. Unfortunately, this meant you were restricted to using Alexa within your home or Wi-Fi network. If you wanted to go on a run, the only way to ask Alexa for your step count or heart rate was to wait until you got back home.

This is changing now with technology like Sensory’s VoiceGenie, an always-listening embedded speech recognizer for wearables and hearables that runs in a low-power mode on a Qualcomm/CSR Bluetooth chip. The solution takes a subband codec (SBC) music decoder and intertwines it with a speech recognition system, so that while music is playing and the decoder is in use, VoiceGenie is on and actively listening, allowing the Bluetooth device to listen for two keywords:

  • “VoiceGenie,” which provides access to all the Bluetooth device’s and connected handset’s features.
  • “Alexa,” which enables Alexa through a smartphone, and doesn’t require Wi-Fi.

To give an example of how this works, a Bluetooth headset’s volume, pairing process, battery strength, or connection status can only be controlled or monitored through the device itself, so VoiceGenie handles those controls with no touching required. VoiceGenie can also read the incoming caller’s name and ask the user if they want to answer or ignore. Additionally, VoiceGenie can call up the phone’s assistant, like Google Assistant, Siri, or Cortana, to ask by voice for a call to be made or a song to be played. By saying, “Alexa,” the user can access the Alexa service directly from their Bluetooth headset while out and about, using their smartphone as the connection to the Alexa cloud.
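
The control flow described above can be summarized in a short conceptual sketch (hypothetical names and objects, not Sensory’s actual SDK): one audio loop feeds every frame to both the SBC music decoder and the wake word detector, and the two keywords route to different handlers.

```python
# Conceptual sketch only: stub objects stand in for the embedded SBC decoder
# and wake word recognizer that would run on the Bluetooth chip.

class StubDecoder:
    def decode(self, frame: bytes) -> None:
        pass  # music playback would happen here

class StubDetector:
    def process(self, frame: bytes):
        return None  # a real recognizer would return "voicegenie" or "alexa"

def audio_loop(frames, decoder, detector, on_local_command, on_alexa):
    for frame in frames:
        decoder.decode(frame)              # music keeps playing...
        keyword = detector.process(frame)  # ...while the recognizer listens
        if keyword == "voicegenie":
            on_local_command()  # volume, pairing, battery, connection status
        elif keyword == "alexa":
            on_alexa()  # stream follow-up audio to the phone's Alexa app

audio_loop(
    frames=[b"\x00" * 128],
    decoder=StubDecoder(),
    detector=StubDetector(),
    on_local_command=lambda: print("local control session"),
    on_alexa=lambda: print("open Alexa channel via paired phone"),
)
```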

Today’s consumer wants a personalized assistant that knows them, is convenient to use, keeps their secrets safe, and helps them in their daily lives. This help can be accessing information, getting answers to questions, or intelligently controlling their home environment. It’s very difficult to accomplish this solely with cloud-based AI technology, for privacy and power reasons. There needs to be embedded intelligence on devices, and it needs to run at low power. A low-power embedded voice assistant that adds an intelligent voice interface to portable and wearable devices, while also adding Alexa functionality to them, can address those needs.

Virtual Assistants coming to an Ear Near You!

January 5, 2017

Virtual handsfree assistants that you can talk to and that talk back have rapidly gained popularity. First, they arrived in mobile phones with Motorola’s MotoX, which had an ‘always listening’ Moto Voice powered by Sensory’s TrulyHandsfree technology. The approach quickly spread across mobile phones and PCs to include Hey Siri, OK Google, and Hey Cortana.

Then Amazon took things to a whole new level with the Echo using Alexa. A true voice interface emerged, initially for music but quickly expanding domain coverage to include weather, Q&A, recipes, and the most common queries. On top of that, Amazon took a unique approach by enabling 3rd parties to develop “skills” that now number over 6,000! These skills allow Amazon’s Echo line (with Tap and Dot) and 3rd-party Alexa-equipped products (like Nucleus and Triby) to be used to control various functions, from reading heart rates on Fitbits to ordering pizzas and controlling lights.

Until recently, handsfree assistants required a certain minimum power capability to really be always on and listening. Additionally, the hearable market segment, including fitness headsets, hearing aids, stereo headsets, and other Bluetooth devices, needed to use touch control because of its power limitations. Also, Amazon’s Alexa required Wi-Fi communications, so you could sit on your couch talking to your Echo and query Fitbit information, but you couldn’t go out on a run and ask Alexa what your heart rate was.

All this is changing now with Sensory’s VoiceGenie!

The VoiceGenie runs an embedded recognizer in a low power mode. Initially this is on a Qualcomm/CSR Bluetooth chip, but could be expanded to other platforms. Sensory has taken an SBC music decoder and intertwined a speech recognition system, so that the Bluetooth device can recognize speech while music is playing.

The VoiceGenie is on and listening for two keywords:
  • Alexa – this enables Alexa “On the Go” through a cellphone rather than requiring Wi-Fi
  • VoiceGenie – this provides access to all the Bluetooth device and handset device features

For example, a Bluetooth headset’s volume, pairing, battery strength, or connection status can only be controlled by the device itself, so VoiceGenie handles those controls without any touching required. VoiceGenie can also read incoming callers’ names and ask the user if they want to answer or ignore. VoiceGenie can call up the phone’s assistant, like Google Assistant, Siri, or Cortana, to ask by voice for a call to be made or a song to be played. By saying Alexa, the user gets access to a mobile Alexa “On the Go,” so any of the Alexa skills can be utilized while out and about, whether hiking or running!

Some of the important facts behind the new VoiceGenie include:

  • VoiceGenie is a platform for voice assistants to be used handsfree on tiny devices
  • VoiceGenie enables Alexa for a whole new range of portable products
  • VoiceGenie enables a movement toward invisible assistants that are with you all the time and help you in your daily life

This third point is perhaps the least understood, yet the most important. People want a personalized assistant that knows them, keeps their secrets safe, and helps them in their daily lives. This help can be accessing information or controlling your environment. It’s very difficult to accomplish this, for privacy and power reasons, in a cloud-powered environment. There needs to be embedded intelligence, and it needs to be low power. VoiceGenie is that low-powered voice assistant.
