Posts Tagged ‘google’
June 15, 2016
“Credit to the team at Amazon for creating a lot of excitement in this space,” said Google CEO Sundar Pichai during his Google I/O keynote last week, as he introduced Google Home, Google’s new voice-controlled home speaker that sounds an awful lot like Amazon’s Echo. Many interpreted this as a “thanks for getting it started, now we’ll take over” kind of comment.
Google has always been somewhat marketing-challenged in naming its voice assistant. Everyone knows Apple has Siri, Microsoft has Cortana, and Amazon has Alexa. But what is Google’s voice assistant called? Is it Google Voice, Google Now, OK Google, or Voice Actions? Even those of us in the speech industry have found Google’s branding confusing. Maybe they’re clearing that up now by calling their assistant “Google Assistant.” Maybe that’s the Google way of admitting it’s an assistant without admitting they were wrong not to give it a human-sounding name.
The combination of the early announcement of Google Home and Google Assistant has caused some to comment that Amazon has BIG competition at best, and at worst, Amazon’s Alexa is in BIG trouble.
I thought I’d point out a few good reasons why Amazon is in pretty good shape:
Of course, Amazon has its challenges as well, but I’ll leave that for another blog.
December 8, 2015
I saw an interesting press release titled “EyeVerify Gets Positive Feedback From Curious Users.” I know this company as a fellow biometrics vendor selling into some of the same markets as Sensory. I also knew that their Google Play Store rating hovered around 3/5 while our AppLock app sits around 4/5, so I was curious about what this announcement meant. It made me think of the power of all the data in the Google Play Store, and I decided to take a look at biometric ratings in general to see if there were any interesting conclusions.
Here’s my methodology: I searched Google Play for applications that use biometrics to lock applications or other things. I wanted the primary review to relate to the biometric itself, so I excluded “pranks” and other apps that provided something other than biometric security. I also rejected apps with fewer than 5,000 downloads to ensure that friends, employees, and families weren’t having a substantive effect on the ratings. I ran a variety of searches for four key biometrics: Eye, Face, Fingerprint, and Voice.
I did not attempt to exhaust the entire list of biometric apps; I searched under a variety of terms until I had millions of downloads for each category, with a minimum of 25,000 reviews per category. “Eye” was the only category that couldn’t meet this criterion; there I had to be satisfied with 6,884 reviews. Here’s a summary chart of my findings:
As you can see, the chart shows the total number of downloads, the total number of apps/companies, the number of reviews, and the average review rating per biometric category. So, for example, Face had 11 applications with 1.75 million total downloads and just over 25,000 reviews, with an average review rating of 3.89.
What’s most interesting about the findings is that they point to HIGHER RATINGS FOR EASIER-TO-USE BIOMETRICS. The correlation is direct: Face comes in first and is clearly the easiest biometric to use. Voice is somewhat more intrusive, since the user must speak, and the rating drops by 0.16 to 3.73, though this segment seems to draw the most consumer interest, with more than 5 million downloads. Fingerprint is today’s most common biometric, but it is often criticized for requiring two hands and for failing often enough that users have to re-swipe; consumer satisfaction with fingerprint is about 3.67. Eye came in last, albeit with the least data, but numbers don’t lie, and the average consumer rating for that biometric comes in at about 3.42. Considering the large number of reviews in this study and the narrow range of typical review scores (roughly 2.5 to 4.5), the differences look statistically significant.
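The roll-up described above is easy to reproduce. Here’s a minimal Python sketch of the same aggregation, using hypothetical per-app numbers (not the actual Play Store data) to show the download filter and the review-weighted average rating per category:

```python
# Sketch of the ratings roll-up described above, using hypothetical
# per-app data in the same shape as Play Store listings.
apps = [
    # (category, downloads, num_reviews, avg_rating)
    ("face", 500_000, 9_000, 3.9),
    ("face", 250_000, 4_000, 3.8),
    ("voice", 3_000_000, 20_000, 3.7),
    ("voice", 2_500, 40, 4.9),            # under 5,000 downloads -> excluded
    ("fingerprint", 1_000_000, 15_000, 3.7),
    ("eye", 100_000, 6_884, 3.4),
]

def category_summary(apps, min_downloads=5_000):
    """Aggregate per-category totals and a review-weighted average rating,
    excluding apps below the download threshold."""
    summary = {}
    for cat, downloads, reviews, rating in apps:
        if downloads < min_downloads:
            continue  # too few downloads; friends/family could skew ratings
        s = summary.setdefault(
            cat, {"downloads": 0, "reviews": 0, "weighted": 0.0, "apps": 0}
        )
        s["downloads"] += downloads
        s["reviews"] += reviews
        s["weighted"] += reviews * rating  # weight each app by review count
        s["apps"] += 1
    for s in summary.values():
        s["avg_rating"] = round(s["weighted"] / s["reviews"], 2)
    return summary
```

Weighting by review count (rather than averaging the per-app ratings directly) keeps a tiny app with a handful of glowing reviews from dominating its category.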
The results were not really a surprise to me. When we first developed TrulySecure, it was based on the premise that users wanted a more convenient biometric without sacrificing security, so we focused on COMBINING the two most convenient biometrics (face and voice) to produce a combined security that could match the most stringent of requirements.
November 12, 2015
A really smart guy told me years ago that neural networks would prove to be the second-best solution to many problems. While he was right about lots of stuff, he missed that one! Out of favor for years, neural networks have enjoyed a resurgence fueled by advances in deep machine learning techniques and the processing power to implement them. Neural networks are now seen as the leading solution to a host of challenges around mimicking how the brain recognizes patterns.
Google’s Monday announcement that it is releasing its TensorFlow machine learning system as open source underscores the significance of these advances, and further validates Sensory’s 22-year commitment to machine learning and neural networks. TensorFlow is intended to be used broadly by researchers and students “wherever researchers are trying to make sense of very complex data — everything from protein folding to crunching astronomy data.” The initial release of TensorFlow runs on a single machine, with a multi-machine version to follow in the months ahead, Google said.
Microsoft also had cloud-based machine learning news on Monday, announcing an upgrade to Project Oxford’s facial recognition API launched in May specifically for the Movember Foundation’s no-shave November fundraising effort: a facial hair recognition API that can recognize moustache and beard growth and assign it a rating (as well as adding a moustache “sticker” to the faces of facial hair posers).
Project Oxford’s cloud-based services are based on the same technology used in Microsoft’s Cortana personal assistant and the Skype Translator service, and also offer emotion recognition, spell check, video processing for facial and movement detection, speaker recognition and custom speech recognition services.
While Google and Microsoft have announced some impressive machine-learning capabilities in the cloud, Sensory uniquely combines voice and face for authentication and improved intent interpretation on device, complementing what the big boys are doing.
From small-footprint neural networks for noise-robust voice triggers and phrase-spotted commands, to large-vocabulary recognition built on a unique deep neural network that achieves acoustic models an order of magnitude smaller than the present state of the art, to convolutional neural networks deployed in the biometric fusion of face and voice for authentication (all on device, with no cloud component required), Sensory continues to lead in applying state-of-the-art machine learning to embedded solutions.
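For readers curious how face-and-voice fusion works at a high level, here is a toy Python sketch of score-level fusion. It illustrates the general technique only, not Sensory’s actual algorithm; the weights and threshold are made-up values:

```python
# Toy illustration of score-level biometric fusion (NOT Sensory's
# actual algorithm): each matcher produces a normalized similarity
# score in [0, 1], the scores are combined, and a single threshold
# gates acceptance.

def fuse_scores(face_score: float, voice_score: float,
                w_face: float = 0.5, w_voice: float = 0.5) -> float:
    """Weighted-sum fusion of two normalized match scores."""
    return w_face * face_score + w_voice * voice_score

def authenticate(face_score: float, voice_score: float,
                 threshold: float = 0.7) -> bool:
    """Accept only when the fused evidence clears the threshold, so a
    strong spoof of one modality is offset by a weak match on the other."""
    return fuse_scores(face_score, voice_score) >= threshold
```

The point of fusing at the score level is that an attacker who spoofs one modality well (say, a photo that scores 0.95 on face) still fails if the other modality scores poorly, while a genuine user who matches reasonably on both gets in.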
Not bad company to keep!
May 4, 2015
I was at the Mobile Voice Conference last week on a keynote panel with Adam Cheyer (Siri, Viv, etc.) and Phil Gray (Interactions), with Bill Meisel moderating. One of Bill’s questions was about the best speech products, and of course there was a lot of banter about Siri, Cortana, and Voice Actions (or Google Now, as it’s often called). When my turn came to chime in, I spoke about Amazon’s Echo and heaped lots of praise on it. I had done a bit of testing on it before the conference, but I didn’t own one, so I decided to buy one on eBay since Amazon never seemed to get around to selling me one. It arrived yesterday.
Here are some miscellaneous thoughts:
OK, Amazon… here’s my free advice (admittedly self-serving but nevertheless accurate):
May 1, 2015
Winning on Accuracy & Speed… How can a tiny player like Sensory compete in deep learning technology with giants like Microsoft, Google, Facebook, Baidu and others?
There’s a number of ways, and let me address them specifically:
These 3 items together have provided Sensory with the highest quality embedded speech engines in the world. It’s worth reiterating why embedded is needed, even if speech recognition can all be done in the cloud:
March 30, 2015
At Mobile World Congress, I participated in ZTE’s Mobile Voice Alliance panel. ZTE presented data researched in China that basically said people want to use speech recognition on their phones, but they don’t use it because it doesn’t work well enough. I have seen similar data on US mobile phone users, and the automotive industry has also shown data supporting the high level of dissatisfaction with speech recognition.
In fact, when I bought my new car last year I wanted the state of the art in speech recognition to make navigation easier… but sadly I’ve come to learn that the system used in my Lexus just doesn’t work well — even the voice dialing doesn’t work well.
As an industry, I feel we must do better than this, so in this blog I’ll provide my two-cents as to why speech recognition isn’t where it should be today, even when technology that works well exists:
Deep learning nets have enabled some amazing progress in speech recognition over the last five years. The next five years will see embedded recognition with high performance noise cancelling and beamforming coming to the forefront, and Sensory will be leading this charge… and just like how Sensory led the way with the “always on” low-power trigger, I expect to see Google, Apple, Microsoft, Amazon, Facebook and others follow suit.
March 23, 2015
This month had three very different announcements about face recognition from Alibaba, Google, and Microsoft. Nice to see that Sensory is in good company!!!
Alibaba’s CEO Jack Ma discussed and demoed the possibility of using face verification for the very popular Alipay.
A couple of interesting things about this announcement… First, I have to say, with a name like Alibaba, I am a little let down that they’re not using “Open Sesame” as a voice password alongside (or instead of) the face authentication. All joking aside, I do think relying on facial recognition as the sole means of user authentication is risky; they would be better served by a solution that integrates both face and voice recognition (something like our own TrulySecure) to ensure the utmost security of their customers’ linked bank accounts.
Face is considered one of the more “convenient” biometrics because you just hold your phone out and it works! Well, at least it should… A couple of things I noticed in the Alibaba announcement: look at the picture. Jack Ma is using both hands to carefully center his photo, and the image on the phone screen tells us why: he needs to get his face very carefully centered on an outline to make it work. Why? It’s a technique used to improve accuracy, but that improved accuracy trades away the key advantage of face recognition, convenience, to make the solution more robust. The article also notes that it’s a cloud-based solution. To me, cloud-based means slower, dependent on a connection, and putting personal privacy more at risk. At Sensory, we believe in keeping data secure, especially when it comes to something like mobile payments, which is why we design our technologies to be “embedded” on the device — meaning no biometric data has to be sent to the cloud, and our solutions don’t require an internet connection to function. Additionally, with TrulySecure we combine face and voice recognition, making authentication quick and simple, not to mention more secure and less spoofable than face-only solutions. A multi-biometric solution like TrulySecure is also far less environmentally sensitive, and even more convenient!
Mobile pay solutions are on the rise, and as more hit the market, differentiators like authentication approach, solution accuracy, convenience, and most of all data security will be scrutinized more closely. We believe the embedded, multi-biometric approach to user authentication is best for mobile pay solutions.
Also, Google announced that its deep learning FaceNet is nearly 100% accurate.
Everybody (even Sensory) is using deep learning neural net techniques for things like face and speech recognition. Google’s announcement seems to have almost no bearing on its Android-based face authentication, which came in the middle of the pack among the five face authentication systems we recently tested. So why does Google announce this? Two reasons: 1) to react to Baidu’s recent announcement that its deep learning speech recognition is the best in the world, and 2) to counter Facebook’s announcement last year that its DeepFace is the best face recognition in the world. My take: it’s really hard to tell whose solution is best on these kinds of things, and the numbers and percentages can be deceiving. However, Google is clearly running research experiments on high-accuracy face matching, NOT a real-world implementation, while Facebook is using face recognition in a real-world setting to tag photos of you. Real-world facial recognition is WAY harder to perfect, so my praise goes to Facebook for their skill in tagging everyone’s pictures, revealing to our friends and family things they might not otherwise have seen us doing!
Lastly, Microsoft announced Windows Hello.
This is an approach to getting into your Windows device with a biometric (face, iris, or fingerprint). Microsoft has done a very nice job with this. They joined the FIDO Alliance and are using an on-device biometric. This approach is what made sense to us at Sensory, because you can’t just hack into it remotely; you must have the device AND the biometric! They also addressed privacy by storing only a representation of the biometric. I think their use of a 3D IR camera for face ID is a good approach for the future. The extra definition and data should yield much better accuracy than what is possible with today’s standard 2D cameras, and should HELP with convenience because it can handle off-angle faces better and work in the dark. Microsoft claims 1 in 100,000 false accepts (letting the wrong person in). I always think it’s silly when companies make false accept claims without stating the false reject numbers (when the right person doesn’t get in); there’s always a tradeoff. For example, I could say my coffee mug uses a biometric authenticator that lets the right user telepathically levitate it, with less than a 1-in-a-billion false accept rate (it also happens to have a 100% false reject rate, since even the right biometric can’t telepathically levitate it!). Nevertheless, with a 3D camera, I think Microsoft’s face authentication can be more accurate than Sensory’s 2D face authentication. BUT it’s unlikely that face recognition on its own will ever be more accurate than our TrulySecure, which still offers a lower false accept rate than Microsoft claims, and less than a 10% false reject rate to boot!
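The false accept / false reject tradeoff is easy to demonstrate in a few lines of code. This toy Python sketch uses made-up match scores to show that both rates move, in opposite directions, as a single decision threshold changes, which is why quoting FAR without FRR says very little:

```python
# Toy illustration of the false-accept / false-reject tradeoff.
# Both rates depend on the same decision threshold, so quoting one
# without the other is meaningless. All scores here are hypothetical.
genuine_scores  = [0.91, 0.84, 0.88, 0.79, 0.95, 0.73]   # right user
impostor_scores = [0.12, 0.33, 0.41, 0.27, 0.55, 0.61]   # wrong user

def far_frr(threshold, genuine, impostor):
    """False Accept Rate and False Reject Rate at a given threshold."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

# Sweeping the threshold shows the tradeoff: raising it lowers FAR
# but raises FRR, and vice versa.
for t in (0.3, 0.5, 0.8):
    far, frr = far_frr(t, genuine_scores, impostor_scores)
    print(f"threshold={t}: FAR={far:.2f}, FRR={frr:.2f}")
```

With these made-up scores, a low threshold accepts several impostors while rejecting no genuine users, and a high threshold does the reverse: the coffee-mug joke above is just the degenerate case where FAR is driven to zero by pushing FRR to 100%.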
Nevertheless, I like the announcement of 3D cameras for face recognition and am excited to see how their system performs.
June 30, 2014
February 5, 2014
Everyone seems to be talking about this as the year of the wearable. I don’t think so. Even if Apple does introduce a watch, and Google widely releases Glass, will they really go mainstream and sell hundreds of millions of units? I don’t think so. At least not for a few years. IMHO there needs to be a few major breakthroughs:
I’ll be leading a Wearables panel at the Mobile Voice show with an AWESOME group of people representing thought leaders from Google, Pebble, Intel, Xowi, and reQall. Here’s the press release.
August 5, 2013
I often get the question, “If Android and Qualcomm offer voice activation for free, why would anyone license from Sensory?” While I’m not sure about Android and Qualcomm’s business models, I do know that decisions are based on accuracy, total added cost (royalties plus hardware requirements to run), power consumption, support, and other variables. Sensory seems to be consistently winning the shootouts it enters for embedded voice control. Some approaches that appear lower cost require a lot more memory or MIPS, driving up total cost and power consumption.
It’s interesting to note that companies like Nuance have a similar challenge on the server side where Google and Microsoft “give it away”. Because Google’s engine is so good it creates a high hurdle for Nuance. I’d guess Google’s rapid progress helps Nuance with their licensing of Apple, but may have made it more challenging to license Samsung. Samsung actually licensed Vlingo AND Nuance AND Sensory, then Nuance bought Vlingo.
Why doesn’t Samsung use Google recognition if it’s free? On the server it’s not power consumption affecting decisions, but cost, quality, and in this case CONTROL. On the cost side, it could be that Samsung actually MAKES money by using Nuance through some sort of ad-revenue kickbacks, which I’d guess Google doesn’t allow. This is, of course, just hypothesizing; I don’t really know, and if I did know I couldn’t say. The control issue is big too: companies like Sensory and Nuance will sell to everyone, and in that sense offer platform independence and more control. Working with a Microsoft or Google engine forces an investment in a specific platform implementation, and therefore less flexibility to maintain a uniform cross-platform solution.