HEAR ME - Speech Blog


Posts Tagged ‘google’

Google Assistant vs. Amazon’s Alexa

June 15, 2016

“Credit to the team at Amazon for creating a lot of excitement in this space,” said Google CEO Sundar Pichai. He made the comment during his Google I/O keynote last week while introducing Google’s new voice-controlled home speaker, Google Home, which he described in terms that sound a lot like Amazon’s Echo. Many interpreted this as a “thanks for getting it started, now we’ll take over” kind of comment.

Google has always been somewhat marketing challenged in naming its voice assistant. Everyone knows Apple has Siri, Microsoft has Cortana, and Amazon has Alexa. But what is Google’s voice assistant called? Is it Google Voice, Google Now, OK Google, Voice Actions? Even those of us in the speech industry have found Google’s branding to be confusing. Maybe they’re clearing that up now by calling their assistant “Google Assistant.” Maybe that’s the Google way of admitting it’s an assistant without admitting they were wrong by not giving it a human sounding name.

The combination of the early announcement of Google Home and Google Assistant has caused some to comment that Amazon has BIG competition at best, and at worst, Amazon’s Alexa is in BIG trouble.

Forbes called Google’s offering the Echo Killer, while Slate said it was smarter than Amazon’s Echo.

I thought I’d point out a few good reasons why Amazon is in pretty good shape:

  1. Google Home is not shipping. Google has a bit of a chicken-and-egg issue in that it needs to roll out a product that has industry support (for controlling third-party products by voice). How do you get industry partners without a product? You announce early! That was a smart move; now they just need to design it and ship it…not always an easy task.
  2. It’s about Voice Commerce. This is REALLY important. Many people think Google will own this home market because it has a better speech recognizer. Speech recognition capabilities are nice but not the end game. The value here is having a device that’s smart and trusted enough to take money out of our bank accounts and deliver us goods and services that we want when we want them. Amazon has a huge infrastructure lead here in products, reviews, shipping, and other key components of Internet commerce. Adding a convenient voice front end isn’t easy, but it’s also NOT the hardest part of enabling big revenue voice commerce systems.
  3. Amazon has far-field working and devices that always “talk back.” I admit the speech recognition is important, and Google has a lot of data, experience, and technologists in machine learning, AI, and speech recognition. But most of the Google experience is through Android and mobile-phone hardware. Where Amazon has made a mark is in far-field or longer distance recognition that really works, which is not easy to do. Speech recognition has always been about signal/noise ratios and far-field makes the task more difficult and requires acoustic echo cancellation, multiple microphones, plus various gain control and noise filtering/speech focusing approaches. Also, the Google recognizer was established around finding data through voice queries, most of such data being displayed on-screen (and often through search). The Google Home and Amazon Echo are no-screen devices. Having them intelligently talk back means more than just reading the text off a search. Google can handle this, of course, but it’s one more technical barrier that needs to be done right.
  4. Amazon has a head start and is already an industry standard. Amazon’s done a nice job with the Echo. Its follow-on products, Tap and Dot, were intelligent offshoots. Even its Fire TV took advantage of in-house voice capabilities. The Alexa Voice Services work well and are already acting like a standard for voice control. Roughly three million Amazon devices have already been sold, and I’d guess that in the next year the number of Alexa-connected devices will double through both Amazon sales and third parties using AVS. That’s not to mention the tens of millions of devices on the market that can be controlled by the Echo or other Amazon hardware. Amazon is pretty well entrenched!
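As a back-of-the-envelope illustration of point 3, here’s a toy Python calculation (invented numbers, not measurements): assuming free-field propagation, speech level falls about 6 dB per doubling of distance while room noise stays roughly constant, so a far-field device starts with a much worse signal-to-noise ratio than a phone held near the mouth:

```python
import math

def snr_db(signal_db_at_1m, noise_db, distance_m):
    """Toy estimate of speech SNR at the microphone.

    Assumes free-field propagation, where speech level drops about
    6 dB for each doubling of distance, while room noise stays constant.
    """
    attenuation = 20 * math.log10(distance_m)  # dB lost relative to 1 m
    return (signal_db_at_1m - attenuation) - noise_db

# Speech at 65 dB SPL measured at 1 m, steady room noise at 45 dB SPL
for d in (0.1, 1, 3, 5):
    print(f"{d} m: SNR = {snr_db(65, 45, d):.1f} dB")
```

Across the room, the device may have 10–15 dB less SNR to work with than a handset, which is why the echo cancellation, beamforming, and noise filtering mentioned above matter so much.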

Of course, Amazon has its challenges as well, but I’ll leave that for another blog.

Google Play Statistics Show Ease of Use Correlates to Biometric Ratings

December 8, 2015

I saw an interesting press release titled “EyeVerify Gets Positive Feedback From Curious Users”. I know this company as a fellow biometrics vendor selling into some of the same markets as Sensory. I also knew that their Google Play store rating hovered around 3/5, while our AppLock app sits at around 4/5, so I was curious about what this announcement meant. It got me thinking about the power of all the data in the Google Play store, and I decided to take a look at biometric ratings in general to see if there were any interesting conclusions.

Here’s my methodology: I conducted searches for applications in Google Play that use biometrics to lock applications or other things. I wanted the primary review to relate to the biometric itself, so I excluded “pranks” and other apps that provided something other than biometric security. I also rejected apps with fewer than 5,000 downloads to ensure that friends, employees, and families weren’t having a substantive effect on the ratings. I ran a variety of searches for four key biometrics: Eyes, Face, Fingerprint, and Voice.

I did not attempt to exhaust the entire list of biometric apps; I searched under a variety of terms until I had millions of downloads for each category, with a minimum of 25,000 reviews per category. “Eye” was the only biometric category that couldn’t meet this criterion; there I had to be satisfied with 6,884 reviews. Here’s a summary chart of my findings:

[Chart: total downloads, number of apps/companies, number of reviews, and average review rating per biometric category]

As you can see, the chart shows the total number of downloads, the total number of apps/companies, the number of reviews, and the average review rating per biometric category. So, for example, Face had 11 applications with 1.75 million total downloads and just over 25,000 reviews, with an average review rating of 3.89.
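For anyone who wants to reproduce this kind of summary, the per-category average is just a review-count-weighted mean. A minimal Python sketch, using made-up per-app numbers rather than the actual Play store data:

```python
# Hypothetical (rating, review_count) pairs for apps in one category;
# the real chart aggregates many more apps per biometric.
face_apps = [(4.1, 9000), (3.9, 8000), (3.6, 8200)]

def weighted_avg_rating(apps):
    """Average rating weighted by review count, mirroring a store-wide summary."""
    total_reviews = sum(n for _, n in apps)
    return sum(r * n for r, n in apps) / total_reviews

print(round(weighted_avg_rating(face_apps), 2))
```

Weighting by review count keeps one tiny app with a handful of 5-star reviews from skewing the category average.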

What’s most interesting about the findings is that they point to HIGHER RATINGS FOR EASIER-TO-USE BIOMETRICS. The correlation is direct: Face comes in first and is clearly the easiest biometric to use. Voice is somewhat more intrusive, as the user must speak, and the rating drops by 0.16 to 3.73, though this segment does seem to receive the most consumer interest, with more than 5 million downloads. Fingerprint is today’s most common biometric but is often criticized for its two-hand requirement and for failing often enough that users must re-swipe; consumer satisfaction with fingerprint is about 3.67. Eye came in last, albeit with the least data, but numbers don’t lie, and the average consumer rating for that biometric comes in at about 3.42. If you consider the large number of reviews in this study and the narrow range of review scores (which typically run from 2.5 to 4.5), the statistical significance of these differences becomes apparent.

The results were not really a surprise to me. When we first developed TrulySecure, it was based on the premise that users wanted a more convenient biometric without sacrificing security, so we focused on COMBINING the two most convenient biometrics (face and voice) to produce a combined security that could match the most stringent of requirements.


Guest Blog – Rise of the Machines (Learning)

November 12, 2015

A really smart guy told me years ago that neural networks would prove to be the second best solution to many problems.  While he was right about lots of stuff, he missed that one!  Out of favor for years, neural networks have enjoyed a resurgence fueled by advances in deep machine learning techniques and the processing power to implement them.  Neural networks are now seen to be the leading solution to a host of challenges around mimicking how the brain recognizes patterns.

Google’s Monday announcement that it was releasing its TensorFlow machine learning system on an open-source basis underscores the significance of these advances, and further validates Sensory’s 22-year commitment to machine learning and neural networks.  TensorFlow is intended to be used broadly by researchers and students “wherever researchers are trying to make sense of very complex data — everything from protein folding to crunching astronomy data”.  The initial release of TensorFlow runs on a single machine, with versions for distributed systems of many machines to follow in the months ahead, Google said.

Microsoft also had cloud-based machine learning news on Monday, announcing an upgrade to Project Oxford’s facial recognition API launched in May specifically for the Movember Foundation’s no-shave November fundraising effort: a facial hair recognition API that can recognize moustache and beard growth and assign it a rating (as well as adding a moustache “sticker” to the faces of facial hair posers).

Project Oxford’s cloud-based services are based on the same technology used in Microsoft’s Cortana personal assistant and the Skype Translator service, and also offer emotion recognition, spell check, video processing for facial and movement detection, speaker recognition and custom speech recognition services.

While Google and Microsoft have announced some impressive machine-learning capabilities in the cloud, Sensory uniquely combines voice and face for authentication and improved intent interpretation on device, complementing what the big boys are doing.

From small-footprint neural networks for noise-robust voice triggers and phrase-spotted commands, to large-vocabulary recognition leveraging a unique deep-learning neural network that achieves acoustic models an order of magnitude smaller than the present state of the art, to convolutional neural networks deployed in the biometric fusion of face and voice modalities for authentication, all on device and without requiring any cloud component, Sensory continues to lead in applying state-of-the-art machine learning to embedded solutions.

Not bad company to keep!

Bernard Brafman
Vice President of Business Development

OK, Amazon!

May 4, 2015

I was at the Mobile Voice Conference last week and was on a keynote panel with Adam Cheyer (Siri, Viv, etc.) and Phil Gray (Interactions), with Bill Meisel moderating. One of Bill’s questions was about the best speech products, and of course there was a lot of banter about Siri, Cortana, and Voice Actions (or Google Now, as it’s often called). When it was my turn to chime in, I spoke about Amazon’s Echo and heaped lots of praise on it. I had done a bit of testing on it before the conference, but I didn’t own one. I decided to buy one from eBay, since Amazon didn’t seem to ever get around to selling me one. It arrived yesterday.

Here are some miscellaneous thoughts:

  • Echo is a fantastic product! Not so much because of what it is today but for the platform it’s creating for tomorrow. I see it as every bit as revolutionary as Siri.
  • The naming is really confusing. You call it Alexa but the product is Echo. I suspect this isn’t the blunder that Google made (VoiceActions, GoogleNow, GoogleVoice, etc.), but more an indication that they are thinking of Echo as the product and Alexa as the personality, and that new products will ship with the same personality over time. This makes sense!
  • Setup was really nice and easy, the music content integration/access is awesome, the music quality could be a bit better but is useable; there’s lots of other stuff that normal reviewers will talk about…But I’m not a “normal” reviewer because I have been working with speech recognition consumer electronics for over 20 years, and my kids have grown up using voice products, so I’ll focus on speech…
  • My 11 year old son, Sam, is pretty used to me bringing home voice products, and is often enthusiastic (he insisted on taking my Vocca voice controlled light to keep in his room earlier this year). Sam watched me unpack it and immediately got the hang of it and used it to get stats on sports figures and play songs he likes. Sam wants one for his birthday! Amazon must have included some kids voice modeling in their data because it worked pretty well with his voice (unlike the Xbox when it first shipped, which I found particularly ironic since Xbox was targeting kids).
  • The Alexa trigger works VERY well. They have implemented beamforming and echo cancellation in a state-of-the-art implementation. The biggest issue is that it’s a very bandwidth-intensive approach and is not low power. Green is in! That could be why it’s plug-in/AC only and not battery powered. Noise near the speaker definitely hurts performance, as does distance, but it absolutely represents a new dimension in voice usability from a distance, and unlike with the Xbox, you can move anywhere around it and aren’t forced into a stationary position (thanks to its 7 mics, which surely must be overkill!).
  • The voice recognition in general is good, but like all of the better engines today (Google, Siri, Cortana, and even Sensory’s TrulyNatural), it needs to get better. We did have a number of problems where Alexa got confused. Also, Alexa doesn’t appear to have memory of past events, which I expect will improve with upgrades. I tried playing the band Cake (a short word, making it more difficult) and it took about 4 attempts until it said “Would you like me to play Cake?” Then I made the mistake of trying “uh-huh” instead of “yes” and I had to start all over again!
  • My FAVORITE thing about the recognizer is that it does ignore things very nicely. It’s very hard to know when to respond and when not to. The Voice Assistants (Google, Siri, Cortana) seem to always defer to web searches and say things like “It’s too noisy” no matter what I do, and I thought Echo was good at deciding not to respond sometimes.

OK, Amazon… here’s my free advice (admittedly self-serving but nevertheless accurate):

  • You need to know who is talking and build models of their voices and remember who they are and what their preferences are. Sensory has the BEST embedded speaker identification/verification engine in the world, and it’s embedded so you don’t need to send a bunch of personal data into the cloud. Check out TrulySecure!
  • In fact, if you added a camera to Alexa, it too could be used for many vision features, including face authentication.
  • Make it battery powered and portable! To do this, you’d need an equally good embedded trigger technology that runs at low power – Check out TrulyHandsfree!
  • If it’s going to be portable, then it needs to work even when not connected to the Internet. For this, you’d need an amazing large-vocabulary embedded speech engine. Did I tell you about TrulyNatural?
  • Of course, the hope is that the product-line will quickly expand and as a result, you will then add various sensors, microphones, cameras, wheels, etc.; and at the same time, you will also want to develop lower cost versions that don’t have all the mics and expensive processing. You are first to market and that’s a big edge. A lot of companies are trying to follow you. You need to expand the product-line quickly, learning from Alexa. Too many big companies have NIH syndrome… don’t be like them! Look for partnering opportunities with 3rd parties who can help your products succeed – Like Sensory! ;-)

Going Deep Series – Part 3 of 3

May 1, 2015


Winning on Accuracy & Speed… How can a tiny player like Sensory compete in deep learning technology with giants like Microsoft, Google, Facebook, Baidu and others?

There are a number of ways; let me address them specifically:

  1. Personnel: We all know it’s about quality, not quantity. I’d like to think that at Sensory we hire higher-caliber engineers than they do at Google and Microsoft; maybe that’s true to an extent, but probably not when comparing their best with our best. We probably do, however, have less turnover. Less turnover means our experience and knowledge base is more likely to stay in house rather than walk off to our competitors, or get lost because it wasn’t documented.
  2. Focus and strategy: Sensory’s ability to stay ahead in the field of speech recognition and vision is because we have remained quite focused and consistent from our start. We pioneered the use of neural networks for speech recognition in consumer products. We were focused on consumer electronics before anyone thought it was a market…more than a dozen years before Siri!
  3. “Specialized” learning: Deep learning works. But Sensory has a theory that it can also be destructive when individual users fall outside the learned norms. Sensory learns deep on a general usage model, but once we go on device, we learn shallow through a specialized adaptive process. We learn to the specifics of the individual users of the device, rather than to a generalized population.
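As a rough illustration of that third point (Sensory’s actual models and training procedure are not public, so this is purely a sketch of the general idea), here’s a minimal Python example: a frozen “general” model trained offline, plus a tiny per-user layer that adapts cheaply on device:

```python
# Sketch of "deep general, shallow personal" adaptation (illustrative only).

def general_score(features):
    """Stand-in for a deep model trained offline on a general population.
    Frozen on device; returns a match score for an utterance."""
    w = [0.4, 0.3, 0.3]  # weights learned from the general population
    return sum(wi * fi for wi, fi in zip(w, features))

class UserAdapter:
    """Shallow on-device layer: adapts only a bias toward this user's voice."""
    def __init__(self, rate=0.2):
        self.bias = 0.0
        self.rate = rate

    def score(self, features):
        return general_score(features) + self.bias

    def adapt(self, features, target):
        # Nudge only the bias toward the user's observed scores; the deep
        # weights stay fixed, so adaptation is cheap and reversible.
        error = target - self.score(features)
        self.bias += self.rate * error

user = UserAdapter()
sample = [0.5, 0.2, 0.1]          # features from one user's utterance
for _ in range(20):
    user.adapt(sample, target=1.0)  # the user's enrollment utterances
print(round(user.score(sample), 2))  # converges toward the target of 1.0
```

Because only the shallow layer moves, the device personalizes to its owner without disturbing the general model learned from the whole population.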

These 3 items together have provided Sensory with the highest quality embedded speech engines in the world. It’s worth reiterating why embedded is needed, even if speech recognition can all be done in the cloud:

  1. Privacy: Privacy is at the forefront of today’s most heated topics. There is growing concern about “big brother” organizations (and governments) that know the intimate details of our lives. Embedded speech recognition can help improve privacy by not sending personal data into the cloud for analysis.
  2. Speed: Embedded speech recognition can be blazing fast and consistently available. Accessing online or cloud-based recognition services can be spotty when Internet connections are unstable, and such services are not always available.
  3. Accuracy: Embedded speech systems have the potential advantage of a superior signal to noise ratio and don’t risk data loss or performance issues due to a poor or non-existent connection.


Good Technology Exists – So Why Does Speech Recognition Still Fall Short?

March 30, 2015

At Mobile World Congress, I participated in ZTE’s Mobile Voice Alliance panel. ZTE presented data researched in China that basically said people want to use speech recognition on their phones, but they don’t use it because it doesn’t work well enough. I have seen similar data on US mobile phone users, and the automotive industry has also shown data supporting the high level of dissatisfaction with speech recognition.

In fact, when I bought my new car last year I wanted the state of the art in speech recognition to make navigation easier… but sadly I’ve come to learn that the system used in my Lexus just doesn’t work well — even the voice dialing doesn’t work well.

As an industry, I feel we must do better than this, so in this blog I’ll provide my two-cents as to why speech recognition isn’t where it should be today, even when technology that works well exists:

  1. Many core algorithms, especially the ones provided to the automotive industry are just not that good. It’s kind of ironic, but the largest independent supplier of speech technologies actually has one of the worst performing speech engines. Sadly, it’s this engine that gets used by many of the automotive companies, as well as some of the mobile companies.
  2. Even many of the good engines don’t work well in noise. In many tests, Google’s speech recognition comes in on top, but when the environment gets noisy, even Google fails. I use my Moto X to voice dial while driving (at least I try to). I also listen to music while driving. The “OK Google Now” trigger works great (kudos to Sensory!), but everything I say after that gets lost, and I see an “it’s too noisy” message from Google. I end up turning down the radio to voice dial, or I use Sensory’s VoiceDial app, because Sensory always works… even when it’s noisy!
  3. Speech application designs are really bad. I was using the recognizer last week on a popular phone. The room was quiet, I had a great Internet connection, and the recognizer was working great, but as a user I was totally confused. I said “set alarm for 4am” and it accurately transcribed “set alarm for 4am”, but rather than confirming that the alarm was set for 4am, it asked me what I wanted to do with the alarm. I repeated the command; it accurately transcribed again and asked one more time what I wanted to do with the alarm. Even though it was recognizing correctly, it was interfacing so poorly with me that I couldn’t tell what was happening, and it didn’t appear to be doing what I asked. Simple and clear application designs can make all the difference in the world.
  4. Wireless connections are unreliable. This is a HUGE issue. If the recognizer only works when there’s a strong Internet connection, then the recognizer is going to fail A GREAT DEAL of the time. My prediction – over the next couple of years, the speech industry will come to realize that embedded speech recognition offers HUGE advantages over the common cloud based approaches used today – and these advantages exist in not just accuracy and response time, but privacy too!

Deep learning nets have enabled some amazing progress in speech recognition over the last five years. The next five years will see embedded recognition with high performance noise cancelling and beamforming coming to the forefront, and Sensory will be leading this charge… and just like how Sensory led the way with the “always on” low-power trigger, I expect to see Google, Apple, Microsoft, Amazon, Facebook and others follow suit.

Interesting Announcements from Alibaba, Google and Microsoft

March 23, 2015

This month had three very different announcements about face recognition from Alibaba, Google, and Microsoft. Nice to see that Sensory is in good company!!!

Alibaba’s CEO Jack Ma discussed and demoed the possibility of using face verification for the very popular Alipay.

A couple interesting things about this announcement…First, I have to say, with a name like Alibaba, I am a little let down that they’re not using “Open Sesame” as a voice password to go with or instead of the face authentication… All joking aside, I do think relying on facial recognition as the sole means of user authentication is risky, and think they would be better served using a solution that integrates both face and voice recognition (something like our own TrulySecure), to ensure the utmost security of their customers’ linked bank accounts.

Face is considered one of the more “convenient” biometrics because you just hold your phone out and it works! Well, at least it should… A couple of things I noticed in the Alibaba announcement: Look at the picture… Jack Ma is using both hands to carefully center his photo, and the image on the phone screen tells us why. He needs to get his face very carefully centered on the outline to make it work. Why? It’s a technique used to improve accuracy, but it trades off the key advantage of face recognition, convenience, to make the solution more robust. The article also notes that it’s a cloud-based solution. To me, cloud-based means slower, dependent on a connection, and putting personal privacy more at risk. At Sensory, we believe in keeping data secure, especially when it comes to something like mobile payments, which is why we design our technologies to be “embedded” on the device – meaning no biometric data has to be sent to the cloud, and our solutions don’t require an Internet connection to function. Additionally, with TrulySecure, we combine face and voice recognition, making authentication quick and simple, not to mention more secure and less spoofable than face-only solutions. With a multi-biometric authentication solution like TrulySecure, the biometric is far less environmentally sensitive and even more convenient!

Mobile pay solutions are on the rise, and as more hit the market, differentiators like authentication approach, solution accuracy, convenience, and most of all data security will be looked at ever more closely. We believe the embedded multi-biometric approach to user authentication is best for mobile pay solutions.

Also, Google announced that its deep learning FaceNet is nearly 100% accurate.

Everybody (even Sensory) is using deep learning neural net techniques for things like face and speech recognition. Google’s announcement seems to have almost no bearing on its Android-based face authentication, which came in the middle of the pack of the five face authentication systems we recently tested. So why does Google announce this? Two reasons: 1) as a reaction to Baidu’s recent announcement that its deep learning speech recognition is the best in the world; and 2) to counter Facebook’s announcement last year that its DeepFace is the best face recognition in the world. My take: it’s really hard to tell whose solution is best on these kinds of things, and the numbers and percentages can be deceiving. However, Google is clearly doing research experiments on high-accuracy face matching, NOT a real-world implementation, while Facebook is using face recognition in a real-world setting to tag photos of you. Real-world facial recognition is WAY harder to perfect, so my praise goes out to Facebook for their skill in tagging everyone’s pictures, revealing to our friends and family things they might not otherwise have seen us doing!

Lastly, Microsoft announced Windows Hello.

This is an approach to getting into your Windows device with a biometric (face, iris, or fingerprint). Microsoft has done a very nice job with this. They joined the FIDO Alliance and are using an on-device biometric. This approach is what made sense to us at Sensory, because you can’t just hack into it remotely; you must have the device AND the biometric! They also addressed privacy by storing only a representation of the biometric. I think their approach of using a 3D IR camera for face ID is a good one for the future. The extra definition and data should yield much better accuracy than what is possible with today’s standard 2D cameras, and should HELP with convenience because it could work better at angles and in the dark. Microsoft claims 1 in 100,000 false accepts (letting the wrong person in). I always think it’s silly when companies make false accept claims without stating the false reject numbers (how often the right person doesn’t get in). There’s always a tradeoff. For example, I could say my coffee mug uses a biometric authenticator to let the right user telepathically levitate it, and that it has less than a 1 in a billion false accept rate (it also happens to have a 100% false reject rate, since even the right biometric can’t telepathically levitate it!). Nevertheless, with a 3D camera, I think Microsoft’s face authentication can be more accurate than Sensory’s 2D face authentication. BUT, it’s unlikely that the face recognition on its own will ever be more accurate than our TrulySecure, which still offers a lower false accept rate than Microsoft’s – and less than a 10% false reject rate to boot!
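The tradeoff the coffee-mug joke is poking at can be made concrete: a single acceptance threshold on a match score moves the false accept rate (FAR) and false reject rate (FRR) in opposite directions. A toy Python sketch with invented scores, not data from any real system:

```python
# Illustrative FAR/FRR tradeoff with made-up match scores; real biometric
# systems measure these over large datasets, and the threshold sets the balance.
genuine  = [0.91, 0.85, 0.78, 0.88, 0.62, 0.95]  # right-person attempts
impostor = [0.20, 0.35, 0.55, 0.10, 0.42, 0.66]  # wrong-person attempts

def rates(threshold):
    far = sum(s >= threshold for s in impostor) / len(impostor)  # false accepts
    frr = sum(s <  threshold for s in genuine)  / len(genuine)   # false rejects
    return far, frr

for t in (0.5, 0.7, 0.9):
    far, frr = rates(t)
    print(f"threshold {t}: FAR {far:.0%}, FRR {frr:.0%}")
```

Raising the threshold drives FAR toward zero while FRR climbs, which is exactly why a false accept claim means little without the matching false reject number.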

Nevertheless, I like the announcement of 3D cameras for face recognition and am excited to see how their system performs.

Random Blogger Thoughts

June 30, 2014

  • TrulySecure™ is now announced!!!! This is the first on-device fusion of voice and vision for authentication, and it really works AMAZINGLY well. I’m so proud of our new computer vision team and of Sensory’s expansion from speech recognition to speech and vision technologies. Now we are much more than “The Leader in Speech Technologies for Consumer Electronics” – we are “The Leader in Speech and Vision Technology for Consumer Products!” Check out the new TrulySecure video on our home page, and our new TrulySecure Product Brief. We hope and expect TrulySecure will have the same HUGE impact on the market as Sensory had with TrulyHandsfree, the technology that pioneered always-on touchless control!
  • Google I/O. Android wants to be everywhere: in our cars, in our homes, and in our phones. They are willing to spend billions of dollars to do it. Why? To observe our behaviors, which in turn will help provide us more of what we want…and they will also assist in those purchases. Of course this is what Microsoft and Apple and others want as well, but right now Google has the best cloud based voice experience, and if you ask me it’s the best user experience that will win the game. Seems like they should try and move ahead on the client, but lucky for Sensory we are staying ahead!
  • Rumors about Samsung acquiring Nuance…Why would they spend $7B for Nuance when they can pick up a more unique solution from Sensory for only $1B? Yeah, that’s a joke, and is definitely not intended as an offer or solicitation to sell Sensory!
  • OH! Sensory has a new logo! We made it to celebrate our 20 year anniversary!

Mobile Voice 2014 and The Year of Wearables

February 5, 2014

Everyone seems to be talking about this as the year of the wearable. I don’t think so. Even if Apple does introduce a watch, and Google widely releases Glass, will they really go mainstream and sell hundreds of millions of units? I don’t think so. At least not for a few years. IMHO there needs to be a few major breakthroughs:

  1. Apps. Yeah we always need a killer app. I don’t think sending little messages and alerts is enough. The killer app could be a great music player…maybe one that’s completely voice controlled? Glass has the potential to augment my knowledge without my asking and that could be really cool, basically look and learn!
  2. Power. Why hasn’t battery power advanced beyond lithium? I’m hoping for energy harvesting breakthroughs that will allow devices to last and be tiny…to fulfill number 3.
  3. Invisibility. I stopped wearing a watch when I began carrying a smart phone. I never wear my wedding ring. I need something pretty comfortable and compelling to dangle electronics off of my body. What I really want is something invisible or near invisible. Moto has a tattoo patent for electronics, right? Then there’s the micro-electronic pills…when will we have seamless attachments to augment our abilities?
  4. Untethered. It would be really cool if I could travel around town without having to carry my phone to use a wearable. It kind of does defeat the purpose. It isn’t that hard to pull my phone out. If I could go a few miles that would be nice…20 would be even better. A completely untethered self-contained unit would be nice, but unlikely to be invisible!

I’ll be leading a Wearables panel at the Mobile Voice show with an AWESOME group of people representing thought leaders from Google, Pebble, Intel, Xowi, and reQall. Here’s the press release on it.

The price of free phone features

August 5, 2013

I often get the question, “If Android and Qualcomm offer voice activation for free, why would anyone license from Sensory?” While I’m not sure about Android and Qualcomm’s business models, I do know that decisions are based on accuracy, total added cost (royalties plus hardware requirements to run), power consumption, support, and other variables. Sensory seems to be consistently winning the shootouts it enters for embedded voice control. Some approaches that appear lower cost require a lot more memory or MIPS, driving up total cost and power consumption.

It’s interesting to note that companies like Nuance face a similar challenge on the server side, where Google and Microsoft “give it away”. Because Google’s engine is so good, it creates a high hurdle for Nuance. I’d guess Google’s rapid progress helps Nuance with its Apple license but may have made licensing Samsung more challenging. Samsung actually licensed Vlingo AND Nuance AND Sensory; then Nuance bought Vlingo.

Why doesn’t Samsung use Google’s recognizer if it’s free? On the server it’s not power consumption affecting decisions, but cost, quality, and in this case CONTROL. On the cost side, it could be that Samsung MAKES more money by using Nuance through some sort of ad-revenue kickbacks, which I’d guess Google doesn’t allow. This is, of course, just hypothesizing; I don’t really know, and if I did know, I couldn’t say. The control issue is big too, as companies like Sensory and Nuance will sell to everyone, and in that sense offer platform independence and more control. Working with a Microsoft or Google engine forces an investment in a specific platform implementation, and therefore less flexibility to have a uniform cross-platform solution.
