Posts Tagged ‘Apple’
August 6, 2018
Apple introduced Siri in 2011 and my world changed. I was running Sensory back then as I am today and suddenly every company wanted speech recognition. Sensory was there to sell it! Steve Jobs, a notorious nay-sayer on speech recognition, had finally given speech recognition the thumbs up. Every consumer electronics company noticed and decided the time had come. Sensory’s sales shot up for a few years driven by this sudden confidence in speech recognition as a user interface for consumer electronics.
Fast forward to today and Apple has just become the first trillion-dollar US company in terms of market capitalization. One trillion dollars is an arbitrary round number with a lot of zeroes, but it is psychologically very important. It was winning a race between Cook, Bezos, the Google/Alphabet crew, and others, a race most of the contestants would say doesn't really matter and that they weren't even running. But they were, and they all wanted to win. Without question it was quarterly financial results that pushed Apple to the magic number and beat Amazon, Google, and Microsoft to the trillion-dollar spot. I wouldn't argue that Siri got them there, but I would argue that Siri didn't stop them, and this is important.
SIRI WAS FIRST, BUT QUICKLY LOST THE VOICE LEAD TO RIVALS
Then in 2014 Amazon introduced the Echo smart speaker with Alexa and beat Apple and others into the home with a useable voice assistant. Alexa came out strong and got stronger quickly. Amazon amassed over 5,000 people into what is likely the largest speech recognition team in the world. Google got punched but wasn’t knocked out. Its AI team kept growing and Google had a very strong reputation in academia as hiring the best and brightest machine learning and AI folks out of PhD programs. By 2016, Google had introduced its own smart speaker, and by CES 2018, Google made a VERY strong marketing statement that it was still in the game.
APPLE FOCUSED ELSEWHERE
AI ASSISTANTS DRIVE CONSUMER LOCK-IN
The assistants aren’t sold and so they don’t directly make money but they can be used as purchasing agents (where Amazon makes a lot of money), advertising agents (where Google makes its money), access to entertainment services (where all the big guys make money) and as a user experience for consumer electronics (where Apple makes a lot of money). The general thinking is that the more an assistant is used, the more it learns about the user, the better it serves the user, and the more the user is locked in! So winning in the AI Assistant game is HUGELY important and recent changes at Apple show that Siri is quickly coming up in the rankings and could have more momentum right now than in its entire history. That’s why Siri didn’t stop Apple from reaching $1T.
SIRI ON THE RISE
It may have taken a while but Apple seems serious. It’s nice to have a pioneer in the space not stay down for the count!
August 6, 2018
Here's the basic motivation that I see in creating Voice Assistants…Build a cross-platform user experience that makes it easy for consumers to interact with, control, and request things through their assistant. This will ease adoption and bring more power to consumers, who will use the products more and in doing so create more data for the cloud providers. This "data" will include all sorts of preferences, requests, searches, and purchases, and will allow the assistants to learn more and more about the users. The more the assistant knows about any given user, the BETTER the assistant can help the user in providing services such as entertainment and assisting with purchases (e.g. offering special deals on things the consumer might want). Let's look at each of these in a little more detail:
1. Owning the cross-platform user experience and collecting user data to make a better Voice Assistant.
Owning the user experience on a single device is not good enough. The goal of each of these voice assistants is to be your personal assistant across devices. On your phone, in your home, in your car, wherever you may go. This is why we see Alexa and Google and Siri all battling for, as an example, a position in automotive. Your assistant wants to be the place you turn for consistent help. In doing so it can learn more about your behaviors…where you go, what you buy, what you are interested in, who you talk to, and what your history is. This isn’t just scary big brother stuff. It’s quite practical. If you have multiple assistants for different things, they may each think of you and know you differently, thereby having a less complete picture. It’s really best for the consumer to have one assistant that knows you best.
For example, let's take the simple case of finding food when I'm hungry. I might say "I'm hungry." The assistant's response would be much more helpful the more it knows about me. Does it know I'm a vegetarian? Does it know where I'm located, or whether I am walking or driving? Maybe it knows I'm home and what's in my refrigerator, and can suggest a recipe…does it know my food/taste preferences? How about cost preferences? Does it have the history of what I have eaten recently, and know how much variety I'd like? Maybe it should tell me something like "Your wife is at Whole Foods, would you like me to text her a request or call her for you?" It's easy to see how a voice assistant could really be quite helpful the more it knows about you. But with multiple assistants in different products and locations, no single assistant's picture would be as complete. In this example one might know I'm home but NOT know what's in my fridge. Or it might know what's in the fridge and know I'm home but NOT know my wife is currently shopping at Whole Foods, etc.
The more I use my assistant across more devices in more situations and over more time, the more data it could gather and the better it should get at servicing my needs and assisting me! It’s easy to see that once it knows me well and is helping me with this knowledge it will get VERY sticky and become difficult to get me to switch to a new assistant that doesn’t know me as well.
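The context-fusion idea above can be sketched in a few lines of code. This is purely an illustration with invented data (the device names, context keys, and suggestions are all hypothetical), not any real assistant's API:

```python
# Hypothetical sketch of context fusion across devices.
# Each "device" contributes partial knowledge about the user; the assistant
# merges whatever is available and picks the most specific suggestion it can.

def merge_context(*sources):
    """Merge partial context dicts; earlier sources win on conflicts."""
    merged = {}
    for src in sources:
        for key, value in src.items():
            merged.setdefault(key, value)
    return merged

def suggest_meal(context):
    """Return the most helpful response the available context supports."""
    if context.get("at_home") and "fridge_items" in context:
        return f"Recipe idea using {context['fridge_items'][0]}"
    if "location" in context and context.get("diet") == "vegetarian":
        return f"Vegetarian restaurants near {context['location']}"
    if "location" in context:
        return f"Restaurants near {context['location']}"
    return "Tell me more about where you are and what you like."

phone  = {"location": "downtown", "diet": "vegetarian"}
fridge = {"at_home": True, "fridge_items": ["tofu", "peppers"]}

# One assistant that sees both devices gives the most specific answer...
print(suggest_meal(merge_context(fridge, phone)))  # Recipe idea using tofu
# ...while a fragmented assistant that only sees the phone falls back to search.
print(suggest_meal(merge_context(phone)))          # Vegetarian restaurants near downtown
```

The point of the sketch is the stickiness argument: the assistant that sees every device can give the most specific answer, while a fragmented assistant degrades to generic search.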
2. Entertainment and other service package sales.
3. Selling and recommending products to consumers
It would be really obnoxious if Alexa or Siri or Cortana or Google Assistant suddenly suggested I buy something that I wasn't interested in, but what if it knew what I needed? For example, it could track vitamin usage and ask if I want more before they run out, or it could know how frequently I wear out my shoes and recommend a sale on my brand in my size when I really need them. The more my assistant knows me, the better it can "advertise" and sell to me in a way that's NOT obnoxious but really helpful. And of course make extra money in the process!
October 12, 2017
Amazon, Google, Sonos, and LINE all introduced smart speakers within a few weeks of each other. Here's my quick take and commentary on those announcements.

Amazon now has the new Echo, the old Echo, the Echo Plus, Spot, Dot, Show, and Look. The company is improving quality, adding incremental features, lowering cost, and seemingly expanding its leadership position. They make great products for consumers, have a very strong eco-system, and make very tough products to compete with for both their competitors and their many platform partners that use Alexa. Seems that their branding strategy is to use short three- or four-letter names that have Os. The biggest thing that was missing was speaker identification to know who's talking to it. Interestingly, Amazon just added that capability.
Google execs wore black shirts and jeans in a very ironic-seeming Steve Jobs fashion. They attacked the Amazon Dot with their Mini, and announced the Max to compete with the quality expectations of Sonos and Apple. I didn’t find much innovation in the product line or in their dress, but I’d still rank the Google Assistant as the most capable assistant I’ve used. Of course, Google got caught stealing data, so it makes sense they have more knowledge about us and can make a better assistant.
Sonos invented the Wi-Fi speaker market and has always been known for quality. They announced the Sonos One at a surprisingly aggressive $199 price point. Their unique play is to support Alexa, Assistant, and Siri, starting first with Alexa. This would put price pressure on Apple's planned $349 HomePod, but my guess is that Apple will aggressively sell the HomePod into its captive and demographically wealthy market before it allows Sonos to incorporate Siri. Like Apple, Sonos will have a nice edge in being able to sell into its existing customer base, who will certainly want the added convenience and capability of voice control with their choice of assistant.
American readers might not be familiar with LINE, but the company offers a hugely popular communications app that's been downloaded by about a billion people. They're big in Japan and owned by Naver, an even bigger Korean company that's also working on a smart speaker.
Most notable about LINE (besides the unique looking speaker that resembles a cone with the top cut off) is that it appears that they’re not only beating Amazon, Google, Apple, and Sonos to Japan, but they’re also getting there before the Japanese giants like Docomo, Sony, Sharp, and Softbank. And all of these companies are making smart speakers.
Then, there are the Chinese giants who are all making smart speakers, and the old-school speaker companies who are trying to get into the game. It’s going to be crowded very quickly, and I’m very excited to see quality going up and costs staying low.
September 15, 2017
On the same day that Apple rolled out the iPhone X on the coolest stage of the coolest corporate campus in the world, Sensory gave a demo of an interactive talking and listening avatar that uses a biometric ID to know who’s talking to it. In Trump metrics, the event I attended had a few more attendees than Apple.
Interestingly, Sensory’s face ID worked flawlessly, and Apple’s failed. Sensory used a traditional camera using convolutional neural networks with deep learning anti-spoofing models. Apple used a 3D camera.
There are many theories about what happened with FaceID at Apple. Let’s discuss what failure even means and the effects of 2D versus 3D cameras. There are basically three classes of failure: accuracy, spoofability, and user experience. It’s important to understand the differences between them.
It's easy to reach one-in-a-million or one-in-a-billion false accept (FA) rates by making the system falsely reject (FR) all of the time. For example, a rock will never respond to the wrong person… it also won't respond to the right person! This is where Apple failed. They might have had an amazing false accept rate, but they hit two false rejects on stage!
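The rock analogy can be made concrete with a toy threshold sweep. The score distributions below are invented purely for illustration (real face-ID scores are nothing this clean), but they show how one threshold trades false accepts against false rejects, and where the two rates meet:

```python
# Toy illustration of the FA/FR tradeoff (all score distributions invented).
# Genuine users score high, impostors score low; a single threshold trades
# false accepts (impostors above it) against false rejects (genuines below it).
import random

random.seed(0)
genuine  = [random.gauss(0.7, 0.1) for _ in range(10_000)]  # right person's scores
impostor = [random.gauss(0.3, 0.1) for _ in range(10_000)]  # wrong person's scores

def rates(threshold):
    fa = sum(s >= threshold for s in impostor) / len(impostor)  # wrong person gets in
    fr = sum(s <  threshold for s in genuine)  / len(genuine)   # right person locked out
    return fa, fr

# The "rock": a threshold so high nobody gets in. FA is zero, FR is 100%.
fa, fr = rates(2.0)
print(fa, fr)  # 0.0 1.0

# Sweep thresholds to find where FA and FR meet (the equal error rate point).
eer_threshold = min((t / 100 for t in range(100)),
                    key=lambda t: abs(rates(t)[0] - rates(t)[1]))
print(round(eer_threshold, 2))  # lands roughly midway between the two distributions
```

Pushing the threshold up makes FA look spectacular while quietly wrecking FR, which is exactly the failure mode seen on stage.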
I believe that there is too much emphasis placed on FA. The presumption is random users trying to break in, and 1 in 50,000 seems fine. The break-in issue typically relates to spoofability, which needs to be thought of in a different way – it’s not a random face, it’s a fake face of you.
Every biometric that gets introduced gets spoofed. Gummy bears, cameras, glue, and tape were all used to spoof fingerprints. Photos, masks, and videos have been used to spoof faces.
To prevent this, Sensory built anti-spoof models that weaken the probability of spoofing. 3D cameras also make it easier to reduce spoofs, and Apple moved in the right direction here. But the real solution is to layer biometrics, using additional layers when more security is needed.
Apple misfires on UX?
Apple tuned FaceID for such a low FA rate that it hurt the consumer experience by rejecting too much, which is what we saw on stage. But there's more to it in the tradeoffs.
The easiest way to prevent spoofing is to get the user to do unnatural things, live and randomly. Blinking was a less intrusive version that Google and others have tried, but a photo with the eyes cut out could spoof it.
Having people turn their face, widen their nostrils, or look in varying directions might help prevent spoofing, but it also hurts the user experience. The trick is to get more intrusive only when the security needs demand it. Training the device is also part of the user experience.
June 4, 2014
It was about 4 years ago that Sensory partnered with Vlingo to create a voice assistant with a special “in car” mode that would allow the user to just say “Hey Vlingo” then ask any question. This was one of the first “TrulyHandsfree” voice experiences on a mobile phone, and it was this feature that was often cited for giving Vlingo the lead in the mobile assistant wars (and helped lead to their acquisition by Nuance).
About 2 years ago Sensory introduced a few new concepts including "trigger to search" and our "deeply embedded" ultra-low power always listening (now down to under 2mW, including the audio subsystem!). Motorola took advantage of these approaches from Sensory and created what I most biasedly think is the best voice experience on a mobile phone. Samsung too has taken the Sensory technology and used it in a number of very innovative ways, going beyond mere triggers and using the same noise-robust technology for what I call "sometimes always listening". For example, when the camera is open it is always listening for "shoot," "photo," "cheese," and a few other words.
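The "sometimes always listening" idea can be sketched as a simple gate that only arms a small keyword spotter while the relevant app context is active. The contexts, vocabularies, and method names below are hypothetical illustrations, not Sensory's or Samsung's actual APIs:

```python
# Sketch of "sometimes always listening": arm a tiny keyword spotter only
# while a relevant app context is active, so the always-on path runs (and
# burns power) only when its vocabulary could actually be useful.

CONTEXT_KEYWORDS = {
    "camera": ["shoot", "photo", "cheese"],
    "car":    ["hey assistant"],
}

class KeywordGate:
    def __init__(self):
        self.active_words = []

    def on_context_change(self, context):
        # Swap the small vocabulary in and out as the user changes contexts.
        self.active_words = CONTEXT_KEYWORDS.get(context, [])

    def on_spotted(self, word):
        # Called by the low-power spotter; act only on words for this context.
        return f"trigger:{word}" if word in self.active_words else None

gate = KeywordGate()
gate.on_context_change("camera")
print(gate.on_spotted("cheese"))         # trigger:cheese
print(gate.on_spotted("hey assistant"))  # None (not a camera word)
```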
So I’m curious about what Google, Microsoft, and Apple will do to push the boundaries of voice control further. Clearly all 3 like this “sometimes always on” approach, as they don’t appear to be offering the low power options that Motorola has enabled. At Apple’s WWDC there wasn’t much talk about Siri, but what they did say seemed quite similar to what Sensory and Vlingo did together 4 years ago…enable an in car mode that can be triggered by “Hey Siri” when the phone is plugged in and charging.
I don’t think that will be all…I’m looking forward to seeing what’s really in store for Siri. They have hired a lot of smart people, and I know something good is coming that will make me go back to the iPhone, but for now it’s Moto and Samsung for me!
May 7, 2014
If you read through the biometrics literature you will see a general security based ranking of biometric techniques starting with retinal scans as the most secure, followed by iris, hand geometry and fingerprint, voice, face recognition, and then a variety of behavioral characteristics.
The problem is that these studies have more to do with "in theory" than "in practice" on a mobile phone, but they nevertheless mislead many companies into thinking that a single biometric can provide the results required. This is really not the case in practice. Most companies will require that False Accepts (the error caused by the wrong person or thing getting in) and False Rejects (the error caused by the right person not getting in) be so low that the rate where these two are equal (the equal error rate, or EER) would be well under 1% across all conditions. Here's why the studies don't reflect the real world of a mobile phone user:
A great case in point is the fingerprint readers now deployed by Apple and Samsung. These are extremely expensive devices, and the literature would make one think that they are highly accurate, but Apple doesn’t have the confidence to allow them to be used in the iTunes store for ID, and San Jose Mercury News columnist Troy Wolverton says:
“I’ve not been terribly happy with the fingerprint reader on my iPhone, but it puts the one on the S5 to shame. Samsung’s fingerprint sensor failed repeatedly. At best, I would get it to recognize my print on the second try. But quite often, it would fail so many times in a row that I’d be prompted to enter my password instead. I ended up turning it off because it was so unreliable (full article).”
There is a solution to this problem…utilize sensors already on the phone to minimize cost, and deploy a biometric chain combining face verification, voice verification, and other techniques that can be easily implemented in a user-friendly manner. The combined usage creates a very low equal error rate and becomes "immune" to conditions and compliance issues by having a series of biometric and other secure backup systems.
Sensory has an approach we call SMART (Sensory Methodology for Adaptive Recognition Thresholding). It looks at environmental and usage conditions and intelligently deploys thresholds across a multitude of biometric technologies to yield a highly accurate solution that is easy to use and fast in responding, yet robust to environmental and usage variations, AND uses existing hardware to keep costs low.
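To illustrate the general idea of adaptive thresholding over multiple biometrics, here is a hedged sketch. This is NOT Sensory's actual SMART algorithm (which is proprietary); the weights, thresholds, and condition signals are invented purely to show the concept of trusting each modality according to its capture conditions:

```python
# Hypothetical sketch of adaptive fusion of two biometrics: weight each
# modality by how trustworthy its capture conditions are, and demand a
# higher fused score when the transaction is more sensitive.

def fuse(face_score, voice_score, noise_level, light_level):
    """Combine modality scores, weighted by condition quality (0..1)."""
    voice_weight = 1.0 - noise_level  # noisy room -> trust the voice less
    face_weight  = light_level        # dark room  -> trust the face less
    total = face_weight + voice_weight
    if total == 0:
        return 0.0                    # no usable signal: fall back (e.g. PIN)
    return (face_weight * face_score + voice_weight * voice_score) / total

def accept(face_score, voice_score, noise_level, light_level, security="normal"):
    # Raise the bar for sensitive actions (e.g. a purchase vs. an unlock).
    threshold = 0.9 if security == "high" else 0.6
    return fuse(face_score, voice_score, noise_level, light_level) >= threshold

# A strong face match in good light carries a weak, noisy voice sample...
print(accept(0.9, 0.4, noise_level=0.9, light_level=1.0))                   # True
# ...but the same scores are not enough for a high-security action.
print(accept(0.9, 0.4, noise_level=0.9, light_level=1.0, security="high"))  # False
```

The design point is the one the post argues: no single modality has to carry a sub-1% EER on its own if the chain can lean on whichever sensor's conditions are currently good.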
April 25, 2014
It’s not often that I rave about articles I read, but Ian Mansfield of Cellular News hit the nail on the head with this article.
Not only is it a well written and concise article but it's chock full of recent data (primarily from JD Power research), and most importantly it's data that tells a very interesting story that nicely aligns with Sensory's strategy in mobile. So, thanks Ian, for getting me off my butt to start blogging again!
A few key points from the article:
Now, let me dive one step deeper into the problem, and explore whether customer satisfaction can be achieved with minimal impact on cost:
Seamless voice control is here, soon every phone will have it, and it doesn't add any hardware cost. Sensory introduced it with our TrulyHandsfree technology that allows users to just start talking, and our "trigger to search" technology has been nicely deployed by companies like Motorola, which pioneered this "seamless voice control" in many of their recent releases. Seamless voice control really doesn't add much cost, and with excellent engines from Google, Apple, and Microsoft sitting in the clouds, it can and will be nicely implemented without affecting handset pricing.
Sensors are a different story. By their nature they will be embedded into the phones and will increase cost. Some "sensors" in the broadest sense of the term are no-brainers and necessities; for example, microphones and cameras are must-haves, and GPS and the six-axis sensors combining accelerometers and gyroscopes are arguably must-haves as well. Magnetometers and barometers are getting increasingly common, and to differentiate further, leading manufacturers are embedding things like heartbeat monitors; stereo 3D cameras are just around the corner. To address the desire for biometric security, Samsung and Apple have embedded fingerprint sensors in the two bestselling phones in the world!
The problem is that all these sensors add cost, and in particular the fingerprint sensors are the most expensive and can add $5-$15 to the cost of goods. It's kind of ironic that after spending all that money on biometric security, Apple doesn't even allow the sensor as a security measure for purchasing on iTunes. And both Samsung and Apple have been chastised for fingerprint sensors that can be cracked with gummy bears or glue!
A much more accurate and cost-effective solution can be achieved for biometrics by using the EXISTING sensors on the phones rather than adding special-purpose biometric sensors. In particular, the "must have" sensors like microphones, cameras, and six-axis sensors can create a more secure environment that is just as seamless but much more difficult to crack. I'll talk more about that in my next blog.
August 5, 2013
I often get the question, “If Android and Qualcomm offer voice activation for free, why would anyone license from Sensory?” While I’m not sure about Android and Qualcomm’s business models, I do know that decisions are based on accuracy, total added cost (royalties plus hardware requirements to run), power consumption, support, and other variables. Sensory seems to be consistently winning the shootouts it enters for embedded voice control. Some approaches that appear lower cost require a lot more memory or MIPS, driving up total cost and power consumption.
It’s interesting to note that companies like Nuance have a similar challenge on the server side where Google and Microsoft “give it away”. Because Google’s engine is so good it creates a high hurdle for Nuance. I’d guess Google’s rapid progress helps Nuance with their licensing of Apple, but may have made it more challenging to license Samsung. Samsung actually licensed Vlingo AND Nuance AND Sensory, then Nuance bought Vlingo.
Why doesn't Samsung use Google recognition if it's free? On the server it's not power consumption affecting decisions, but cost, quality, and in this case CONTROL. On the cost side it could be that Samsung MAKES more money by using Nuance in some sort of ad revenue kickbacks, which I'd guess Google doesn't allow. This is of course just hypothesizing. I don't really know, and if I did know I couldn't say. The control issue is big too, as companies like Sensory and Nuance will sell to everyone and in that sense offer platform independence and more control. Working with a Microsoft or Google engine forces an investment in a specific platform implementation, and therefore less flexibility to have a uniform cross-platform solution.
June 17, 2011
That’s what America’s most charismatic President used to say! I didn’t necessarily agree with Reagan’s politics, but I sure did like his presentation. Nuance’s Paul Ricci is kind of the inverse of that; a lot of people don’t like him, but it’s hard to argue with his politics (although I will later in this blog…)
I’ve never met Ricci. I’ve known a lot of people who have worked for him, with him, and against him. Everybody agrees he’s a tough guy, and I think most would also use words like ruthless and smart. A lot of people might even call him an asshole, and whether true or not, I don’t think he cares about that. He’s a competitive strategy gameplay kind of guy, and he’s done pretty well. However, he has a HUGE challenge being up against the likes of Google, Microsoft, and eventually Apple (let alone the smart little guys like Vlingo, Yap, Loquendo, etc.). But I digress…
I started this blog thinking about Nuance's recent acquisition of SVOX. And I wanted to congratulate Nuance and Ricci for ACQUIRING SVOX WITHOUT SUING THEM. If I look back a ways (and I can look back VERY FAR!), Nuance (or the company formerly known as Lernout & Hauspie and then ScanSoft) has at least 4 embedded speech recognition companies wrapped into it over the years. In rough chronological order: Voice Control Systems (VCS was probably the FIRST embedded speech company and the first and only embedded group to go public), the Philips Embedded Speech Division (I think Philips had acquired VCS for around $50M), Advanced Recognition Technologies, and Voice Signal Technologies. I believe Ricci was at the helm during the Philips embedded acquisition (this was the one closer to 2000, as opposed to the Philips Medical deal a few years ago), ART, and VST. Interestingly, 2 of these 3 were lawsuit acquisitions. There are probably some inside stories about SVOX that I don't know (e.g. threats of lawsuits??), but it appears that Nuance's acquisitions of embedded companies are now down to 50% lawsuit driven. Thanks, Paul, you're moving in the right direction! ;-)
OK, so what's wrong with suing the companies you want to acquire? It probably does lower their price and reduce competitive bidding. Setting aside the legal and moral issues, there is one huge issue that's clear: if you want to hold onto your star employees and technologists, you need to treat them well. Everyone understands who the "stars" are – they are the 10% of the workforce that contribute 90% of the innovation. They are not going to stick around unless they are treated right, and starting off a relationship by calling them thieves is not a good way to court a long-term relationship.
For example, there's been a lot of press lately about the Vlingo/Nuance situation and how Ricci offered the top 3 employee/founders $5M each to sell Vlingo (plus a bundle of money for Vlingo!). Well, Mike Phillips used to be Nuance's CTO (through the acquisition of SpeechWorks)…so wouldn't it have been more valuable to KEEP Mike there than to BUY him back? The "other" Mike…Mike Cohen is Google's head of speech. He FOUNDED Nuance (well, the company formerly known as Nuance!) and left to join Google, and of course this caused a lawsuit…think either of the Mikes (two of the smartest speech technologists in the industry) would ever go back to Nuance? Google has managed to hold onto Cohen, so it's not just an issue of the best people leaving big companies because "little companies innovate." I've also seen the recent rumor mill about Nuance's Head of Smart Phone Architecture leaving for Apple…
So it’s the personnel and customer thing that Nuance is missing out on in their competitive gameplay strategy, and my hope is that SVOX’s acquisition represents a significant change in how Nuance does business!
As a point in contrast, Sensory has acquired only one company in our history – Fluent Speech Technologies (and no, we didn’t sue them first.) This was a group that spun out of the former Oregon Graduate Institute back in the 1990’s. We saw a demo of theirs back in 1997-1998, and thought the technology was great. They offered to sell us the speech recognition technology (not the company), so they could focus on animation opportunities, but we had NO INTEREST in that. We wanted the people that made the technology, not the technology itself. That’s how our Oregon office was born; we acquired the company with the people. The office is now about as big as our headquarters (and some of our people in Silicon Valley have even moved up there!) By the way, ALL the technologists that came with that acquisition are still with us after 12 years, and we’ve kept a very friendly relationship with the former OGI as well.
Time for a breather…Yeah, I do long blogs….if you see a short one, which might start appearing, it’s probably a “ghostwriter” helping me out…. ;-)
So let’s look at Nuance’s acquisition of SVOX. Why did Nuance acquire them?
Anyways…I suspect the acquisition was a good deal for Nuance and its investors, and probably a GREAT deal for SVOX and its investors. Nuance’s market price didn’t seem to move much, but maybe it will once the price is disclosed. I commend and encourage Nuance to cut the lawsuits…one of them could bite back a lot worse than the pain of losing employees!
April 21, 2011
I had an interesting email conversation with a blog reader last month, and I thought I’d share some of the dialog. He is an equity analyst (who wishes to remain anonymous) that follows some companies in the speech industry. He emailed me saying:
“I came across your blog some time ago and have been reading it since with great interest. A topic of particular interest to me has been your periodic comments about how Apple has lagged the investments made by Google in speech recognition technology, opting instead to lean on Nuance. I was also struck by your observation that big companies, such as Google, have a history of licensing Nuance technologies before eventually taking those capabilities in-house.”
This makes me feel the need to clarify something…Nuance has great technologies, period. When companies feel the need to bring the technology “in-house”, it’s not driven by a failing of Nuance, but simply the fact that the USER EXPERIENCE IS SO CRITICAL to the success of consumer products. It’s difficult for big companies like Apple, Google, Microsoft, HP and others that depend heavily upon positive consumer experiences to farm out the technology for such a critical component.
The conversation turned to Apple, and the equity manager asked about the all too common question of whether Apple might acquire Nuance. Here’s, roughly, how the conversation went:
Analyst: What is your current view on Apple’s efforts in this space? As a company they seem to take great pride in controlling the user experience and that extends to how they think about key technologies (witness the Flash vs. HTML 5 spat, for example). It makes me wonder if Apple would be satisfied relying on Nuance for such a visible and important capability or whether they’d feel the need to also bring it in-house.
Todd: Apple can definitely afford Nuance. In fact, Apple probably makes enough profit in a good quarter to buy Nuance outright. Nevertheless, it would be a BIG price tag, and not in line with Apple’s traditional acquisition strategy. I wouldn’t rule it out, but I wouldn’t say they “need” Nuance, either, but they do need to do something, and they know it. Apple has been posting job requisitions this year in the area of speech recognition, so they definitely want to bring more of the technology in-house. My guess is they’ll do some M&A in the speech technology area as well. Google and Microsoft have combined aggressive hiring with M&A, so it seems likely that Apple will go beyond the SIRI acquisition (which added an AI layer on top of Nuance) and acquire more core speech technology expertise.
Analyst: I agree with you that Apple makes/has enough cash to acquire Nuance, but that it would be out of character for Apple to do so. Where I’m most interested is whether there are meaningful technical/architectural reasons why Apple must partner with Nuance for SR, or if the gap between Nuance and these smaller players is narrow enough that Apple would acquire or partner more closely with one of the small guys in order to maintain more control over the technology. Many people seem to think that an SR acquisition would have to be of Nuance, but I’ve been told that there are many quality SR start-ups. If you had to bet, do you think that Apple needs the 800-pound gorilla Nuance in order to do a good job in SR, or would one of these smaller companies give Apple a sufficient base upon which to build out a solution?
Todd: I’m confident Apple will eventually own it. I’d say the odds of them buying Nuance though are quite low (10-30% as a wild guess). There’s no technical reason why they can’t use another technology, but the 3 best reasons they’d acquire Nuance are:
Apple's in-house teams are quite familiar with the Nuance engines, as they have already implemented them in some products. Apple is engaged in a lot of patent fights, and Nuance has the best portfolio of speech patents in the world – that's a really valuable asset that the Googles and Microsofts of the world would probably fight over! Of course, for the cost of Nuance, someone could probably buy all of the other TTS and SR tech companies in the world! ;-)
Analyst: Apple really has a phobia about adding third-party software to their products. No Mosaic core in their browser, no audio compression codecs from Dolby or DTS, no Flash from Adobe…. They acquired two microprocessor design companies to create a proprietary stack on ARM chips rather than using broadly available chipsets from Qualcomm or Broadcom. Now comes the question of what to do with SR technology….
Todd: It will be interesting to see how this all unfolds. I suspect a lot of other large companies will want to get into the game as well. It could be that the cloud-based solutions for TTS and SR become generic and replaceable enough that there isn’t a need to bring them “in-house”. Of course, Sensory is hoping and betting on the need for the Client/Server approaches, where an embedded solution (like our Truly Handsfree Triggers) nicely complement the cloud-based offerings.