HEAR ME - Speech Blog
Sensory Brings Low-Power Wake Words to Mobile Apps

April 3, 2018

Santa Clara, Calif., April 3, 2018 – Sensory's TrulyHandsfree speech recognition has been re-engineered to run at ultra-low power in Android and iOS mobile applications without special hardware.

Sensory, a Silicon Valley-based company focused on improving the user experience and security of consumer electronics through state-of-the-art embedded AI technologies, today announced that it has made a significant breakthrough in running its TrulyHandsfree™ wake word and speech recognition AI engine directly in Android and iOS smartphone applications at low power. As a software component, TrulyHandsfree can be added to any app without requiring special-purpose hardware or DSPs to achieve computing efficiency.

Introduced in 2009, TrulyHandsfree paved the way for the hands-free operation we have come to expect from today's always-listening personal assistant solutions. When released, it revolutionized voice user interfaces by offering the first commercially successful always-listening, low-power wake word. With each succeeding generation, TrulyHandsfree has raised the benchmark for always-listening speech recognition by increasing accuracy, lowering power consumption, and running on an ever-wider range of hardware platforms.

TrulyHandsfree has seen large commercial success by running on special purpose hardware for low-power operation. Companies like Avnera, Cirrus Logic, Conexant/Synaptics, CSR/Qualcomm, DSP Group, Knowles, QuickLogic, Realtek, XMOS and many others have penetrated the market for voice assistants using Sensory TrulyHandsfree technology. This specialized hardware approach has worked well for Sensory’s customers like Samsung, Huawei, LG, Motorola and other Android mobile providers who design their own phones and wearables with their choice of hardware.

Until now, always-listening wake word solutions for apps required too much power to be practical, especially for apps that remain open and active in the background. Additionally, having to maintain the same user experience across operating systems, and across all different devices added an extra layer of complexity. However, this isn’t the case anymore. TrulyHandsfree streamlines the implementation and coding process, allowing developers to quickly and easily deploy apps with power-efficient always-listening wake word and command set capabilities across all popular mobile and PC operating systems.

In 2017, Sensory began investigating Qualcomm and ARM processors as more standard cross-platform solutions for lowering the power consumption of wake words across mobile platforms. Sensory came up with a series of independent actions that, when combined, could lower power consumption on a mobile app using a wake word by more than 80%, or a reduction of approximately 200mAh in a 12-hour day. That enables a mobile app wake word to consume approximately one percent of the smartphone battery in 12 hours. To achieve this outstanding reduction in power consumption, Sensory utilized an approach known as "little-big," which uses a very small model to identify an interesting event and then revalidates the event with a large model (both are processed on the application processor). This method provides the optimal user experience of the big model only when needed, while maintaining the power consumption of the little model most of the time. Frame stacking further cuts the MIPS of certain wake word model processing functions in half with negligible accuracy impact. Additionally, multithreading has been deployed to allow more efficient processing of speech recognition and can significantly improve the speed of execution for larger wake word models.
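
To make the "little-big" flow concrete, here is a minimal sketch in Python. The thresholds, window size, and scoring functions are illustrative placeholders, not Sensory's actual models or API:

import numpy as np

# Illustrative "little-big" wake word gate. The thresholds, window size,
# and scoring functions are stand-ins, not Sensory's proprietary models.
LITTLE_THRESHOLD = 0.5   # permissive: the cheap model flags candidate events
BIG_THRESHOLD = 0.9      # strict: the expensive model confirms them
WINDOW_FRAMES = 50       # roughly one second of 20 ms audio frames

def little_score(frame):
    """Stand-in for a tiny always-on model: a cheap per-frame heuristic."""
    return float(np.clip(np.abs(frame).mean() * 10, 0.0, 1.0))

def big_score(window):
    """Stand-in for a large wake word model, run only on candidates."""
    return float(np.clip(np.abs(window).mean() * 12, 0.0, 1.0))

def process_stream(frames):
    """Run the little model on every frame; invoke the big model rarely.

    Most of the time only the cheap model runs, so average power stays low;
    the big model's accuracy is paid for only on candidate events.
    """
    buffer = []
    for frame in frames:
        buffer = (buffer + [frame])[-WINDOW_FRAMES:]
        if little_score(frame) > LITTLE_THRESHOLD:
            if big_score(np.concatenate(buffer)) > BIG_THRESHOLD:
                yield "wake_word_detected"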

“Hands-free operation for voice control has become the norm, and application developers are now looking to create hands-free wake words for their own apps,” said Todd Mozer, CEO of Sensory. “For example, we recently helped Google’s Waze accept hands-free voice commands by supplying them with Sensory’s ‘OK Waze’ wake word that runs when the app is open. With previous versions of TrulyHandsfree, having our always-on wake word engine listening for the OK Waze wake word during a short trip would have had minimal effect on a smartphone’s battery, but for longer trips a more efficient system was desired – so we created it. Sensory is excited to now offer TrulyHandsfree with excellent low-power performance to all app developers!”

TrulyHandsfree is the most widely deployed embedded speech recognition engine in the world, having enabled a hands-free voice user experience on more than two billion devices from leading brands worldwide. TrulyHandsfree offers support for every voice UI application with several types of wake word options, such as independent fixed wake words, user enrolled fixed wake words, and user defined wake words. Sensory offers off-the-shelf wake word models for all major Assistant services, including Alexa, Hey Siri, OK Google, Hey Cortana, as well as wake word models for third-party devices that support cloud AI systems from Baidu, Alibaba and Tencent. Sensory can also combine multiple wake words into one solution and is the only supplier to have deployed numerous cross-assistant wake word solutions to the market.

Sensory’s TrulyHandsfree currently supports US English, UK English, Australian English, Indian English, Arabic, Dutch, French (EU and Canadian), German, Italian, Japanese, Korean, Mandarin, Portuguese (EU and Brazil), Russian, Spanish (EU, Latin America and US), Swedish and Turkish. An SDK for TrulyHandsfree is available for Android, iOS, Linux, Mac OS, QNX and Windows. Sensory provides developer support for cloud service interfaces on Android, iOS, Linux, Mac OS, Windows as well as support for dozens of proprietary DSPs, microcontrollers, smart microphones and other low-power embedded devices. SDK updates taking advantage of lower power TrulyHandsfree are now being rolled out for Android and iOS in Q2 2018.

For more information about this announcement, Sensory or its technologies, please contact sales@sensory.com; Press inquiries: press@sensory.com.

About Sensory
Sensory Inc. creates a safer and superior UX through vision and voice technologies. Sensory’s technologies are widely deployed in consumer electronics applications including mobile phones, automotive, wearables, toys, IoT and various home electronics. Sensory’s product line includes TrulyHandsfree voice control, TrulySecure biometric authentication, and TrulyNatural large vocabulary natural language embedded speech recognition. Sensory’s technologies have shipped in over a billion units of leading consumer products. 

TrulyHandsfree is a trademark of Sensory Inc.

Smart speakers coming from all over

October 12, 2017

Amazon, Google, Sonos, and LINE all introduced smart speakers within a few weeks of each other. Here's my quick take and commentary on those announcements.

Amazon now has the new Echo, the old Echo, the Echo Plus, Spot, Dot, Show, and Look. The company is improving quality, adding incremental features, lowering cost, and seemingly expanding its leadership position. They make great products for consumers, have a very strong ecosystem, and their products are tough to compete with, both for rivals and for the many platform partners that use Alexa. It seems their branding strategy is to use short three- or four-letter names that have Os. The biggest thing that was missing was speaker identification to know who's talking to it. Interestingly, Amazon just added that capability.

Google execs wore black shirts and jeans in a very ironic-seeming Steve Jobs fashion. They attacked the Amazon Dot with their Mini, and announced the Max to compete with the quality expectations of Sonos and Apple. I didn’t find much innovation in the product line or in their dress, but I’d still rank the Google Assistant as the most capable assistant I’ve used. Of course, Google got caught stealing data, so it makes sense they have more knowledge about us and can make a better assistant.

Sonos invented the Wi-Fi speaker market and has always been known for quality. They announced the Sonos One at a surprisingly aggressive $199 price point. Their unique play is to support Alexa, Assistant, and Siri, starting first with Alexa. This should put price pressure on Apple's planned $349 HomePod, but my guess is that Apple will aggressively sell the HomePod into its captive and demographically wealthy market before it allows Sonos to incorporate Siri. Like Apple, Sonos will have a nice edge in being able to sell into its existing customer base, who will certainly want the added convenience and capability of voice control with their choice of assistant.

American readers might not be familiar with LINE, but the company offers a hugely popular communications app that's been downloaded by about a billion people. They're big in Japan and owned by Naver, an even bigger Korean company that's also working on a smart speaker.

Most notable about LINE (besides the unique-looking speaker that resembles a cone with the top cut off) is that they appear to be not only beating Amazon, Google, Apple, and Sonos to Japan, but also getting there before Japanese giants like Docomo, Sony, Sharp, and Softbank. And all of these companies are making smart speakers.

Then, there are the Chinese giants who are all making smart speakers, and the old-school speaker companies who are trying to get into the game. It’s going to be crowded very quickly, and I’m very excited to see quality going up and costs staying low.

Sensory Demos Awesome AI Mashup at Finovate!

September 28, 2017

Finovate is one of those shows where you get up on stage and give a short intro and live demo. They are selective about who they allow to present, and many applicants are rejected. Sensory demonstrated some really cutting-edge, perhaps bleeding-edge, stuff by combining animated talking avatars with text-to-speech, lip movement synchronization, natural language speech recognition, and face and voice biometrics. I don't know of any company ever combining so many AI technologies into a single product or demo!

Speech recognition has a long history of failing on stage, and one of the ways Sensory has always differentiated itself is that our demos always work! And all our AI technologies worked here too! Even with bright backlighting, our TrulySecure face recognition was so fast and accurate some missed it. With the microphones and echoes in the large room, our TrulyNatural speech recognition was perfect! That said, we did have a user error… before Jeff and I got on stage, he put his demo phone in DND mode, which cut our audio output – but we quickly recovered from that mishap.


Alexa on batteries: a life-changing door just opened

September 25, 2017

Several hundred articles have been written about Amazon's new moves into smart glasses with the Alexa assistant. And it's not just TechCrunch, Gizmodo, The Verge, Engadget, and all the consumer tech pubs doing the writing. It's also places like CNBC, USA Today, Fox News, Forbes, and many others.

I’ve read a dozen or more and they all say similar things about Amazon (difficulties in phone hardware), Google (failure in Glass), bone conduction mics, mobility for Alexa, strategy to get Alexa Everywhere, etc. But something big got lost in the shuffle.

Here's your clue – the day before the Alexa smart glasses were announced, Amazon released details of a Fire Tablet upgrade, with one of the key features being a way to make Alexa hands-free. That's right: in both the glasses and the Fire Tablet, we have Alexa implementations running on batteries.

This is a REALLY big deal! This means that Amazon has already caught up to Google in being able to implement low-power devices with its handsfree Alexa Assistant. Is this important? Yes, it is. It may be the most important battle to be waged in the Assistant wars. This is because the assistant we want is the invisible assistant that’s embedded into our bodies and our clothing. This assistant would be so small that it enables a seamless experience to augment our intelligence and capabilities without anyone even knowing. This assistant has to be low power, and handsfree Alexa is now enabled in extremely power sensitive modes. Kudos to Amazon!

Apple erred on facial recognition

September 15, 2017

On the same day that Apple rolled out the iPhone X on the coolest stage of the coolest corporate campus in the world, Sensory gave a demo of an interactive talking and listening avatar that uses a biometric ID to know who's talking to it. In Trump metrics, the event I attended had a few more attendees than Apple's.

Interestingly, Sensory's face ID worked flawlessly, and Apple's failed. Sensory used a traditional camera with convolutional neural networks and deep learning anti-spoofing models. Apple used a 3D camera.

There are many theories about what happened with FaceID at Apple. Let’s discuss what failure even means and the effects of 2D versus 3D cameras. There are basically three classes of failure: accuracy, spoofability, and user experience. It’s important to understand the differences between them.

False biometrics
Accuracy of biometrics is usually measured in equal error rates or false accepts (FA) and false rejects (FR). This is where Apple says it went from 1 in 50,000 with fingerprint recognition to 1 in 1,000,000 with FaceID. Those are FA rates, and they move inversely with FR – Apple doesn’t mention FR.

It's easy to reach one in a million or one in a billion FAs by making it FR all of the time. For example, a rock will never respond to the wrong person… it also won't respond to the right person! This is where Apple failed. They might have had amazing false accept rates, but they hit two false rejects on stage!
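
To make the tradeoff concrete, here is a tiny synthetic experiment in Python; the score distributions are invented purely for illustration, not real biometric data:

import numpy as np

rng = np.random.default_rng(0)

# Invented match scores: genuine attempts score higher on average than
# impostor attempts, with some overlap between the two distributions.
genuine = rng.normal(0.8, 0.1, 100_000)   # the right person
impostor = rng.normal(0.3, 0.1, 100_000)  # random other people

for threshold in (0.5, 0.6, 0.7, 0.8):
    fa = (impostor >= threshold).mean()  # false accepts: impostors let in
    fr = (genuine < threshold).mean()    # false rejects: owner locked out
    print(f"threshold={threshold:.1f}  FA={fa:.2e}  FR={fr:.2%}")

# Raising the threshold drives FA toward zero while FR climbs; the "rock"
# extreme (a threshold above every score) never false accepts, and never
# accepts the right person either.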

I believe that there is too much emphasis placed on FA. The presumption is random users trying to break in, and 1 in 50,000 seems fine. The break-in issue typically relates to spoofability, which needs to be thought of in a different way – it’s not a random face, it’s a fake face of you.

Every biometric that gets introduced gets spoofed. Gummy bears, cameras, glue, and tape have all been used to spoof fingerprints. Photos, masks, and videos have been used to spoof faces.

To prevent this, Sensory built anti-spoof models that reduce the probability of spoofing. 3D cameras also make it easier to reject spoofs, and Apple moved in the right direction here. But the real solution is to layer biometrics, using additional layers when more security is needed.

Apple misfires on UX?
Finally, there's an inverse relationship between user experience and security. Amazingly, this is where Apple got it wrong. Think about why people don't like fingerprint sensors. It's not because too many strangers get in; it's because we have to do unnatural motions, multiple times, and often get rejected when our hands are wet, greasy, or dirty.

Apple set the FA bar so high on FaceID that it hurt the consumer experience by rejecting too often, which is what we saw on stage. But there's more to it in the tradeoffs.

The easiest way to prevent spoofing is to get the user to do unnatural things, live and randomly. Blinking was a less intrusive version that Google and others have tried, but a photo with the eyes cut out could spoof it.

Having people turn their face, widen their nostrils, or look in varying directions might help prevent spoofing, but it also hurts the user experience. The trick is to get more intrusive only when the security needs demand it. Training the device is also part of the user experience.
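
As a sketch of that idea, here is how security-tiered, randomized liveness challenges could look in code; the challenge list and level logic are hypothetical, not any shipping product's behavior:

import random

# Hypothetical security-tiered liveness challenges: a photo or video can't
# easily satisfy a randomly chosen, just-issued action.
CHALLENGES = ["blink twice", "turn head left", "turn head right", "smile"]

def liveness_challenges(security_level):
    """Ask for more (and random) live actions only as security demands."""
    if security_level <= 1:
        return []  # low risk: passive face match only, best user experience
    return random.sample(CHALLENGES, k=min(security_level, len(CHALLENGES)))

print(liveness_challenges(1))  # []: nothing unnatural for everyday unlocks
print(liveness_challenges(3))  # e.g. ['smile', 'blink twice', 'turn head left']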

I Nailed It!

August 30, 2017

A few days ago I wrote a blog that talked about assistants and wake words and I said:

“We’ll start seeing products that combine multiple assistants into one product. This could create some strange and interesting bedfellows.”

Interesting that this was just announced:

http://fortune.com/2017/08/30/amazon-alexa-microsoft-cortana-siri/

Here’s another prediction for you…

All assistants will start knowing who is talking to them. They will hear your voice and look at your face and know who you are. They will bring you the things you want (e.g., play my favorite songs), and only allow you to conduct transactions you are qualified for (e.g., order more black licorice). Today, some training is required, but in the near future they will just learn who is who, much like a newborn quickly learns the family members without any formal training.

Here’s what’s next for always listening devices

August 28, 2017

Ten years ago, I tried to explain to friends and family that my company Sensory was working on a solution that would allow IoT devices to always be “on” and listening for a key wake up word without “false firing” and doing it at ultra-low power and with very little processing power. Generally, the response was “Huh?”

Today, I say, "Just like Hey Siri, OK Google, Alexa, Hey Cortana, and so on." Now everybody gets it, and the technology is mainstream. In fact, next year, Sensory will have technology embedded in IoT devices that listens for all those things (and more). But that's not good enough.

Here are some of the things that will be appearing over the next 10 (or more) years to make always listening better and different:

  1. Assistants that see. I hate it when I say OK Google to my Home and my phone responds. Or worse, when a device false fires and I've left the volume really loud. Many of these devices will be getting vision in the coming years (Amazon's Echo Look already does), and their ability to see which device I'm talking to will make it easier for them to respond from the correct device.
  2. No wake words. In a room with multiple people, we sometimes direct questions by saying the name of the person we want to talk to first. But we don’t do this when we are having a dialog back and forth, and we certainly don’t do it if there’s just one person in the room. Our Assistants should respond to questions without having their names said.
  3. Multiple assistants on single devices. Why can't I have a device that I can shop on with Alexa, search info with Google, or control my appliances with Bixby? Amazon should be fine with that, but Google wouldn't be. Certain cloud assistants will allow it and others won't, and we'll start seeing products that combine multiple assistants into one product (see the routing sketch after this list). This could create some strange and interesting bedfellows.
  4. Portable assistants. I unplug my Echo and move it from room to room when I’m listening to music. I already have two Echos and one Home (and a few other Alexa devices) and I don’t want to buy one for every room. Why can’t I throw Google Home in my backpack for music while biking? What about an always on wearable assistant? This will require ultra-low power wake words that perform great.
  5. Privacy controls. The intelligent assistants' capabilities are directly proportional to the privacy we're willing to give up. The better they know us, the better they can get us what we want. Today, we just sign our privacy away. In the future, there likely will be settings that we can control.
  6. Embedded always on assistants. Power consumption should be low enough that assistants can be embedded into our bodies for augmented intelligence, memory, and of course medical checkups. Within 20 years, our bodies will become enhanced with sensors (microphones, cameras, etc.), memory, and processors that augment our personal capabilities and are directly wired to our brains.
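
Here is the routing sketch promised in item 3: a minimal illustration of one device hosting several wake words and sending each to its own cloud assistant. The wake words, endpoints, and function names are invented for the example (the .example URLs are deliberately not real):

# Hypothetical multi-assistant routing table for a single device.
ASSISTANT_ROUTES = {
    "alexa": "https://avs.amazon.example/recognize",         # shopping
    "ok google": "https://assistant.google.example/query",   # search
    "hi bixby": "https://bixby.samsung.example/command",     # appliances
}

def route_utterance(wake_word, audio):
    """Send the audio following a wake word to the matching assistant."""
    endpoint = ASSISTANT_ROUTES.get(wake_word.lower())
    if endpoint is None:
        raise ValueError(f"no assistant registered for {wake_word!r}")
    # A real product would stream the audio to the cloud endpoint; here we
    # just report where the request would go.
    return f"routing {len(audio)} bytes to {endpoint}"

print(route_utterance("Alexa", b"\x00" * 3200))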

How Hollywood gets biometrics wrong (and what it gets right)

June 26, 2017

Setting aside the question of whether rogue robots will create a dystopian future, there is one area that movie depictions of artificial intelligence (AI) all seem to coalesce on: biometrics will take over for keys and passwords. There are over 200 movies that show the use of biometrics – here's a list of 184 of them, and here's a compilation of clips from several dozen movies.

Whether it's fingerprint, voiceprint, iris, retina, face, or other biometrics, there always seems to be some sort of physical scanner in Hollywood depictions of biometrics in action. Characters have to hold their face or hand up to a device, and the device often shines a laser and makes a noise. When they speak, a pass phrase like "My voice is my password" is typically required. In other words, the biometrics aren't particularly fast or easy. The devices don't just know who people are; they need to be queried, and some sort of physical analysis needs to happen after the query.

That’s not how it’s going to play out. In fact, it’s not going to be one biometric that gets a person entrance. It will be a layering of biometrics. They won’t all happen right when you want to open a door. Some will follow you around, maintaining an ongoing assessment of who you are. Other biometrics will be seamlessly assessed from cameras or other sensors in your environment, and still other biometric elements can be added by pinging your phone and asking the phone’s opinion on who you are.

One thing Hollywood got right, though, is how spoof-able biometrics tend to be, whether it’s by removing body parts, taking pictures or videos, or capturing a fingerprint with glue or gummy bears. In one scene in the movie The 6th Day, Adam Gibson, played by Arnold Schwarzenegger, is prevented from entering a restricted area when a scanner rejects his thumbprint. When a security guard approaches asking if he can help, Schwarzenegger holds the guard at gunpoint and says, “Yeah, you can stick your thumb in that.” The guard complies, which gains Schwarzenegger access. Spoofing isn’t necessarily easy – biometric vendors try to make it hard – but most single biometrics are spoof-able, and the movies we watch certainly convey that.

We will see more of these biometric implementations with a mixture of face, voice, and behavioral biometrics combined with hand, eye, or other scans that are seamlessly taken and associated with a given person. This approach substantially increases the difficulty of spoofing, yet it can be done in a completely unintrusive manner without wasting time. Of course, in a movie it would look like people gain access without doing anything special, and that may take away from some of the "cool factor" in watching biometrics work.

Staying Ahead with Advanced AI on Devices

June 8, 2017

Since the beginning, Sensory has been a pioneer in advancing AI technologies for consumer electronics. Not only did Sensory implement the first commercially successful speech recognition chip, but we were also first to bring biometrics to low-cost chips and speech recognition to Bluetooth devices. Perhaps what I am most proud of, though, is that more than a decade ago Sensory introduced its TrulyHandsfree technology and showed the world that wakeup words could really work in real devices, getting around the false accept, false reject, and power consumption issues that had plagued the industry. No longer did speech recognition devices require button presses… and it caught on quickly!

Let me go on boasting because I think Sensory has a few more claims to fame… Do you think Apple developed the first “Hey Siri” wake word? Did Google develop the first “OK Google” wake word? What about “Hey Cortana”? I believe Sensory developed these initial wake words, some as demos and some shipped in real products (like the Motorola MotoX smartphone and certain glasses). Even third-party Alexa and Cortana products today are running Sensory technology to wake up the Alexa cloud service.

Sensory's roots are in neural nets and machine learning. I know everyone does that today, but it was quite out of favor when Sensory used machine learning to create a neural net speech recognition system in the 1990s and 2000s. Today everyone and their brother is doing deep learning (yeah, that's tongue in cheek because my brother is doing it too: http://www.cs.colorado.edu/~mozer/index.php). And a lot of these deep learning companies are huge multi-billion-dollar businesses or extremely well-funded startups.

So, can Sensory stay ahead and continue pioneering AI innovation now that everyone is using machine learning? Of course, the answer is yes!

Sensory is now doing computer vision with convolutional neural nets. We are coming out with deep learning noise models to improve speech recognition performance and accuracy, and are working on small TTS systems using deep learning approaches that help them sound lifelike. And of course, we have efforts in biometrics and natural language that also use deep learning.

We are starting to combine a lot of technologies together to show that embedded systems can be quite powerful. And because we have been around longer and thought through most of these implementations years before others, we have a nice portfolio of over 3 dozen patents covering these embedded AI implementations. Hand in hand with Sensory’s improvements in AI software, companies like ARM, NVidia, Intel, Qualcomm and others are investing and improving upon neural net chips that can perform parallel processing for specialized AI functions, so the world will continue seeing better and better AI offerings on “the edge”.

Curious about the kind of on-device AI we can create when combining a bunch of our technologies together? So were we! That's why we created a demo that showcases Sensory's natural language speech recognition, chatbot, text-to-speech, avatar lip-sync, and animation technologies. It's our goal to integrate biometrics and computer vision into this demo in the months ahead.

Let me know what you think of that! If you are a potential customer and we sign an NDA, we would be happy to send you an APK of this demo so you can try it yourself! For more information about this exciting demo, please check out the formal announcement we made: http://www.prnewswire.com/news-releases/sensory-brings-chatbot-and-avatar-technology-to-consumer-devices-and-apps-300470592.html

What Makes the Latest Version of TrulySecure so Different?

May 17, 2017

A key measure of any biometric system is the inherent accuracy of the matching algorithm. Earlier attempts at face recognition were based on traditional computer vision (CV) techniques. The first attempts involved measuring key distances on the face and comparing those across images, from which the idea of the number of "facial features" associated with an algorithm was born. This method turned out to be very brittle, however, especially as the pose angle or expression varied. The next class of algorithms involved parsing the face into a grid and analyzing each section of the grid individually via standard CV techniques, such as frequency analysis, wavelet transforms, and local binary patterns (LBP). Until recently, these constituted the state of the art in face recognition. Voice recognition has a similar history in its use of traditional signal processing techniques.

Sensory's TrulySecure uses a deep learning approach in our face and voice recognition algorithms. Deep learning (a subset of machine learning) is a modern variant of artificial neural networks, which Sensory has been using since the very beginning in 1994, so we have extensive experience in this area. In just the last few years, deep learning has become the primary technology for many CV applications, especially face recognition. Google, Facebook, and others have recently announced face recognition systems they have developed that outperform humans. This is based on analyzing a data set such as Labeled Faces in the Wild, which has images captured over a very wide-ranging set of conditions, especially larger angles and distances from the face. We've trained our network for the authentication case, which has a more limited range of conditions, using our large data set collected via AppLock and other methods. This allows us to perform better than those algorithms would for this application, while also keeping our size and processing power requirements under control (the Google and Facebook deep learning implementations run on arrays of servers).

One consequence of the deep learning approach is that we don't use a number of points on the face per se. The salient features of a face are compressed down to a set of coefficients, but they do not directly correspond to physical locations or measurements of the face. Rather, these "features" are discovered by the algorithm during the training phase – the model is optimized to reduce face images to a set of coefficients that efficiently separate faces of a particular individual from faces of all others. This is a much more robust way of assessing the face than the traditional methods, and that is why we decided to utilize deep learning as opposed to traditional CV algorithms for face recognition.
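
As a rough illustration of the coefficient idea (not Sensory's network; a fixed random projection stands in for the trained model), verification compares coefficient vectors rather than facial measurements:

import numpy as np

rng = np.random.default_rng(7)
# A trained convolutional network would compute the embedding; a fixed
# random projection stands in for it here, purely for illustration.
PROJECTION = rng.normal(size=(128, 64 * 64))  # 128 coefficients per face

def embed(face_pixels):
    """Compress a 64x64 face image to a unit-length coefficient vector."""
    vec = PROJECTION @ face_pixels.ravel()
    return vec / np.linalg.norm(vec)

def same_person(face_a, face_b, threshold=0.9):
    """Verify by similarity of coefficients, not facial landmarks."""
    similarity = float(embed(face_a) @ embed(face_b))  # cosine similarity
    return similarity >= threshold

enrolled = rng.random((64, 64))
probe = enrolled + rng.normal(0, 0.05, (64, 64))  # same face, new capture
print(same_person(enrolled, probe))  # True: nearby captures map close together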

Sensory has also developed a great deal of expertise in making these deep learning approaches work in limited-memory or limited-processing-power environments (e.g., mobile devices). This combination creates a significant barrier for any competitor trying to switch to a deep learning paradigm. Optimizing neural networks for constrained environments has been part of Sensory's DNA since the very beginning.

One of the most critical elements to creating a successful deep learning based algorithm such as the ones used in TrulySecure is the availability of a large and realistic data set. Sensory has been amassing data from a wide array of real world conditions and devices for the past several years, which has made it possible to train and independently test the TrulySecure system to a high statistical significance, even at extremely low FARs.

It is important to understand how Sensory’s TrulySecure fuses the face and voice biometrics when both are available. We implement two different combination strategies in our technology. In both cases, we compute a combined score that fuses face and voice information (when both are present). Convenience mode allows the use of either face or voice or the combined score to authenticate. TrulySecure mode requires both face and voice to match individually.

More specifically, Convenience mode checks for one of face, voice, or the combined score to pass the current security level setting. It assumes a willingness by the user to present both biometrics if necessary to achieve authentication, though in most cases, they will only need to present one. For example, when face alone does not succeed, the user would then try saying the passphrase. In this mode the system is extremely robust to environmental conditions, such as relying on voice instead of face when the lighting is very low. TrulySecure mode, on the other hand, requires that both face and voice meet a minimum match requirement, and that the combined score passes the current security level setting.
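
In code, the two modes might look something like the sketch below; the fusion rule and thresholds are placeholders, since Sensory's actual scoring is proprietary:

# Placeholder fusion logic for the two modes described above. Scores are
# assumed to be in [0, 1]; the thresholds and the averaging rule are
# illustrative, not Sensory's actual algorithm.
SECURITY_LEVEL = 0.90   # bar the deciding score must clear
MIN_MATCH = 0.60        # per-biometric floor for TrulySecure mode

def combined(face, voice):
    """Fused score when both biometrics are present (placeholder rule)."""
    return 0.5 * face + 0.5 * voice

def convenience_mode(face, voice):
    """Pass if face, voice, or the combined score clears the bar."""
    return max(face, voice, combined(face, voice)) >= SECURITY_LEVEL

def trulysecure_mode(face, voice):
    """Both biometrics must match individually AND the fusion must pass."""
    return (face >= MIN_MATCH and voice >= MIN_MATCH
            and combined(face, voice) >= SECURITY_LEVEL)

# Dark room: the face score is poor but the voice match is strong.
print(convenience_mode(face=0.20, voice=0.95))  # True: voice alone passes
print(trulysecure_mode(face=0.20, voice=0.95))  # False: face below its floor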

TrulySecure utilizes adaptive enrollment to improve FRR with virtually no change in FAR. Sensory's adaptive enrollment technology can quickly enhance a user profile from the initial single enrollment and dramatically improve the detection rate, and it is able to do this seamlessly during normal use. Adaptive enrollment can produce a rapid reduction in the false reject rate. In testing, after just two adaptations, we have seen almost a 40% reduction in FRR. After six failed authentication attempts, we see more than a 60% reduction. This improvement in FRR comes with virtually no change in FAR. Additionally, adaptive enrollment alleviates the false rejects associated with users wearing sunglasses or hats, or trying to authenticate in low light, during rapid motion, at challenging angles, or with changing expressions and facial hair.
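
The template-set mechanics below are my own sketch of the adaptive enrollment concept, not Sensory's implementation; the point is that verified new samples expand the profile (lowering FRR) while the match threshold, and hence FAR, stays put:

import numpy as np

class AdaptiveProfile:
    """Sketch: grow a user's template set during normal use.

    Each verified authentication can contribute a new template (say, the
    user in sunglasses or in low light), so later attempts under similar
    conditions match and FRR falls, while the threshold is unchanged.
    """

    def __init__(self, enrollment, threshold=0.9):
        self.templates = [enrollment / np.linalg.norm(enrollment)]
        self.threshold = threshold

    def match(self, sample):
        """Accept if the sample is close to any stored template."""
        s = sample / np.linalg.norm(sample)
        return any(float(t @ s) >= self.threshold for t in self.templates)

    def adapt(self, sample):
        """Fold in a sample verified by other means (e.g., the other biometric)."""
        self.templates.append(sample / np.linalg.norm(sample))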

Guest post by Michael Farino
