For many years Sensory has proudly provided on-device facial verification (also known as facial authentication) solutions for banking institutions and well-known consumer-electronics companies such as Fujitsu and HMD Global, the maker of Nokia-branded phones. With the release of Nokia’s first 5G phone, dubbed “the only gadget you’ll ever need” and featured in the latest James Bond film, we thought it would be a good time to offer some clarity on the broad subject of facial recognition.
Facial recognition technology has existed in theory since the 1960s, when innovators Woody Bledsoe, Helen Chan Wolf, and Charles Bisson pioneered mathematical models for identifying faces, laying a foundation for artificial intelligence and machine learning that would endure for decades. Due in part to the rise of social media, coupled with the use of facial recognition in law enforcement, concerns about the ethics and pitfalls of the technology have received considerable media coverage. However, most of that coverage stops short of distinguishing between critically different facial recognition applications. The controversial model, better described as facial identification, or one-to-many matching, seeks to identify a person by comparing a facial image against a database of images linked to identifiable information. Embedded, one-to-one facial verification instead verifies a user by matching an image against an enrolled biometric profile, stored as highly encrypted, irreversible code on the device itself. Why is this difference critical?
The cloud-based, one-to-many paradigm is fraught with issues that have led to mistrust and calls to abolish facial recognition technology altogether. This model attempts to identify a person by matching their image to an image associated with their identity, and it requires access to identifying information, usually stored in cloud-based databases. Project Greenlight, a camera-based crime-deterrent program launched in Detroit, is one such example: residents were surprised to learn that the cameras were fitted with software that could suggest their identity when they were captured on video, drawing on driver’s licenses and mugshots stored in the Michigan police database. Likewise, social media subscribers who use auto-tagging features to label their friends often don’t realize that they are feeding facial identification algorithms that may be used for surveillance, as revealed in a 2016 ACLU report. In these cases, egregious privacy violations and instances of mistaken identity are enabled by several factors beyond the documented vulnerabilities of databases holding users’ information: algorithmic bias caused by a lack of training data; the sheer variance in the conditions under which photographs are taken; and the fact that some people are neither aware of, nor have consented to, sharing an image of their face and identity. The potential for harm with the one-to-many model has encouraged cities like San Francisco to go as far as banning the use of facial recognition for identification by law enforcement. Paradoxically, this form of facial recognition has also bred skepticism around facial verification for authentication, a technology that helps make our devices, and the information stored on them, more secure.
TrulySecure, Sensory’s embedded facial verification AI, validates a person by matching a face against a locally stored, encrypted enrollment template, either accepting or rejecting the identity claim (one-to-one matching). This model is most commonly used to grant access to a device, as seen in mobile phones, tablets, and PCs. By design, this system is inherently different and eliminates many of the problems of the cloud-based, one-to-many facial identification model. First, and critically, the user opts in: the enrollment process ensures not only that the user is aware of the purpose of the image capture, but also that the user enrolls under optimal conditions. Because the verification system precisely models the unique biometric characteristics of that individual user, it achieves high accuracy and is less susceptible to training-data biases. Furthermore, embedded facial authentication never stores users’ information in the cloud, because the verification takes place entirely on the user’s device.
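To make the one-to-one idea concrete, here is a minimal, purely illustrative sketch. TrulySecure’s internals are not public, so the function names, the embedding vectors, and the threshold below are assumptions for illustration only: the probe face and the enrolled template are represented as feature vectors, and verification simply accepts or rejects a single identity claim based on their similarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe_embedding, enrolled_embedding, threshold=0.8):
    """One-to-one matching: accept or reject one identity claim.

    The enrolled embedding stands in for the encrypted, irreversible
    template created on-device at enrollment; no image leaves the
    device and no cloud database is consulted. (Hypothetical sketch,
    not the real SDK API.)
    """
    return cosine_similarity(probe_embedding, enrolled_embedding) >= threshold

# A probe close to the enrolled template passes; a dissimilar one fails.
enrolled = [0.1, 0.9, 0.4]
print(verify([0.12, 0.88, 0.41], enrolled))  # similar face  -> True
print(verify([0.9, -0.2, 0.1], enrolled))    # different face -> False
```

The key property is that the decision is binary and local: the system answers “is this the enrolled user?”, never “who is this person?”.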
Why does this distinction between cloud-based facial identification and embedded facial verification matter? It is a matter of understanding that AI can safeguard our data and secure our devices at a time when the need to do so is ever increasing. At Sensory, we’re proud to provide embedded facial verification technology that reliably and securely authenticates users on millions of devices without ever sending biometric data to the cloud. We are committed to developing AI that improves the user experience, but never at the expense of privacy.
In some implementations the SDK is embedded but the use case is “one-to-few”: a tablet or TV, for example, may have multiple users enrolled. Matching in that case is simply a series of one-to-one checks, one per enrolled user.
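The one-to-few case can be sketched the same way. Again, this is a hypothetical illustration, not the SDK’s actual API: each enrolled user has a local template, and identification is just a loop of one-to-one comparisons, returning the best match that clears the threshold or nothing at all.

```python
import math

def _cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def identify_one_to_few(probe, enrolled_profiles, threshold=0.8):
    """'One-to-few': a series of one-to-one checks against the handful
    of templates enrolled on this device.

    Returns the name of the best-matching enrolled profile that passes
    the threshold, or None. Unlike one-to-many identification, there is
    no external database of people who never enrolled. (Illustrative
    names and threshold.)
    """
    best_name, best_score = None, threshold
    for name, template in enrolled_profiles.items():
        score = _cosine(probe, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

profiles = {
    "alice": [0.1, 0.9, 0.4],
    "bob": [0.8, 0.1, 0.5],
}
print(identify_one_to_few([0.11, 0.89, 0.42], profiles))  # matches "alice"
print(identify_one_to_few([0.0, -1.0, 0.0], profiles))    # no match -> None
```

Because every candidate opted in at enrollment, this remains verification in spirit: a few consensual one-to-one checks, not a search over strangers.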