Apple erred on facial recognition
September 15, 2017
On the same day that Apple rolled out the iPhone X on the coolest stage of the coolest corporate campus in the world, Sensory gave a demo of an interactive talking and listening avatar that uses a biometric ID to know who’s talking to it. In Trump metrics, the event I attended had a few more attendees than Apple’s.
Interestingly, Sensory’s face ID worked flawlessly, and Apple’s failed. Sensory used a traditional 2D camera paired with convolutional neural networks and deep-learning anti-spoofing models. Apple used a 3D camera.
There are many theories about what happened with Face ID at Apple. Let’s discuss what failure even means, and the effects of 2D versus 3D cameras. There are basically three classes of failure: accuracy, spoofability, and user experience. It’s important to understand the differences between them.
It’s easy to reach a one-in-a-million or one-in-a-billion false accept (FA) rate by making the system falsely reject (FR) all of the time. For example, a rock will never respond to the wrong person… it also won’t respond to the right person! This is where Apple failed. They might have had an amazing false accept rate, but they hit two false rejects on stage!
I believe that there is too much emphasis placed on FA. The presumption is random users trying to break in, and for that, 1 in 50,000 seems fine. The real break-in issue is spoofability, which needs to be thought of in a different way – it’s not a random face trying to get in, it’s a fake version of your face.
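The FA/FR tradeoff above can be sketched with a toy threshold sweep. The match scores below are hypothetical, not any vendor’s real numbers; the point is simply that raising the decision threshold drives FA down and FR up.

```python
# Toy sketch of the FA/FR tradeoff (hypothetical match scores in [0, 1]).
genuine_scores = [0.91, 0.85, 0.78, 0.95, 0.88, 0.70]   # the right user
impostor_scores = [0.10, 0.35, 0.55, 0.22, 0.48, 0.60]  # the wrong user

def rates(threshold):
    """Fraction of impostors accepted (FA) and genuine users rejected (FR)."""
    fa = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    fr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return fa, fr

for t in (0.5, 0.65, 0.8):
    fa, fr = rates(t)
    print(f"threshold={t}: FA rate={fa:.2f}, FR rate={fr:.2f}")
```

A strict threshold (0.8) gives a perfect FA rate on this toy data but rejects a third of genuine attempts – the rock end of the spectrum.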
Every biometric that gets introduced gets spoofed. Gummy bears, cameras, glue, and tape have all been used to spoof fingerprints. Photos, masks, and videos have been used to spoof faces.
To prevent this, Sensory built anti-spoof models that lower the probability of a successful spoof. 3D cameras also make spoofing harder, and Apple moved in the right direction here. But the real solution is to layer biometrics, adding layers when more security is needed.
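A minimal sketch of what layering could look like – the scores, thresholds, and “security levels” here are made up for illustration: a low-security action accepts the face layer alone, while a high-security action requires a second (here, voice) layer to also pass.

```python
# Hypothetical sketch of layered biometrics (thresholds are made up).
def authorize(face_score, voice_score, security_level):
    """'low' security needs face only; 'high' needs face AND voice."""
    face_ok = face_score >= 0.80
    if security_level == "low":
        return face_ok
    voice_ok = voice_score >= 0.85
    return face_ok and voice_ok  # both layers must pass for high security

print(authorize(0.90, 0.50, "low"))   # True: face alone unlocks the phone
print(authorize(0.90, 0.50, "high"))  # False: a payment needs the voice layer too
```

An attacker now has to spoof two independent biometrics at once, while the everyday unlock stays frictionless.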
Apple misfires on UX?
Apple tuned the FA rate on Face ID so low that it hurt the consumer experience by rejecting too often, which is what we saw on stage. But there’s more to the tradeoffs than that.
The easiest way to prevent spoofing is to get the user to do unnatural things, live and randomly. Blinking was a less intrusive version that Google and others have tried, but a photo with the eyes cut out could spoof it.
Having people turn their face, widen their nostrils, or look in varying directions might help prevent spoofing, but it also hurts the user experience. The trick is to get more intrusive only when the security needs demand it. Training the device is also part of the user experience.
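One way to sketch that escalation – the challenge names and levels are hypothetical: keep the common case passive, and only draw a random active challenge when the security level demands it.

```python
import random

# Hypothetical sketch: escalate liveness challenges with the security need,
# keeping the everyday unlock passive and frictionless.
CHALLENGES = {
    1: [],                                   # passive anti-spoof models only
    2: ["blink"],                            # light liveness check
    3: ["blink", "turn_left", "turn_right"], # random active challenge
}

def pick_challenge(security_level, rng=random):
    """Return a randomly chosen challenge, or None if none is required."""
    options = CHALLENGES[security_level]
    return rng.choice(options) if options else None
```

Randomizing the challenge matters: a fixed challenge (like blinking) can be defeated by a prepared spoof, such as the eyes-cut-out photo mentioned above.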