There's been a lot of fuss in the identity world recently about so-called "liveness detection". In fact, there's even confusion over the correct terminology; the National Institute of Standards and Technology (NIST) categorises liveness detection as a subset of Presentation Attack Detection (PAD) and defines it as follows:
"The measurement and analysis of anatomical characteristics or involuntary or voluntary reactions, in order to determine if a biometric sample is being captured from a living subject present at the point of capture."
Applied to the use case of remote onboarding, it addresses a simple question: is the selfie being captured from a real, live person present at the camera, or is the user holding up a printed picture, a mask, or some other "spoofing" artefact to try and fool the system?
Early PAD "Active Liveness" technologies focused on instructing users to carry out voluntary reactions to prompts, such as tilting their head a certain way or following a randomly moving object with their eyes. These methods, while effective to a point, can be outsmarted with a little effort and ingenuity.
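The challenge-response pattern described above can be sketched in a few lines. Everything here is illustrative: the prompt names, yaw values, and tolerance are assumptions, and a real system would derive the measured head pose from a face-tracking model rather than take it as an input.

```python
import random

# Hypothetical active-liveness challenge: prompt the user to turn their
# head, then check the measured head yaw (in degrees) against the prompt.
CHALLENGES = {"turn_left": -30.0, "turn_right": 30.0}

def issue_challenge() -> str:
    """Pick a random prompt so a pre-recorded video can't anticipate it."""
    return random.choice(list(CHALLENGES))

def check_active_liveness(challenge: str, measured_yaw: float,
                          tolerance: float = 15.0) -> bool:
    """Return True if the measured head yaw matches the prompted turn."""
    expected = CHALLENGES[challenge]
    return abs(measured_yaw - expected) <= tolerance
```

The randomised prompt is the whole point of the design: it forces the response to be produced live. As the article notes, though, a determined attacker can still defeat this with a sufficiently responsive replay or rendering setup.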
More recently, technology vendors have come up with more advanced methods of detecting presentation attacks. These new methods fall under the category of "Passive Liveness", as they do not require the user to carry out any actions in order to allow the algorithm to calculate their liveness score. This is where things start to get interesting.
While Active Liveness technology is easy to understand, Passive Liveness is more of a mystery because it's difficult to explain what's actually happening in the background. Passive Liveness algorithms are neural networks trained using machine learning techniques on very large datasets containing many variations of spoof vectors (images of presentation attacks), such as printed masks, 3D masks, and images displayed on mobile and PC screens. The training dataset also contains genuine selfie images, and each image is labelled "genuine" or "fake". The neural network then runs through multiple rounds of training, tweaking, and tuning until it is able to detect presentation attacks with a high degree of accuracy using just one frame as a reference image.
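At inference time, that training boils down to a single-frame binary classification. The sketch below shows the shape of that decision step only; the trained network itself is hypothetical, so a stub callable stands in for the model, and the threshold value is an assumption.

```python
# Minimal sketch of single-frame passive liveness scoring. Any callable
# returning P(live) in [0, 1] for a frame can stand in for the trained
# neural network described in the text.
from dataclasses import dataclass

@dataclass
class PassiveLivenessResult:
    score: float    # model's estimated probability of a live subject
    is_live: bool   # score compared against the decision threshold

def score_frame(frame, model, threshold: float = 0.5) -> PassiveLivenessResult:
    """Run one selfie frame through the classifier and apply a threshold."""
    p_live = model(frame)
    return PassiveLivenessResult(score=p_live, is_live=p_live >= threshold)

# Stub model for illustration: a genuine selfie scores high, a spoof low.
demo_model = lambda frame: 0.97 if frame == "genuine_selfie" else 0.08
```

Note that the user never sees any of this: no prompt, no head-tilt, just one frame in and one score out, which is what makes the approach "passive".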
Research has shown a significantly increased onboarding completion rate for organisations using passive liveness approaches:
- Active Liveness: 63% of customers successfully completed this step, taking an average of 13 seconds
- Passive Liveness: 99.9% of customers successfully completed this step, taking an average of 1 second
Another strong argument supporting Passive Liveness is the fact that there is an ISO standard, ISO/IEC 30107-3, which sets out principles and methods for performance assessment of presentation attack detection mechanisms. A testing lab named iBeta, based in Denver, Colorado, was the first to carry out testing according to this ISO standard, at two levels: iBeta Level 1 PAD and iBeta Level 2 PAD.
Biometric benchmarks are useful up to a point. However, I always recommend that organisations test biometric technology on their own data against two main criteria: speed and accuracy. If it's very fast but not accurate, you can't use it. If it's very accurate but not fast, you shouldn't use it either.
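A test on your own data along those two criteria can be as simple as the harness below. It is a sketch under stated assumptions: `model` is any callable returning P(live) for a frame, `samples` is your own labelled set of (frame, label) pairs, and the 0.5 threshold is illustrative.

```python
import time

def benchmark(model, samples):
    """Measure the two criteria from the text on your own labelled data.

    `samples` is a list of (frame, label) pairs, where label is True
    for a genuine live capture and False for a presentation attack.
    Returns overall accuracy and average per-frame latency in seconds.
    """
    correct = 0
    start = time.perf_counter()
    for frame, label in samples:
        predicted_live = model(frame) >= 0.5  # illustrative threshold
        if predicted_live == label:
            correct += 1
    elapsed = time.perf_counter() - start
    return {
        "accuracy": correct / len(samples),
        "avg_latency_s": elapsed / len(samples),
    }
```

Reading both numbers together is the point: a vendor score that is excellent on one axis and unusable on the other fails the test either way.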