
Some thoughts on Mobile Face Recognition (part 3 - Template Protection)

Continuing with another security threat in biometric systems, one of the main concerns, from both the user and the service provider side, is what happens if someone steals the biometric templates. An attacker might directly access the system database and obtain the users' biometric templates. A recent example is the breach of the US Office of Personnel Management, disclosed in 2015, in which 5.6 million fingerprints were stolen. With the stolen templates, an attacker could gain improper access to the compromised system or to other systems, and even track users across different systems. This is a serious threat to users' privacy and to the security of the system. It also raises a further question: can the stolen biometric traits ever be invalidated? Unlike a password, a face or a fingerprint cannot simply be replaced.

This threat motivates the need for protected biometric templates. Industry and the scientific community are now investing significant effort in researching, standardising and extending the use of protection mechanisms, given the well-known problems of storing unprotected biometric templates. As defined in the standard ISO/IEC 24745 for biometric information protection, protected templates must satisfy the following requirements:

  1. Irreversibility: property of a transform that creates a biometric reference from biometric samples or features such that knowledge of the transformed biometric reference cannot be used to determine any information about the original biometric samples or features.
  2. Renewability: property of a transform or process to create multiple, independent transformed biometric references derived from one or more biometric samples obtained from the same data subject and which can be used to recognize the individual while not revealing information about the original reference.
  3. Revocability: ability to prevent future successful verification of a specific biometric reference and the corresponding identity reference.
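To make the three properties above concrete, here is a toy, BioHashing-style sketch of a cancelable template transform. The function names (`protect`, `similarity`) and the `enrolment_secret` parameter are illustrative assumptions, not part of the ISO/IEC 24745 standard, and sign quantisation alone does not guarantee strong irreversibility in practice; this only shows the shape of the idea.

```python
import random

def protect(features, enrolment_secret):
    # Toy cancelable-template transform (illustrative sketch only):
    # project the real-valued feature vector through a random matrix
    # seeded by a per-enrolment secret, then keep only the sign of
    # each projection. Renewability: a new secret yields an
    # independent template. Revocability: discard the secret and the
    # old template stops being usable for enrolment.
    rng = random.Random(enrolment_secret)
    template = []
    for _ in range(len(features)):
        row = [rng.gauss(0.0, 1.0) for _ in features]
        projection = sum(r * f for r, f in zip(row, features))
        template.append(1 if projection >= 0 else 0)
    return template

def similarity(t1, t2):
    # Fraction of agreeing bits between two binary templates.
    return sum(a == b for a, b in zip(t1, t2)) / len(t1)
```

Enrolling the same features under the same secret reproduces the template exactly, while re-enrolling under a fresh secret produces a new, unrelated one, which is what renewability asks for.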

The use of template protection schemes is not as widespread in mobile face recognition systems as it is in other biometrics (e.g. fingerprint), so we believe it is one of the key areas to develop in the near future in order to achieve the desired levels of privacy and security. Open problems include properly characterising the output signals of the different face recognition algorithms and extracting the amount of entropy that template protection schemes require to achieve good performance in terms of recognition rates and response time while, at the same time, complying with the privacy requirements.
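The entropy question can be illustrated with a naive estimator: the average per-bit Shannon entropy over a set of binary templates. The function name `bitwise_entropy` is ours, and a marginal per-bit estimate like this ignores correlations between bits, so it is only an upper-bound sanity check, not a rigorous entropy measurement.

```python
import math

def bitwise_entropy(templates):
    # Average per-bit Shannon entropy (in bits) across a set of
    # equal-length binary templates. A value near 1.0 means each bit
    # is close to uniform; note this marginal estimate ignores
    # inter-bit correlations, so real entropy may be lower.
    n_bits = len(templates[0])
    total = 0.0
    for i in range(n_bits):
        p = sum(t[i] for t in templates) / len(templates)
        for q in (p, 1.0 - p):
            if q > 0.0:
                total -= q * math.log2(q)
    return total / n_bits
```

A set of templates whose bits are uniformly distributed scores 1.0 bit per position, while constant templates score 0.0, which is the sort of gap this check is meant to expose.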

Some thoughts on Mobile Face Recognition (part 2 - Anti-Spoofing)

Some biometric traits can be easily captured by an attacker. This is the case for faces, since almost everyone has photos publicly available on social networks such as LinkedIn or Facebook. This problem motivates the recent efforts in liveness detection for a secure use of face biometrics. Anti-spoofing methods range from simple ones, such as those based on blink detection, to more complex algorithms that analyse the texture or the lighting of the scene.
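The blink-detection idea can be sketched with the widely used eye-aspect-ratio heuristic, assuming an upstream landmark detector supplies six points per eye (as in the common 68-point face-landmark scheme). The function names, the 0.2 threshold and the frame counts below are illustrative choices, not values from any particular system.

```python
import math

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmarks around one eye, ordered p1..p6 as in
    # the common 68-point scheme. The ratio of vertical to horizontal
    # eye openings drops sharply when the eye closes.
    p1, p2, p3, p4, p5, p6 = eye
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def detect_blink(ear_series, threshold=0.2, min_frames=2):
    # A blink = the eye-aspect ratio stays below the threshold for at
    # least min_frames consecutive frames and then recovers (the eye
    # must reopen). A static photo gives a flat series and never
    # triggers this.
    run = 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        elif run >= min_frames:
            return True
        else:
            run = 0
    return False
```

A printed photo held in front of the camera yields a near-constant ratio, whereas a live face produces a brief dip and recovery, which is exactly the pattern `detect_blink` looks for.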

As shown in several publications, these machine learning-based anti-spoofing methods tend to be strongly dependent on the dataset used for training the model. The robustness of the liveness analysis thus depends on the training dataset (genuine accesses and attacks) and on the technology used for face presentation and acquisition, which raises two concerns. Can their behaviour be predicted in the presence of a new attack that was not represented in the training set? Can a single anti-spoofing method be enough to guarantee the security of the system?

Given the cross-dataset analyses in recent publications and tests in real scenarios, it does not seem wise to entrust the security of the system to a single anti-spoofing method. This is why we believe that a single non-collaborative liveness detection method is not enough to guarantee the security of the system in real scenarios, now or in the future, since the robustness of such methods (video quality measures, light reflectance analysis, etc.) depends on the presentation technology used by the attacker.

Alternatively, a more robust way to counter presentation attacks would be to combine several methods working together, pairing automatic analysis tools with user interaction. If the system can provoke a reaction in the user and then analyse that reaction, fake attempts using photos or videos of genuine users can be detected and rejected. Unfortunately, interaction takes time and can reduce usability, so the challenge is to strike a proper balance between security and convenience. The less perceptible the interaction, the more usable and the harder to spoof the system will be. Current methods rely on asking the user to perform some explicit action, but we think the future points to analysing unconscious action-reaction interactions in order to increase both security and usability.
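The challenge-response idea can be sketched as a minimal protocol: the system issues a randomly chosen challenge and checks whether the requested reaction appears in what the analysis pipeline observed. The challenge list, function names and the assumption of an upstream pipeline that reports detected actions are all hypothetical here.

```python
import random

# Hypothetical set of reactions the system can request and that an
# upstream face-analysis pipeline is assumed able to recognise.
CHALLENGES = ["blink", "turn_left", "turn_right", "smile"]

def issue_challenge(rng):
    # Pick an unpredictable challenge per attempt: a pre-recorded
    # video of the genuine user is unlikely to contain the requested
    # reaction within the response window.
    return rng.choice(CHALLENGES)

def verify_response(challenge, observed_actions):
    # observed_actions: the actions the (hypothetical) analysis
    # pipeline detected during the response window.
    return challenge in observed_actions
```

Because the challenge changes on every attempt, a replay attack would have to anticipate it, which is the security gain that interaction buys at the cost of the usability trade-off discussed above.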