Abstract
Unimodal biometric systems are commonplace nowadays; however, there remains room for performance improvement. Multimodal biometrics, i.e., the combination of more than one biometric modality, is one of the promising remedies, yet it faces various deployment limitations, e.g., modality availability, template management, and deployment cost. In this paper, we propose a new notion, dubbed the Conditional Biometrics representation, for flexible biometrics deployment, whereby one biometric modality is used to condition another for representation learning. We demonstrate the proposed conditioned representation learning on face and periocular biometrics via a deep network dubbed the Conditional Biometrics Network. The proposed Conditional Biometrics Network serves as a representation extractor for unimodal, multimodal, and cross-modal matching during deployment. Experimental results on five in-the-wild periocular-face datasets demonstrate that the network outperforms the respective baselines for identification and verification tasks in all deployment scenarios.
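To make the notion of conditioning one modality on another concrete, below is a minimal, purely illustrative sketch in PyTorch. It assumes a FiLM-style feature modulation, 512-dimensional features, and the class name `ConditionedEmbedding`; none of these choices are taken from the paper, whose actual Conditional Biometrics Network architecture is defined in the main text.

```python
# Illustrative sketch only: one plausible way a conditioning branch could
# modulate another modality's features (FiLM-style scale and shift).
# Layer sizes, backbone choice, and modulation scheme are assumptions,
# not the paper's Conditional Biometrics Network.
import torch
import torch.nn as nn


class ConditionedEmbedding(nn.Module):
    """Maps a primary-modality feature to an embedding conditioned on a
    second modality's feature via learned per-channel scale and shift."""

    def __init__(self, feat_dim: int = 512, cond_dim: int = 512, emb_dim: int = 512):
        super().__init__()
        # Produce scale (gamma) and shift (beta) from the conditioning
        # modality (e.g., periocular features).
        self.film = nn.Linear(cond_dim, 2 * feat_dim)
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, emb_dim),
            nn.BatchNorm1d(emb_dim),
        )

    def forward(self, primary_feat: torch.Tensor, cond_feat: torch.Tensor) -> torch.Tensor:
        gamma, beta = self.film(cond_feat).chunk(2, dim=-1)
        modulated = gamma * primary_feat + beta  # condition one modality on the other
        # Unit-norm embedding so matching can use cosine similarity.
        return nn.functional.normalize(self.proj(modulated), dim=-1)


# Toy usage: face features conditioned on periocular features (dimensions assumed).
face_feat = torch.randn(4, 512)        # e.g., from a face backbone
periocular_feat = torch.randn(4, 512)  # e.g., from a periocular backbone
embedder = ConditionedEmbedding()
embedding = embedder(face_feat, periocular_feat)
print(embedding.shape)  # torch.Size([4, 512])
```

In this hypothetical setup, the same extractor could serve unimodal matching (by feeding a neutral conditioning vector), multimodal matching (both features available), or cross-modal matching (comparing embeddings derived from different modalities), mirroring the deployment scenarios described above.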