Lip-Reading User Authentication System LipPass Helps Secure Phones and Wallets
To prevent leaks of users' private data, more and more mobile devices employ biometric authentication, such as fingerprint, face recognition, and voiceprint verification, to enhance privacy protection. However, these approaches are vulnerable to replay attacks. Although state-of-the-art solutions use liveness verification to combat such attacks, existing approaches remain sensitive to ambient conditions, such as lighting and surrounding audible noise.
LipPass, a new user verification system for phones, relies on the unique way a person moves his or her lips.
How Does it Work?
Each person has individual speaking habits, such as lip protrusion and closure and tongue stretch and constriction, and these movements create a unique Doppler effect profile that the phone can detect. The platform uses a deep learning algorithm to extract distinctive features from the user's Doppler profile as he or she speaks. A binary tree-based classifier then distinguishes the new user's profile from those of previously registered users, which also helps discriminate legitimate users from spoofers.
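The sensing idea above can be illustrated with a minimal sketch: the phone's speaker emits an inaudible tone, lip motion Doppler-shifts the echo, and the shift shows up as a displaced peak in the spectrum. All parameter values (the ~20 kHz carrier, the 0.05 m/s lip speed) and function names here are illustrative assumptions, not details from the LipPass paper.

```python
import numpy as np

FS = 48_000   # sample rate (Hz), typical for phone audio hardware (assumed)
F0 = 20_000   # hypothetical near-ultrasonic carrier emitted by the speaker
N = 48_000    # one second of samples

def doppler_shift(f0, v, c=343.0):
    # Frequency shift of a tone reflected off a surface moving at speed v
    # (m/s) toward the microphone; the round trip doubles the shift.
    return 2 * v * f0 / c

def dominant_frequency(signal, fs):
    # Locate the strongest spectral peak via an FFT of the recorded echo.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return freqs[np.argmax(spectrum)]

t = np.arange(N) / FS
# Simulated echo off lips moving toward the phone at an assumed 0.05 m/s:
f_echo = F0 + doppler_shift(F0, 0.05)
echo = np.sin(2 * np.pi * f_echo * t)

# The measured shift (a few Hz) is the kind of raw feature a learning
# model could consume; a real system would track it over time per phoneme.
shift = dominant_frequency(echo, FS) - F0
```

In a real deployment these per-frame shifts would form a time series fed to the deep learning feature extractor, with the binary tree-based classifier operating on the learned features.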
In a controlled environment, LipPass achieved an authentication accuracy of 95.3%, comparable to the two other platforms examined in the study: WeChat's voiceprint recognition at 96.1% and Alipay's face recognition at 97.2%. However, LipPass's accuracy stayed relatively stable across varied environments, while WeChat's dropped as low as 21.3% in noisy environments and Alipay's fell to 20.4% in dark environments.
“To resist an attack, existing solutions either employ specialized infrastructure, such as Apple FaceID, or require users to involve extra operations, such as eye blinking, which introduces additional cost and effort and further reduces user experience,”
says Jiadi Yu, who created the product. Yu also plans to extend lip reading-based user authentication to smart speakers, such as Amazon Echo and Google Home, which serve as the core commanders of smart homes.