Fasecure: Facial Anti-Spoofing
Face recognition in secure access and payments is vulnerable to spoofing via photos, videos, or 3D masks, posing major biometric security risks.
Services
CyberSecurity: Facial Anti-Spoofing
Tools
VS Code, MNV2, CDCN++, MAFM, Spoof Cue Map Generator, Feature Fusion Layer, and Final Decision Layer
Value
High performance and reliability
Timeline
6 weeks

This project presents a real-time, RGB-only facial anti-spoofing (FAS) system that uses deep learning to detect and prevent such attacks without requiring additional hardware. The system integrates a Central Difference Convolutional Network (CDCN++) to capture fine-grained texture cues, a Multiscale Attention Fusion Module (MAFM) to focus on facial regions prone to spoofing artifacts, a U-Net-based Spoof Cue Map Generator for pixel-wise anomaly detection, and temporal artifact analysis to catch motion inconsistencies in video-based attacks. Trained and validated on diverse data, including CelebA-Spoof and a custom dataset, the model achieved a training accuracy of 76.6% and a validation accuracy of 83.2%, with a False Acceptance Rate (FAR) of 21.69%, a False Rejection Rate (FRR) of 14.84%, and a Half Total Error Rate (HTER) of 18.27%. Optimized for deployment on edge devices such as smartphones and embedded systems, this lightweight architecture offers a scalable, efficient, and practical defense against modern facial spoofing techniques, strengthening the security and reliability of biometric authentication in real-world applications.
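The central difference convolution at the heart of CDCN++ augments a vanilla convolution with a term that responds to local intensity gradients rather than raw intensity, which is what makes fine spoof textures (print grain, screen moiré) stand out. A minimal single-channel sketch, written in NumPy purely for illustration (the function name, the 3x3 loop form, and the blending factor `theta` are assumptions, not the project's actual implementation):

```python
import numpy as np

def central_difference_conv2d(x, w, theta=0.7):
    """Sketch of a central difference convolution (CDC) on one channel.

    Blends a vanilla convolution with a central-difference term:
        y = vanilla_conv(x, w) - theta * x_center * sum(w)
    theta = 0 reduces to a plain convolution; larger theta weights
    texture gradients more heavily, as in CDC-style networks.
    """
    kh, kw = w.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    w_sum = w.sum()
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw]
            vanilla = (patch * w).sum()          # standard convolution response
            center = patch[kh // 2, kw // 2]     # central pixel of the patch
            out[i, j] = vanilla - theta * center * w_sum
    return out
```

On a perfectly flat (textureless) input the central-difference term cancels most of the vanilla response, so the output shrinks by a factor of (1 - theta); only regions with local texture variation produce a strong response.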

In summary, this lightweight, RGB-only FAS system delivers real-time spoof detection with competitive accuracy and balanced error rates across diverse datasets, and is optimized for deployment on edge devices, offering a practical, scalable defense for biometric authentication.
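The reported error metrics are internally consistent: HTER is defined as the arithmetic mean of FAR and FRR, so the 18.27% figure follows directly from the other two. A one-line check (function name is illustrative):

```python
def half_total_error_rate(far, frr):
    """HTER: the arithmetic mean of the False Acceptance Rate (FAR)
    and the False Rejection Rate (FRR), both given in percent."""
    return (far + frr) / 2

# Reported metrics from this project: FAR = 21.69%, FRR = 14.84%
hter = half_total_error_rate(21.69, 14.84)  # 18.265, reported rounded as 18.27%
```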


