AI-Generated Face Detection Using Multi-Feature Fusion
Abstract
Deep learning-based image generation techniques have made synthetic face images nearly indistinguishable from real ones. The misuse of these synthetic images can lead to serious problems, including the spread of misinformation, fraud, and identity theft. Accurate detection of the authenticity of face images is therefore crucial. Most existing studies rely on a single feature extraction technique, which makes it difficult to detect the increasingly diverse and realistic synthetic face images. To address this challenge, we introduce a novel parallel architecture that fuses multiple feature inputs, combining four feature extraction techniques: raw images, the Gray Level Co-occurrence Matrix (GLCM), Edge features, and the Local Binary Pattern (LBP). The raw image provides global features, GLCM captures texture statistics, Edge features identify image structure, and LBP extracts local texture features. By combining these complementary features, the parallel architecture analyzes images from multiple dimensions, effectively identifying subtle differences between synthetic and real faces. Furthermore, we compiled a Multidimensional Facial Image Dataset (MFID) containing approximately 92,000 images, accounting for factors such as the type of generation model, real face quality, and face angle and age, to ensure the dataset's generalization ability. Experimental results demonstrate that our architecture achieves an accuracy of 99.17% on MFID and significantly outperforms existing methods.
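To make the four feature inputs concrete, the sketch below computes minimal NumPy-only versions of each (GLCM, a Sobel edge map, and 8-neighbor LBP codes) from a single grayscale image. The function names, parameter choices (e.g. 8 gray levels for the GLCM, a single (0, 1) offset), and the NumPy implementations are our illustrative assumptions, not the paper's actual pipeline, which would typically use library implementations and feed all four maps into the parallel network branches.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    # Quantize to `levels` gray bins, then count co-occurrences of
    # bin pairs at pixel offset (dy, dx); normalize to a probability matrix.
    q = (img.astype(np.float64) / 256 * levels).astype(int)
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def lbp(img):
    # 8-neighbor local binary pattern: each interior pixel gets a byte whose
    # bits record whether each neighbor is >= the center pixel.
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    return code

def sobel_edges(img):
    # Gradient magnitude from 3x3 Sobel kernels -- a simple edge/structure map.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    f = img.astype(float)
    gx, gy = np.zeros_like(f), np.zeros_like(f)
    for i in range(3):
        for j in range(3):
            win = f[i:f.shape[0] - 2 + i, j:f.shape[1] - 2 + j]
            gx[1:-1, 1:-1] += kx[i, j] * win
            gy[1:-1, 1:-1] += ky[i, j] * win
    return np.hypot(gx, gy)

# Stand-in for a grayscale face crop (a real pipeline would load an image).
rng = np.random.default_rng(0)
face = rng.integers(0, 256, (64, 64), dtype=np.uint8)

features = {
    "raw": face,            # global appearance
    "glcm": glcm(face),     # texture statistics
    "edges": sobel_edges(face),  # image structure
    "lbp": lbp(face),       # local texture codes
}
for name, arr in features.items():
    print(name, arr.shape)
```

Each map would then be fed to its own branch of the parallel architecture; the complementary views are what let the fused model pick up artifacts that a single representation misses.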