Research on an adaptive driver drowsiness detection method based on multi-source facial information of drivers
Abstract
Driver drowsiness detection is an important application in the field of intelligent driving. In this paper, to address driver drowsiness detection based on facial expression feature recognition, an Adaptive Driver Detection Model (ADDM) is proposed. ADDM adopts a dual-driven approach, combining data and prior knowledge, to integrate multi-source facial information and capture the coordinated dynamics among different facial regions, thereby overcoming the high misjudgment rate that arises when only a single facial action unit or a small subset of them (e.g., the mouth or eye regions) is used. Additionally, ADDM combines class information (obtained via K-means clustering), temporal information, and attention information to address the poor generalization caused by individual differences among drivers. Finally, ADDM employs a Graph Convolutional Network (GCN) to model the relationships among facial regions, enhancing detection performance by allowing information exchange between nodes. Experiments on two public benchmark datasets demonstrate that the proposed ADDM outperforms state-of-the-art (SOTA) methods and achieves strong drowsiness detection performance.
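To make the GCN component of the abstract concrete, the following is a minimal, illustrative PyTorch sketch of passing per-region facial features through graph convolution layers and pooling them into an alert/drowsy prediction. It is not the authors' implementation: the class names (FacialRegionGCNLayer, DrowsinessGCN), the fully connected placeholder adjacency, the four-region toy input, and all dimensions are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FacialRegionGCNLayer(nn.Module):
    """One graph-convolution layer over facial-region nodes: H' = relu(A_hat @ H @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (batch, num_regions, in_dim); adj: (num_regions, num_regions)
        # Symmetrically normalize the adjacency with self-loops: A_hat = D^-1/2 (A + I) D^-1/2
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)
        deg_inv_sqrt = a_hat.sum(dim=-1).pow(-0.5)
        a_hat = deg_inv_sqrt.unsqueeze(-1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        # Propagate features along graph edges, then apply the learned linear map.
        return F.relu(self.weight(torch.einsum("ij,bjd->bid", a_hat, node_feats)))


class DrowsinessGCN(nn.Module):
    """Two GCN layers over facial-region features, mean-pooled into alert/drowsy logits."""

    def __init__(self, feat_dim: int = 64, hidden_dim: int = 32, num_regions: int = 4):
        super().__init__()
        self.gcn1 = FacialRegionGCNLayer(feat_dim, hidden_dim)
        self.gcn2 = FacialRegionGCNLayer(hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, 2)
        # Fully connected region graph as a stand-in for prior/learned connectivity (assumption).
        self.register_buffer("adj", torch.ones(num_regions, num_regions))

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        h = self.gcn1(region_feats, self.adj)
        h = self.gcn2(h, self.adj)
        return self.classifier(h.mean(dim=1))  # pool over regions -> (batch, 2) logits


if __name__ == "__main__":
    # Toy batch: 8 frames, 4 hypothetical facial regions (e.g., eyes, mouth, brows, head pose),
    # each represented by a 64-dim feature vector extracted upstream.
    feats = torch.randn(8, 4, 64)
    logits = DrowsinessGCN()(feats)
    print(logits.shape)  # torch.Size([8, 2])
```

In this sketch the adjacency matrix is a fixed, fully connected placeholder; in a system like ADDM it would instead encode the prior knowledge and learned relationships among facial regions described in the abstract.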