Abstract
This paper describes an approach for monitoring and predicting driver fatigue. Remotely located charge-coupled-device (CCD) cameras equipped with active infrared illuminators acquire video images of the driver. Various visual cues that typically characterize a person's level of alertness are extracted in real time and systematically combined to infer the driver's fatigue level. The visual cues employed characterize eyelid movement, gaze movement, head movement, and facial expression. The eyes are among the most salient features of the human face and play a critical role in understanding a person's desires, needs, and emotional states. Robust eye detection and tracking is therefore essential not only for human-computer interaction, but also for attentive user interfaces such as driver-assistance systems, since the eyes convey much information about the driver's condition: gaze direction, attention level, and fatigue. Furthermore, owing to their unique physical properties (shape, size, and reflectivity), the eyes provide very useful cues for more complex tasks, such as face detection and face recognition. A probabilistic model is developed to represent human fatigue and to predict fatigue from the visual cues obtained. The simultaneous use of multiple visual cues and their systematic combination yields a much more robust and accurate fatigue characterization than any single visual cue alone. The system was validated under real-life fatigue conditions with human subjects of different ethnic backgrounds, genders, and ages; with and without glasses; and under different illumination conditions. It was found to be reasonably robust, reliable, and accurate in characterizing fatigue.
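To make the idea of systematically combining multiple visual cues concrete, the sketch below illustrates one simple way such fusion could work: a naive-Bayes-style combination of binary cue observations into a posterior fatigue probability. This is only an illustrative sketch, not the probabilistic model developed in the paper; the cue names, likelihood values, and prior below are hypothetical.

# Illustrative naive-Bayes-style fusion of visual cues into a fatigue
# estimate. All numbers and cue names are hypothetical placeholders.

def fatigue_posterior(cues, likelihoods, prior=0.1):
    """Combine binary cue observations into P(fatigue | cues).

    cues: dict mapping cue name -> bool (cue observed in this frame window)
    likelihoods: dict mapping cue name -> (P(cue | fatigued), P(cue | alert))
    prior: P(fatigue) before observing any cues (hypothetical value)
    """
    p_fatigued, p_alert = prior, 1.0 - prior
    for name, observed in cues.items():
        p_given_f, p_given_a = likelihoods[name]
        if observed:
            p_fatigued *= p_given_f
            p_alert *= p_given_a
        else:
            p_fatigued *= 1.0 - p_given_f
            p_alert *= 1.0 - p_given_a
    # Normalize the two unnormalized posteriors.
    return p_fatigued / (p_fatigued + p_alert)

# Hypothetical likelihoods for the cue categories named in the abstract.
likelihoods = {
    "slow_eyelid_closure": (0.8, 0.1),  # eyelid movement
    "fixed_gaze":          (0.6, 0.2),  # gaze movement
    "head_nodding":        (0.7, 0.1),  # head movement
    "yawning":             (0.5, 0.1),  # facial expression
}
cues = {
    "slow_eyelid_closure": True,
    "fixed_gaze": True,
    "head_nodding": False,
    "yawning": True,
}
print(f"P(fatigue | cues) = {fatigue_posterior(cues, likelihoods):.3f}")

Note how observing several weak cues together drives the posterior far above the prior, which reflects the abstract's claim that combining multiple cues is more robust and accurate than relying on any single one.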