Case Study
Monday, June 22
02:55 PM - 03:25 PM
Live in Berlin
Conditionally automated vehicles require the driver to remain constantly available to resume control when necessary. To support this, in-cabin systems must accurately monitor and interpret the driver's readiness. However, current driver monitoring systems (DMS) face limitations, most notably a reliance on eye-gaze tracking or steering-wheel interaction, which may not reliably reflect a driver's situational awareness. Addressing these limitations requires a comprehensive approach that combines multi-modal data fusion with deep learning to evaluate both the in-cabin context and the external traffic scene simultaneously. By integrating diverse vision inputs and incorporating insights from human factors research, we aim to redefine DMS capabilities and enable a smoother, safer transition between automated and manual driving.
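The abstract does not specify an architecture, but the general idea of fusing an in-cabin view with the forward traffic scene to estimate driver readiness can be illustrated with a minimal sketch. The code below is a hypothetical example in PyTorch, assuming two ResNet-18 vision encoders with simple late fusion; the class name, readiness levels, and input sizes are illustrative assumptions, not the presenters' actual system.

```python
# Hypothetical sketch of multi-modal fusion for driver-readiness estimation.
# Architecture and names are illustrative assumptions only.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ReadinessFusionNet(nn.Module):
    def __init__(self, num_readiness_levels: int = 3):
        super().__init__()
        # Separate vision encoders for the two views (in-cabin, forward traffic scene).
        self.cabin_encoder = resnet18(weights=None)
        self.scene_encoder = resnet18(weights=None)
        self.cabin_encoder.fc = nn.Identity()   # expose 512-d feature vectors
        self.scene_encoder.fc = nn.Identity()
        # Late fusion: concatenate both feature vectors, then classify readiness.
        self.fusion_head = nn.Sequential(
            nn.Linear(512 + 512, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_readiness_levels),
        )

    def forward(self, cabin_frame: torch.Tensor, scene_frame: torch.Tensor) -> torch.Tensor:
        cabin_feat = self.cabin_encoder(cabin_frame)   # (B, 512)
        scene_feat = self.scene_encoder(scene_frame)   # (B, 512)
        fused = torch.cat([cabin_feat, scene_feat], dim=1)
        return self.fusion_head(fused)                 # readiness logits (B, num_levels)

# Example usage: one 224x224 RGB frame per camera.
model = ReadinessFusionNet()
cabin = torch.randn(1, 3, 224, 224)
scene = torch.randn(1, 3, 224, 224)
logits = model(cabin, scene)
```

Late fusion of per-view features is only one possible design choice; temporal models or attention-based fusion over video sequences would be natural extensions for tracking readiness over time.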
This presentation will highlight recent academic research that goes beyond the conventional monitoring techniques used in industry and offers insights for enhancing the safety and efficiency of automated vehicles.
Key topics you will hear about: