To be held at IJCAI 2025, 16th-22nd August 2025, Montreal, Canada & Guangzhou, China
We are jointly holding the first Workshop & Challenge on 4D Micro-Expression Recognition for Mind Reading (4DMR) at IJCAI 2025, 16th-22nd August 2025.
We warmly welcome your contribution and participation!
March 23: The website of the 4DMR workshop & challenge is under construction...
April 8: The Kaggle website of the 4DMR challenge is available; the training & resource data will be released in a few days.
The 1st 4DMR Workshop & Challenge explores the application of 4D technologies in facial expression analysis, and will be held at IJCAI 2025, 16th-22nd August 2025, Montreal, Canada & Guangzhou, China.
Humans display a vast array of emotional and cognitive states. The ability to interpret these states, often referred to as mind reading, is unparalleled in the animal kingdom and is fundamental to human social interaction and communication. A key component of mind reading is facial expression, which accounts for 55% of how we understand others' feelings and attitudes, playing a vital role in conveying essential information about mental states.
Micro-expressions (MEs) are a special form of facial expression that may occur when people try to hide their true feelings. Because 4D analysis (3D mesh + temporal changes) captures both spatial depth and motion over time, it is particularly well suited to detecting these fleeting expressions, and it remains robust to variations in lighting, pose, and noisy environments. Despite its promise, however, 4D facial expression research still faces challenges that limit its progress.
This workshop aims to explore the application of 4D technologies in facial expression analysis. It will feature the inaugural 4D micro-expression recognition challenge to propel the field forward and provide a platform for researchers to benchmark their methodologies. The workshop will delve into cutting-edge techniques for both macro- and micro-expression recognition, discuss the implications of these methodologies for global communication and AI systems, and highlight practical applications in domains such as security, healthcare, and customer service. Interactive sessions with leading experts will foster deeper insights into how 4D facial expression analysis can revolutionize our understanding of human emotions and cognitive states.
Fig. 1. 4D micro-expressions (3D mesh + temporal changes) examples.
To date, while extensive research has been conducted on facial expressions, the advent of 4D facial expression analysis marks a transformative leap in the field. By capturing the temporal evolution of expressions in three-dimensional space, 4D analysis reveals the intricate dynamics of facial muscle movements over time. Unlike 2D and 3D methods, 4D analysis (3D mesh + temporal changes) excels at detecting fleeting micro-expressions (Figure 1), which are brief, involuntary displays of hidden emotions, by incorporating multiple views and temporal information for richer and more precise data. 4D information can be leveraged to enhance the accuracy and robustness of facial expression analysis, effectively addressing challenges such as variations in lighting, pose, and noisy environments, making it ideal for real-world applications.

Despite its promise, 4D facial expression research faces challenges that limit its progress. The lack of diverse and realistic datasets, particularly for spontaneous micro-expressions, constrains its applicability to practical scenarios. Moreover, the computational demands of processing the complex temporal and spatial data inherent in 4D analysis pose significant technical challenges. Existing methodologies often struggle to capture rapid and subtle micro-expressions and to adapt to real-world conditions such as occlusions, pose variations, and noisy backgrounds. Advancing the field requires innovative algorithms, efficient computational techniques, and large-scale datasets to bridge these gaps, enabling applications in healthcare, security, and education.
Paper submission is open.
Note: Each paper must be presented on-site by an author/co-author at the conference.
Micro-expressions (MEs) are subtle, rapid, and involuntary facial movements that often occur in high-stakes scenarios or when individuals attempt to gain advantages or conceal their true emotions. Due to their extremely short duration and low intensity, MEs are difficult to detect and demand high-precision facial data. This challenge leverages the power of 4D facial analysis—capturing the temporal evolution of facial expressions in 3D space—to uncover the complex dynamics of facial muscle movements over time. Unlike traditional 2D or static 3D approaches, 4D analysis (3D mesh + temporal sequence) excels at identifying fleeting, involuntary micro-expressions by incorporating both spatial depth and motion cues. This multi-view, temporal information enriches the data and significantly improves recognition accuracy and robustness.
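To make the "3D mesh + temporal sequence" idea concrete, here is a minimal sketch, not part of the official challenge toolkit, of how one might quantify frame-to-frame facial motion. It assumes the 4D data are registered face meshes with a consistent vertex order, stored as a NumPy array of shape (T, V, 3); the function name and data layout are illustrative only.

```python
import numpy as np

def motion_magnitude(mesh_seq):
    """Per-frame mean vertex displacement for a 4D face sequence.

    mesh_seq: array of shape (T, V, 3) -- T frames of a registered
    3D face mesh with V vertices (all frames share vertex order).
    Returns an array of length T-1 with the mean Euclidean
    displacement of the vertices between consecutive frames.
    """
    deltas = np.diff(mesh_seq, axis=0)            # (T-1, V, 3)
    per_vertex = np.linalg.norm(deltas, axis=2)   # (T-1, V)
    return per_vertex.mean(axis=1)                # (T-1,)

# Toy example: 5 frames, 4 vertices, with a brief low-intensity
# "twitch" at frame 2, loosely mimicking a micro-expression.
seq = np.zeros((5, 4, 3))
seq[2] += 0.01
print(motion_magnitude(seq))  # peaks around the twitch: ~[0, 0.0173, 0.0173, 0]
```

Even this crude motion signal illustrates why temporal 3D data matters: a spike in vertex displacement localizes the short, low-amplitude movement that 2D appearance cues can easily miss.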
This challenge will be hosted on Kaggle. Instructions and data will be shared on the Kaggle website, where participants will submit their results and be ranked. The top 3 teams will be awarded certificates, provided they submit papers and present them at the workshop.
Contact Information: