VER: Virtual Emotion Reader

In this project, VicarVision, Noldus IT and NIZO food research are developing an innovative method for measuring emotions while people eat. A great deal of consumer research aims to identify which products will be most successful, and tools for understanding how and why consumers make their choices are critical to making that process more effective. This project combines state-of-the-art measurement systems with virtual and real-life situations to produce optimal estimates of product appreciation.

Measuring Emotions While Eating

FaceReader is a powerful tool for quantifying affective responses to a wide range of stimuli. An important obstacle to measuring facial expressions while people are eating is that the food or beverage covers part of the face. Facial expression analysis under varying head poses and under occlusions caused by eating and drinking is a challenging task, and existing facial analysis tools mostly suffer reduced accuracy under such conditions because fewer facial regions remain visible. In this project, VicarVision will tackle this problem by adapting its existing deep learning framework to the task. Additional visual data collected during the project will make it possible to train deep networks that model faces accurately even under occlusion and non-frontal head poses. Other relevant features to be developed include classifiers for chewing and for taking a bite. It is of great importance to researchers to provide a natural user experience from the participants' point of view, so that reliable information about consumer behaviour and preferences can be captured.
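To give a rough idea of what a chewing classifier might look at, the sketch below is a minimal, hypothetical heuristic: it treats chewing as roughly periodic vertical jaw motion and estimates the oscillation frequency of a jaw-landmark signal from its zero-crossings. The function names, the 0.5 to 2.5 Hz chewing band, and the landmark input are all illustrative assumptions; the project's actual deep-learning classifiers are not described at this level of detail.

```python
import numpy as np

def oscillation_freq(jaw_y: np.ndarray, fps: float = 30.0) -> float:
    """Estimate the dominant oscillation frequency (Hz) of a jaw-landmark
    vertical-position signal via zero-crossing counting.

    Illustrative heuristic only, not the project's actual method."""
    detrended = jaw_y - jaw_y.mean()
    # Each full oscillation produces two zero-crossings of the detrended signal.
    crossings = np.sum(np.abs(np.diff(np.sign(detrended))) > 0)
    duration_s = len(jaw_y) / fps
    return crossings / (2.0 * duration_s)

def is_chewing(jaw_y, fps: float = 30.0, band=(0.5, 2.5)) -> bool:
    """Classify a window of jaw motion as chewing if its oscillation
    frequency falls in an assumed chewing band (here 0.5-2.5 Hz)."""
    freq = oscillation_freq(np.asarray(jaw_y, dtype=float), fps)
    return band[0] <= freq <= band[1]

# Example: a simulated 1.5 Hz jaw oscillation over 3 seconds at 30 fps
# is classified as chewing; a static jaw position is not.
t = np.arange(90) / 30.0
print(is_chewing(np.sin(2 * np.pi * 1.5 * t)))  # periodic motion
print(is_chewing(np.full(90, 0.2)))             # no motion
```

A real system would of course work on landmark trajectories extracted from video and would use learned features rather than a fixed frequency band, but the sketch shows the kind of temporal signal a chewing or bite detector can exploit.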

This project is funded by the partners and by the Dutch government under its MIT scheme, which is designed to stimulate innovation in small and medium companies.