We have surpassed the previous version again and are very satisfied with the new FaceReader 9 release. FaceReader 9 has improved deep-learning-based modelling capabilities for higher-quality and faster analysis. It features a completely new project analysis module that makes analyzing your results much easier. There are also new features and outputs, such as gaze direction, head position, heart rate variability, and operators in the custom expression module.
AI innovations are one of the most relevant topics of the 21st century. AI is everywhere in our everyday life, but to many people this presence is very abstract. Therefore, the Technological Museum of Vienna has created an exhibition on robotics and AI to help dispel the myths surrounding them. The museum wants to give visitors a transparent look at the utopias and hysterias surrounding humanoid robots and autonomous systems. The exhibition allows visitors to dive into the fascinating algorithms of artificial intelligence. We were happy to collaborate on one of the exhibits: the Mirror Cube, where people can experience what it is like to “become data”.
Last year we received an innovation grant from INNOLABS for our project H2A2 – A Healthy Heart with Automated Assistance – to create an unobtrusive health monitoring tool. With an innovative technique called remote photoplethysmography (remote PPG), heart rate can be detected from the face. This functionality is already available in FaceReader. Since this technique requires high-quality recordings, we wanted to test whether it is also accurate when the camera of a mobile device is used. Together with our partner PLUX, a Portuguese company specializing in advanced biosignals monitoring platforms, and the Portuguese telecommunications company IT, we collected physiological ground truth data and video recordings from a tablet. These data allow us to validate heart rate assessment and emotion classification on a mobile device.
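To give a feel for how remote PPG works: subtle, pulse-synchronous color changes in facial skin can be tracked over time, and the dominant frequency of that signal corresponds to the heart rate. The sketch below is purely illustrative and is not FaceReader's implementation; the function name, filter settings, and the idea of using a per-frame mean green-channel value from a face region are our own simplifying assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(green_means, fps):
    """Estimate heart rate (bpm) from a per-frame mean green-channel
    signal taken from a face region of interest.

    A minimal remote-PPG sketch: detrend, band-pass to the plausible
    pulse range, then read off the dominant spectral frequency.
    """
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()  # remove the DC component (average skin tone)
    # Band-pass to roughly 42-240 bpm (0.7-4.0 Hz).
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    x = filtfilt(b, a, x)
    # The dominant frequency of the filtered signal is the pulse rate.
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0  # Hz -> beats per minute
```

On a synthetic 30 fps signal containing a 1.2 Hz pulse component plus noise, this recovers roughly 72 bpm; on real mobile-camera footage, compression artifacts and motion make the problem considerably harder, which is exactly what the collected ground truth data helps quantify.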
We are happy to announce the release of FaceReader 8, perhaps the most ambitious and elaborate release so far. It is now possible to measure expressions of children under the age of 2 (Baby FaceReader), to record audio and make infrared recordings, to measure consumption behavior, to analyze left and right action units separately, and to create your own expressions. For a more complete overview, see our partner’s ‘what’s new’ page. In this blog post, we take a closer look at the ‘create your own expression’ module.
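The idea behind the module is that an expression can be defined as a combination of action-unit intensities joined by operators. As a hedged illustration only — the expression name, the chosen action units, and the code below are our own example, not FaceReader's actual syntax or API — a custom "open-mouth smile" could be expressed like this:

```python
def open_mouth_smile(au):
    """Hypothetical custom expression built from action-unit
    intensities (each scaled 0..1).

    AU12 = lip corner puller, AU25 = lips part. An AND-like operator
    requires both units to be active, so we take the minimum of the
    two intensities as the expression's intensity.
    """
    return min(au.get("AU12", 0.0), au.get("AU25", 0.0))

# A strong lip-corner pull with a moderately open mouth yields a
# moderate expression intensity, limited by the weaker component.
intensity = open_mouth_smile({"AU12": 0.8, "AU25": 0.6})  # -> 0.6
```

Other operators (maximum, weighted averages, thresholds) would let you loosen or tighten such a definition in the same spirit.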
In many futuristic movies, you see robots performing countless day-to-day tasks. Well… the future is here (almost)! For a project funded by COMMIT, we helped create a robot receptionist named R3D3 (Rolling Receptionist Robot with Double Dutch Dialogue). The aim of this project was to create a combination of a virtual human and a robot capable of verbal and non-verbal interactions with humans. Together with University of Twente’s HMI and RAM groups, we succeeded in building a robot platform with the technical capabilities to realize such interactions.
The R3D3 prototype can drive around, adjust its height, and carries a tablet with a virtual human face. The robot includes technology for speech recognition and speech production, and has FaceReader-based computer vision techniques that can recognize gender, age, and emotions. In addition, the virtual avatar on the tablet can interact with people. Here we report the results of three pilot studies, carried out to evaluate the performance of the robot and investigate how people reacted to it. Each pilot tested a different target population: shop visitors, police personnel, and children.