Project Multimodal Interaction
(don't) make me laugh
Can a mirror make you laugh and then keep you laughing? The answer is yes, if you add the right, sensitive, multimodal interface. MultimediaN proves this with the Affective Mirror, developed at the worktable of the Multimodal Interaction project.
The Affective Mirror is an interactive stand-alone demo. The user enters a cabin and looks at a display (mirror). Hardware behind the mirror allows it to ‘look back’. Based on voice recognition and facial expression analysis, the image changes. This in turn affects the user’s emotions, to which the mirror reacts again.
The Affective Mirror is a multimodal adaptive user interface that adjusts itself to the emotions the user shows. It is a data processing system that senses, interprets and influences (projects) emotions in real time. At present, the Affective Mirror reacts to voice and facial expressions; in the future it should also react to gestures and posture.
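Sketched in code, such a sense-interpret-project loop could look as follows. This is only a minimal illustration under assumed interfaces: the class and method names, the valence scores, and the fusion weights are hypothetical stand-ins, not the actual MultimediaN implementation.

import random
import time


class StubSensor:
    """Stand-in for a camera- or microphone-based emotion analyzer.

    Returns a valence score in [-1, 1]: negative for sad, positive for happy.
    """

    def read_valence(self):
        return random.uniform(-1.0, 1.0)


class StubDisplay:
    """Stand-in for the mirror display; a real version would warp the
    live camera image instead of printing a number."""

    def render(self, distortion):
        print(f"mirror image distortion: {distortion:+.2f}")


class AffectiveMirror:
    """Closed loop: sense the user, estimate emotion, adapt the image."""

    def __init__(self, face_sensor, voice_sensor, display):
        self.face_sensor = face_sensor
        self.voice_sensor = voice_sensor
        self.display = display

    def estimate_emotion(self):
        # Fuse the two modalities into one estimate; the 0.6/0.4
        # weights are arbitrary placeholders.
        face = self.face_sensor.read_valence()
        voice = self.voice_sensor.read_valence()
        return 0.6 * face + 0.4 * voice

    def run(self, steps=10, period_s=0.1):
        for _ in range(steps):
            valence = self.estimate_emotion()
            # Project an emotion back at the user: the rendered image
            # reacts to the user's state, which in turn changes that
            # state, closing the feedback loop the text describes.
            self.display.render(distortion=valence)
            time.sleep(period_s)


if __name__ == "__main__":
    mirror = AffectiveMirror(StubSensor(), StubSensor(), StubDisplay())
    mirror.run()

The design point the sketch tries to capture is the feedback loop itself: the mirror does not merely classify emotions, it also projects one back, so its output becomes part of its next input.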
The Golden Demo is built on technology that tracks a person’s emotional state and feeds it back to the person him- or herself, or to others such as caretakers or family members. The potential applications are numerous: besides games and health care, possibilities can be found in, for example, communication skills training or call centers.
The Emotional Analyzer (N1)
Video Search Engine (N1)
Affective Mirror (N2)
Exercise in Immersion 4 (N2)
StreetTivo (N3 and N5)
The Investigator's Dashboard (N6)
The Surveillance Dashboard (N6)
On The Move (N6 and N9MI)
Concert Video Browser (N7 and N9MI)
Cultural Search Engine (N9C)