The Emotional-Imaging Composer is a biofeedback-driven multimedia emotional-imaging generator. A wireless finger sensor captures a dynamic, realtime emotional profile, reading physiological signals to map a person’s emotional state. These signals are then processed into a fluid collage of aural and visual images that, when projected into an environment created to receive it, becomes a rich, external manifestation of the person’s internal emotional experience.
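The signal chain described above can be sketched in miniature. This is a hedged illustration only: the actual sensor readings, signal ranges, and mappings of the Composer are not specified in this document, so the heart-rate and skin-conductance inputs, the arousal/valence reduction, and the audiovisual parameters below are all hypothetical assumptions.

```python
# Illustrative sketch of a biofeedback -> audiovisual mapping pipeline.
# ASSUMPTIONS: the finger sensor reports heart rate (BPM) and skin
# conductance (microsiemens); emotion is reduced to a 2-D arousal/valence
# profile. None of these specifics come from the Composer itself.

def to_unit(value, low, high):
    """Clamp a raw reading into the 0.0-1.0 range."""
    return max(0.0, min(1.0, (value - low) / (high - low)))

def emotion_profile(heart_rate_bpm, skin_conductance_us):
    """Reduce two physiological signals to an arousal/valence estimate.

    Arousal rises with both heart rate and skin conductance; valence is a
    placeholder derived from heart rate alone, purely for illustration.
    """
    arousal = (0.5 * to_unit(heart_rate_bpm, 50, 140)
               + 0.5 * to_unit(skin_conductance_us, 1, 20))
    valence = 1.0 - to_unit(heart_rate_bpm, 50, 140)  # hypothetical stand-in
    return {"arousal": arousal, "valence": valence}

def audiovisual_params(profile):
    """Map the emotional profile onto simple aural/visual controls."""
    return {
        "hue_degrees": 240 * (1.0 - profile["valence"]),  # calm blue -> agitated red
        "tempo_bpm": 60 + 80 * profile["arousal"],        # faster with arousal
        "brightness": profile["arousal"],
    }

profile = emotion_profile(heart_rate_bpm=95, skin_conductance_us=8.0)
params = audiovisual_params(profile)
```

In a live system these parameters would drive continuously updating sound and imagery rather than a one-shot mapping; smoothing the profile over time would reflect the fluid, kinetic quality of emotion the text describes.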
Because this interface opens an immersive realm of emotional awareness accessible only at the nexus of art and science, we find ourselves crossing a threshold into a world that we will largely define.
Emotional dynamics manifest throughout the body/mind organism in specific yet fluid signal combinations, and the aural and visual images generated reflect the kinetic, sometimes chaotic, nature of emotion. But what are these images?
Our task is to develop what might be called the semiology of emotion, a mediatic vocabulary – the signs, symbols and syntax – that illuminates the otherwise invisible fabric of emotion connecting us all. What does the energy radiated by emotion look like and how does it move? What does someone’s dark mood sound like? More importantly, what does it feel like? How can we express emotional energy so that it is felt rather than indicated?
How can we “make sense” out of emotion?
Live performances offer opportunities for people to experience, directly and unmistakably, the effect their emotions have on another person. The purpose of this research is to discover a mediatic emotional language that will serve to heighten the stakes between performer and public. How can we render a performer emotionally transparent, available to bond with an audience on a still more committed, even ecstatic, plane?
If additional windows can be opened into a performer’s subjective experience, myriad new possibilities open up in the realm of performance, theatre and installation art.
In an environment designed to project an array of images and sound, a performer’s emotional state can be put entirely on display. Her responses to anything and anyone around her are instantly manifested. Say “Boo!”, and the nervous disruption she feels will be obvious for all to see.
Further, audience members’ heightened awareness of the effect they have on the performer implicates them directly in the feedback chain. They, too, begin to modulate themselves in response to the feedback they see generated all around them. The instrument then becomes, in a very real sense, something that performer and public play together.
The purpose of this research is to use the capabilities of the Emotional-Imaging Composer to create highly enriched environments that correspond to the realtime emotional experience of people with Autism Spectrum Disorders. It is hypothesized that if the environment is meaningfully responsive to the smallest nuance of feeling, it will ultimately be recognized by them as evidence of their emotional affect. It is hoped that experience with the Composer will open up new possibilities for awakening emotional recognition and empathy in some individuals.
As part of this investigation we wish to work with both clinicians and artists who are familiar with ASD to develop an ever-expanding bank of appropriate environments. We will also investigate having those on the Spectrum create their own subjective environments.
- Realtime facial expression recognition and mirroring
- Automatic facial expression recognition while speaking