CHAMELEON: Methodology

The project will use cross-disciplinary research methods, extending the practice of each collaborator. For Gonsalves, the work explores a more sculptural installation, pushing new sensing techniques towards more empathic interaction scenarios. For Frith and Critchley, it physically illustrates and tests hypotheses; for Picard and El Kaliouby, it adapts and extends emotion-recognition software and implements it in a more engaging and creative context.

Over the two-year production period, the artist will continue to interact with and inform the research groups at the WDIN and MIT by sharing ideas, presenting work in progress for critical evaluation and promoting creative concepts within both centres. The artist will work with SCAN to continually exhibit work in progress on CHAMELEON within the centres and externally, allowing constant feedback from participants to shape the final work while also building an audience for it. SCAN, the artist and the mentors will present the project at conferences, festivals and exhibitions. An active research journal and video blog will be kept to document the process, to invite contributions from specific communities, and to record the collaborative goals, difficulties and achievements of the project. A final exhibition throughout the UK will be curated by SCAN.

The methodology for CHAMELEON involves an integrative cross-disciplinary process:

RESEARCH METHODOLOGY
Emotions are part of our everyday life, fundamental to human interaction. Emotions extend beyond the essentially private nature of subjective feelings to encompass changes in behaviour (expression) and physiology (arousal) (William James 1894). The ability to read emotions in both others and ourselves is central to empathy and social understanding, informing group members of each other's motivations and potential actions. In cohesive social interactions, we are highly attuned to subtle and covert emotional signals, and our behaviours often mirror each other in minute detail. Through this unconscious mimicry, we forge a bond with each other long before we utter a word. Research indicates that mimicking another's face elicits empathy and the corresponding emotion (Berstein et al., 2000). For example, if we mimic the facial expression of happiness, we actually begin to feel happy (Hatfield 1994). These emotions and this body language therefore spread through social collectives, a process termed emotional contagion (Hatfield, Cacioppo, & Rapson, 1994). Furthermore, as we infect each other with our emotions, behavioural patterns and hierarchical and social power structures emerge. CHAMELEON is validated by experimental studies that tell us: 1) one person's emotional state will influence another's; 2) empathy involves the sharing of emotional states; 3) different emotional expressions evoke different emotional arousals; and 4) feedback from the different patterns of bodily arousal caused by different emotional expressions influences the way we feel, think and behave. Using this foundation of empirical research, CHAMELEON will reveal the often unconscious responses, predictions and decodings of human interaction, making us more aware of how non-verbal communication guides the formation of dynamic relationships within social groups.
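
A purely illustrative sketch (an assumption of this document, not code from the cited studies) of the first of these findings — one person's emotional state influencing another's — expressed as a simple coupled update:

```python
# Illustrative toy model of emotional contagion (not the project's code):
# each agent's emotional state drifts toward the states it observes,
# weighted by a susceptibility ("mimicry") coefficient.

def contagion_step(states, susceptibility=0.2):
    """One update: every agent moves toward the mean state it observes."""
    n = len(states)
    new_states = []
    for i, s in enumerate(states):
        others = [states[j] for j in range(n) if j != i]
        observed_mean = sum(others) / len(others)
        # Feedback loop: perceiving others' arousal shifts one's own state.
        new_states.append(s + susceptibility * (observed_mean - s))
    return new_states

# One "happy" agent (1.0) gradually lifts two neutral agents (0.0).
states = [1.0, 0.0, 0.0]
for _ in range(10):
    states = contagion_step(states)
print(states)  # all states converge toward a shared value
```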

Professor Chris Frith's social neuroscience research group has established a new scientific discipline (neural hermeneutics) concerned with the neural basis of social interaction. In particular, the group is attempting to delineate the mechanisms underlying the human ability to share representations of the world. There is scientific evidence that mirror neurons provide a foundation for understanding the actions of others, and thus a possible physiological basis for empathy. Mirror neurons are neurons that discharge when an individual performs an action, as well as when he or she observes a similar action performed by another individual (Rizzolatti 2005). Frith believes that the brain's mirror system is one of two components that make communication possible. The first is an automatic form of priming (sometimes referred to as contagion or empathy), whereby our representations of the world become aligned with those of the person with whom we are interacting. The second is a form of forward modelling, analogous to that used in the control of our own actions. Such generative models enable us to predict the actions of others and use prediction errors to correct and refine our representations of the mental states of the person with whom we are interacting. Each screen (each emotional expression) of CHAMELEON will be triggered by a complex and contextual coding system based on Frith's research. This will ultimately lead to the coding of more complex 'forward modelling scenarios' for each individual screen.
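
A minimal sketch of how a screen's forward-modelling loop might look, assuming a one-dimensional "expression intensity" state and hypothetical names throughout (an illustrative sketch, not Frith's model or the project's actual coding system):

```python
# Minimal forward-modelling loop for one screen (illustrative sketch).
# The screen predicts the participant's next expression, observes the
# actual expression, and uses the prediction error to refine its model.

class ForwardModel:
    def __init__(self, learning_rate=0.1):
        self.estimate = 0.0          # belief about the participant's state
        self.learning_rate = learning_rate

    def predict(self):
        # Generative model: here, trivially, "the state persists".
        return self.estimate

    def update(self, observed):
        error = observed - self.predict()             # prediction error
        self.estimate += self.learning_rate * error   # refine the representation
        return error

model = ForwardModel()
for observed_intensity in [0.2, 0.5, 0.9, 0.8]:  # e.g. smile intensity per frame
    err = model.update(observed_intensity)
    # A large error could trigger the screen to switch expression sequence.
    print(f"estimate={model.estimate:.2f}, error={err:.2f}")
```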

Hugo Critchley's research focuses on how autonomic bodily arousal responses are generated and represented within the human brain to influence our emotional behaviour. Recently his group demonstrated an automatic communication ('mirroring') of these physiological responses across individuals in specific emotional contexts (Harrison et al., 2006, 'Pupillary contagion', Social Cognitive and Affective Neuroscience 1:1-4). The research investigated how observed pupil size modulates our perception of others' emotional expressions, and examined the central mechanisms modulated by incidental perception of pupil size in emotional facial expressions. For example, diminishing pupil size enhances ratings of emotional intensity and valence for sad, but not happy, angry or neutral facial expressions. Concurrently, the observed pupil size was mirrored by the observers' own pupil size. More recently his group has explored how one's own emotional expression interacts subconsciously with those of others to influence behaviour and feelings (Lee et al. 2007). CHAMELEON will exploit this research to make real-time adjustments to the image on each screen (pupil size, blushing, responses that challenge cultural codes, etc.) in order to contextualise, and also to provoke, emotional expressions and subjective feelings within the participant.
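
An illustrative sketch of the kind of real-time adjustment this implies — the on-screen face's pupil diameter tracking the observer's, with smoothing to keep the change gradual and largely subliminal (the sensor values and function names are placeholder assumptions):

```python
# Illustrative real-time pupil "mirroring" (sketch; the eye-tracker feed
# is a placeholder; no particular sensor API is implied).

def smooth_pupil(rendered_mm, observed_mm, alpha=0.05):
    """Move the avatar's pupil diameter a small step toward the observer's,
    so the mirroring stays gradual and largely subliminal."""
    return rendered_mm + alpha * (observed_mm - rendered_mm)

rendered = 4.0                          # avatar's current pupil diameter (mm)
for observed in [4.0, 3.6, 3.2, 3.0]:   # observer's pupils constricting (e.g. sadness)
    rendered = smooth_pupil(rendered, observed)
    print(f"render avatar pupil at {rendered:.2f} mm")
```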

TECHNICAL METHODOLOGY
The main aim of the Affective Computing Group at MIT's Media Lab is to give machines skills of emotional intelligence, including the ability to recognize, model and understand human emotion, to appropriately communicate emotion, and to respond to it effectively. The artist will work with Prof Rosalind Picard and Dr Rana El Kaliouby to further develop and adapt existing real-time computational techniques developed by the Media Lab. Ubiquitous cameras installed around the room, or wearable cameras, will automatically assess the facial expressions and head gestures of the viewer. This will include the development of multi-modal pattern recognition algorithms to identify co-occurring, asynchronous expressions. Each visual screen will be intelligent, aware of and responsive to the 'feelings' of its neighbouring virtual expression. Each screen "...needs to have goals, emotions, motivations…a background, a culture, and history" (Bandler 2005).
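
A minimal sketch of the fusion problem this describes — facial expressions and head gestures arriving as asynchronous, timestamped events that must be matched within a tolerance window to detect a co-occurring pattern (the event names and window size are illustrative assumptions, not the Media Lab's algorithms):

```python
# Sketch: detect co-occurring, asynchronous multi-modal events
# by matching timestamped observations within a tolerance window.

def co_occurrences(face_events, head_events, window_s=1.5):
    """Pair face and head events whose timestamps fall within window_s."""
    pairs = []
    for t_face, face in face_events:
        for t_head, head in head_events:
            if abs(t_face - t_head) <= window_s:
                pairs.append((face, head))
    return pairs

face_events = [(10.2, "smile"), (14.8, "brow_raise")]
head_events = [(10.9, "nod"), (20.1, "head_shake")]
print(co_occurrences(face_events, head_events))  # [('smile', 'nod')]
```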

•The artist will work with researchers at the MIT affective computing lab to extend existing facial recognition technology. Focus will be on machine learning techniques and classifier methods, including support vector machines (SVMs) and Bayesian networks, to select between and drive sequences of audiovisual narratives that provoke and respond to changes in the participant's emotional and physiological state (a minimal classifier sketch follows this list). Speed, acceleration and dimensional emotional complexity will be written into the coding system. The existing hardware/software will be extended by the Media Lab, making it more beautiful, naturalistic and flexible enough to work in a range of spaces. Final iterations of hardware will be outsourced for time management and quality control.

•The Media Lab and the artist will work with researchers at the UCL Interaction Centre and the Creativity and Cognition Studios (UTS) (see support letters) to optimize the presentation of audiovisual stimuli within the immersive environment and to record the individualized dynamics of viewers' interactions with CHAMELEON. Four participant-testing exhibitions (curated and promoted by SCAN), testing the software, the audiovisual database and the interaction scenarios, will shape the project. The agenda is to create a flexible, robust and significant project that engages a wide range of people, works in a range of spaces and may be adapted in the future for other uses in the entertainment and medical industries.
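
A minimal sketch of the SVM-driven selection described in the first bullet above, using scikit-learn on placeholder feature vectors (the features, labels and clip mapping are illustrative assumptions, not the project's trained models):

```python
# Sketch: an SVM classifies the participant's expression features and the
# predicted emotion selects the next audiovisual sequence (illustrative only).
from sklearn.svm import SVC

# Placeholder training data: rows are e.g. [mouth_curvature, brow_height, head_pitch]
X = [[0.8, 0.1, 0.0], [0.7, 0.2, 0.1],        # "happy" examples
     [-0.6, -0.3, -0.2], [-0.5, -0.4, -0.1]]  # "sad" examples
y = ["happy", "happy", "sad", "sad"]

clf = SVC().fit(X, y)

next_clip = {"happy": "clip_playful.mov", "sad": "clip_consoling.mov"}

features = [0.6, 0.15, 0.05]              # live features from the camera
emotion = clf.predict([features])[0]
print(emotion, "->", next_clip[emotion])  # drives the next screen sequence
```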

CREATIVE METHODOLOGY
The artist has wide experience of working with leaders in psychology, neuroscience and affective computing to research and produce bio-sensitive artworks that respond to the emotions of their audience.

For the artist, the project conceptually emphasizes how art experiences can offer participants a more empathic conduit through which to explore their own vulnerability and, in doing so, reveal and share emotions and build empathy and understanding. The work strongly references the artist Beuys' ideas about the role of artistic practice in sourcing vulnerability in order to connect with others and share and inspire creativity. It leads on from earlier projects: Medulla Intimata, wearable jewellery that monitored and exposed the emotions of the wearer, and the FEEL SERIES, a range of works that tracked and responded to the emotions of viewers, allowing them to become more sensitive to their own psycho-physiology (see support material). While creating, the artist will be informed by similar works. One inspiration is Stelarc's Prosthetic Head, which uses 3D facial expressions and voice recognition to ask the audience to build an empathic dialogue with an avatar representing the artist; there, audience interaction is limited by the use of a keyboard, making the interaction scenario more fragmentary and disembodied. The agenda is to extend this, using a more subtle, enactive, embodied and naturalistic interaction method. Another inspiration is Alexa Wright's and Alf Linney's Alter Ego (2002-2004), an interactive installation that presents audiences with an uncanny digital doppelganger that is re-embodied and interacts in a semi-autonomous fashion. With both Alter Ego and Prosthetic Head, only personal circuits are explored, and the interaction and its coding may lead participants to experience a loss of control and incomprehension. CHAMELEON will constantly adjust to the participant, making real-time readings of facial expressions and non-verbal behaviour with an agenda of building a sense of empathy and understanding with them. The virtual environment this triggers will understand and adjust to the participant's behaviour in order to respond in appropriate (and sometimes inappropriate) ways. The artist will extend her work by focusing on a sculptural implementation of the installation; inspirations here are Mark Hansen and Ben Rubin's Listening Post, and the more sculptural and architectural elements of Olafur Eliasson's works.

In designing the aesthetic of the portraitures of emotional expression that will fill the screens in the space, the artist will draw on Frith's and Critchley's neuroscientific research to enhance the emotional resonance, aesthetic and narrative structures of the creative work. In creating the database, the artist will extend her directing skills, implementing the Stanislavsky method – a systematic approach to training actors to work from the inside outward. The artist will ask the actors to study and experience subjective emotions and feelings, and to manifest them to the camera.

The artist will further interrogate the use of digital media (manipulation of video, sound and photography using software tools such as Final Cut Pro, Pro Tools, After Effects and Photoshop) to best evoke the emotional resonance of the image. The database must work with real-time visual algorithms and effects. Compression will be tested so that the database can be disseminated in real time over networked computers (see the sketch below). Sound databases will be built (reinterpreting scientific sound databases and creating new sounds) with the expertise of a sound production engineer.
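
A back-of-the-envelope sketch of the compression test mentioned above — checking whether a clip's compressed bitrate fits the available network bandwidth for real-time dissemination (all figures are placeholder assumptions):

```python
# Sketch: does a compressed clip fit the available bandwidth in real time?

def streams_in_real_time(file_size_mb, duration_s, bandwidth_mbps):
    """Compare the clip's average bitrate (Mbit/s) with the link capacity."""
    bitrate_mbps = (file_size_mb * 8) / duration_s
    return bitrate_mbps <= bandwidth_mbps

# Placeholder figures: a 90 MB, 60 s clip over a 100 Mbit/s LAN.
print(streams_in_real_time(90, 60, 100))  # True: only 12 Mbit/s needed
```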

•Creation of a high-quality, artistically expressive emotion-recognition database (using photography, Photoshop, sound recording and Pro Tools), applying the Stanislavsky method of probing emotions.
•Production and implementation of more aesthetic and naturalistic subject-monitoring techniques (real-time facial recognition software) for unobtrusive application in an immersive installation environment, including the integration of potential covert biosensory methods (GSR, heart rate, movement); a minimal fusion sketch follows this list.
•Leading the fine integration of scientific methods into a creative output that engages a wide audience.
•Throughout production, user exhibitions will be promoted and curated by SCAN in order to shape the creative database, the architecture of the installation and the technology.
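
A minimal sketch of how the covert biosensory channels in the second bullet might be fused into a single arousal index (the normalisation ranges, weights and smoothing factor are illustrative assumptions, not calibrated values):

```python
# Sketch: fuse GSR and heart rate into one smoothed arousal index (0..1).

def normalise(value, lo, hi):
    """Clamp a raw sensor reading into the 0..1 range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def arousal_index(gsr_microsiemens, heart_rate_bpm, w_gsr=0.5):
    gsr = normalise(gsr_microsiemens, 1.0, 20.0)  # placeholder resting/peak range
    hr = normalise(heart_rate_bpm, 60.0, 120.0)   # placeholder range
    return w_gsr * gsr + (1.0 - w_gsr) * hr

smoothed = 0.0
for gsr, hr in [(4.0, 70), (9.0, 85), (15.0, 100)]:  # rising arousal
    raw = arousal_index(gsr, hr)
    smoothed += 0.3 * (raw - smoothed)  # exponential smoothing against sensor noise
    print(f"arousal={smoothed:.2f}")
```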