Program of the Workshop - September 2nd, 2013
0900-0915 Welcome and Introduction Talk
0915-1040 Session 1
"Context
is Personal"
Ronald
Böck, Stefan Glüg, Ingo Siegert and Andreas Wendemuth.
"Annotation and
Classification of Changes of Involvement in Group Conversation"
1040-1100 Coffee break
1100-1230 Session 2
Masaki Takahashi, Masahide Naemura, Mahito Fujii and Shin'ichi Satoh. "Estimation of attentiveness of people watching TV based on their emotional behaviors"
"Learning Context in Affect Recognition"
1230-1340 Lunch break
1340-1600 Session 3
Ruud Mattheij, Marie Nilsenova and Eric Postma. "Vocal and Facial Imitation of Humans Interacting with Virtual Agents"
"How Context Shapes how People Interpret the Expressions of People and Machines"
Discussion and Closing
1. Workshop Description, Objectives and Relevance
Unconsciously, humans evaluate situations based on environmental and social parameters when recognizing emotions in social interactions. Contextual information, such as the ongoing task, the identity and natural expressiveness of the individual, and the other people involved, helps us interpret and respond to social interactions. Without context, even humans may misunderstand the observed facial, vocal, or bodily behavior. An important related issue that should therefore be addressed in automatic affect recognition is how to take contextual information into account in real-world affect-related applications.
Building upon the success of the first CBAR workshop in 2012, the key aim of the second CBAR workshop (CBAR 2013) is to explore new challenges in automatic context-based audio, visual, body, and/or multimodal affect recognition, together with the challenges, benefits, and drawbacks of integrating context into affect production, interpretation, and recognition. We wish to investigate cutting-edge behavioral studies and methodologies that can be applied to (1) model the social and cognitive theories of context-based social and/or affective interaction, (2) automatically extract contextual information (e.g., through multimodal sensing systems, observation, and behavior models), (3) incorporate contextual information in emotion corpora (e.g., how it ought to be represented and which contextual information is relevant, i.e., whether it is domain specific or not), and (4) integrate context into audiovisual frameworks for affect recognition to improve their performance.
The workshop is relevant to the fields of Affective Computing and Intelligent Interaction since contextual information cannot be discounted in the automatic analysis of human affective behavior. Embedding contextual information, such as culture and environment, gives each interaction a different flavor and makes for an interesting scientific study. Such analyses lead us to consider real-world parameters (e.g., the medical environment) and complexities in developing human-centric systems for affect recognition. The workshop focuses on making affect recognition more robust and deployable in real-world settings (e.g., work, home, school, and health care environments). We solicit papers on behavioral and automatic methodologies that can be applied to extract, interpret, and/or include contextual information in audiovisual affect recognition, and on how such information can improve existing frameworks for human-centered affect recognition.
For its second year, the workshop aims to invite scientists working in the related areas of social signal processing, cognitive behavior, psychology, affective computing, ambient computing and smart environments, and machine learning to share their expertise and achievements in the emerging field of automatic, context-based audio, visual, and/or multimodal affect analysis and recognition.
2. Topics of Interest
We invite regular, position, and application papers on, but not limited to, the following topics:
• Context identification, detection, and interpretation.
- Cognitive and behavioral theories and models
- Cutting-edge context detection models: sensing systems, observation, etc.
• Context-based affect corpora recording and annotation.
- Context-based data collection protocols (natural and/or induced affect)
- Context representation tools
- Domain-specific annotation tools
• Context-based affect recognition.
- Facial expressions
- Vocal expressions
- Body/head language
- (Neuro-)physiological signals
- Other modalities
• Context-based multimodal fusion.
- Asynchrony between the modalities
- Innate priority among the modalities
- Temporal variations in the relative importance of the modalities according to the context
- Cutting-edge context-based fusion tools
• Applications
- Context-aware clinical applications such as depression severity detection, pain monitoring, and autism assessment (e.g. the influence of age, gender, intimate vs. stranger interaction, physician-patient relationship, home vs. hospital environment, etc.)
- Context-based and affect-aware intelligent tutors (e.g. learning profile, personality, assessments, etc.)
- Affect-based human-robot or human-embodied conversational agent interactions
- Other context-based and affect-aware applications
3. Invited Speakers
Jeffrey Cohn, University of Pittsburgh, Pittsburgh, USA
Björn Schuller, Technische Universität München (TUM), Germany
Hamid Aghajan, Stanford University, USA
4. Organizers
Zakia Hammal (zakia_hammal@yahoo.fr)
The Robotics Institute, Carnegie Mellon University
Pittsburgh, USA.
http://www.ri.cmu.edu/
http://www.pitt.edu/~emotion/ZakiaHammal.html
Merlin Teodosia Suarez (merlin.suarez@delasalle.ph)
Center for Empathic Human-Computer Interactions
De La Salle University. Manila, Philippines
http://cehci.dlsu.edu.ph
5. Submission Policy
Submitted manuscripts should not be simultaneously submitted to another conference or workshop.
Each paper will receive at least two reviews. Acceptance will be based on relevance to the workshop, novelty, and technical quality.
The review process is double-blind. Please remove your names and any other information that might violate the double-blind policy.
The paper format must follow the standard IEEE double-column format of single-spaced text in 10-point Times Roman (or a closely resembling font), with 12-point interline spacing. All printed material, including illustrations, must be kept within a print area of 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.5 cm) high.
The papers will be published in the ACII 2013 proceedings by IEEE. The page limit is 6 pages, plus a maximum of 2 additional pages charged at €100 each.
At least one author of each paper must register and attend the workshop to present the paper.
Papers must be submitted via the EasyChair submission link (easychairCBAR2013).
7. Tentative Deadlines
Submission Deadline (EXTENDED): May 17, 2013
Notification of Acceptance: June 11, 2013
Camera Ready: June 21, 2013
Workshop Date: September 2, 2013
8. Program Committee
Anton Nijholt, University of Twente, The Netherlands
Carlos Busso, UT-Dallas, USA
Jeffrey Cohn, University of Pittsburgh, USA
Ursula Hess, Humboldt University, Berlin, Germany
Ginevra Castellano, University of Birmingham, UK
Dirk Heylen, University of Twente, The Netherlands
Jeremy Cooperstock, McGill University, Montreal, Canada
Lionel Prevost, University of the French West Indies and Guiana
Fernando De la Torre, CMU, USA
Louis-Philippe Morency, USC, USA
Nadia Bianchi-Berthouze, University College London, UK
Isabella Poggi, Università Roma Tre, Italy
Aleix Martinez, Ohio State University, USA
Sidney D'Mello, University of Notre Dame, USA
Albert Ali Salah, Bogazici University, Turkey
Rafael Cabredo, De La Salle University, Manila, Philippines
Madelene Sta. Maria, De La Salle University, Manila, Philippines