Computing Reviews
Emotional body language displayed by artificial agents
Beck A., Stevens B., Bard K., Cañamero L. ACM Transactions on Interactive Intelligent Systems 2(1): 1-29, 2012. Type: Article
Date Reviewed: Dec 19 2012

Do Androids Dream of Electric Sheep? [1], a novel by Philip K. Dick published in 1968, describes a future world in which robots--androids virtually indistinguishable from humans--are used as slaves in off-world colonies and regarded by the human populace as mere machinery. Those that escape to Earth are pursued, questioned to determine whether they are in fact human, and, if not, destroyed as flawed machines. To distinguish a human from an android, the interviewers pose questions designed to evoke an emotional response: humans respond emotionally, but androids do not. Although Dick's world (which later became the basis for the movie Blade Runner) is fiction, this paper takes up a concept that beguiles many writers and filmmakers: the ways in which real-life humans interact with their artificial counterparts.

The authors of this paper want to determine how emotional body language can best be generated and displayed by robots and artificial agents so as to be understood by their human counterparts. They first describe in detail various studies going back to the classic 1970 uncanny valley hypothesis, which holds that as the visual representation of an agent approaches, but does not quite reach, human realism, believability drops sharply. More recent studies conflict with some of the uncanny valley findings. Those studies focus mainly on the use of facial expressions; a smaller number deal with body language as the means of interactive communication between robots and humans.

This paper reports on a project to investigate how best to create complex social interaction, based on an affect space, between artificial agents and humans. The authors, from the University of Hertfordshire and the University of Portsmouth, constructed their study to show (1) that agents can display features expressing emotions that are believable and relevant, and (2) that an agent's emotional displays can elicit human responses, indicating that the human is engaged in the interaction. The robot agents were placed into two categories: the first was based on physical realism and appearance; the second, based on behavioral realism, compared character positions and movements to those of real-world humans. The dependent variables for these categories were defined as either ordinary, when close to real-world people, or stylized, as with robots or animated visuals.

The participants consisted of two groups of 20 people each, male and female, between the ages of 20 and 60, mainly from the University of Portsmouth. They were asked to respond to the following questions during the course of the experiments:

(Q1) Does the character type affect the correct identification, strength, believability, and naturalness of the emotional body language displayed?
(Q2) Does the action style affect the correct identification, strength, believability, and naturalness of the emotional body language displayed?
(Q3) Does the frame rate [of the recorded scene] affect the correct identification, strength, believability, and naturalness of the emotional body language displayed?
(Q4) Are personal differences, including emotional intelligence, experience playing video games, or familiarity with animated characters, related to correct identification, strength, believability, and naturalness?

The first study, which compared the emotional body language displayed by humans and robots, indicated that the two were interpreted similarly. While some robots are capable of using facial expressions as part of the interaction, the researchers found that, at the current state of the technology, body language is the best medium for robots to display emotion.

In the second study, a professional actor and a director were used, and the performances were recorded on video. Key poses were taken from the actor's portrayal of ten emotions: anger, disgust, shame, fear, sadness, surprise, relief, happiness, pride, and excitement. Each emotion was performed twice, once naturally and once in a stylized manner similar to animation. The poses extracted from the actor's performances were then used to animate two characters, one natural and one stylized, so that they displayed the same body language; the extracted actions were also programmed into a Nao robot. The human participants in the study were able to correctly interpret emotions from the simulated poses, and they found the actor's emotions to be the most believable. The realistic animated character was found to be less believable than the actor, but more believable than a second, more simplified character.
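The pipeline described here maps key poses captured from the actor onto both animated characters and a physical Nao. As a rough, hypothetical sketch of how a single key pose might be replayed on a Nao using the NAOqi Python SDK's ALMotion proxy: the joint names are real Nao joints, but the robot address, angle values, and timing are invented for illustration and are not the authors' captured data.

```python
# Hypothetical sketch: replaying one extracted key pose on a Nao
# via the NAOqi Python SDK. Angle values are illustrative
# placeholders, not the authors' motion-capture data.
from naoqi import ALProxy

motion = ALProxy("ALMotion", "nao.local", 9559)  # hypothetical robot address
motion.wakeUp()  # enable joint stiffness so the robot can move

# A "sadness-like" key pose: head lowered, arms hanging (invented values).
joints = ["HeadPitch", "LShoulderPitch", "RShoulderPitch",
          "LElbowRoll", "RElbowRoll"]
angles = [0.45, 1.6, 1.6, -0.25, 0.25]  # target angles in radians
times = [1.5] * len(joints)             # reach the pose in 1.5 seconds

# angleInterpolation drives each joint to its target along a smooth curve.
motion.angleInterpolation(joints, angles, times, True)
```

A full display in the authors' style would presumably chain several such key poses per emotion rather than holding a single posture.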

In the third study, an affect space was generated by blending key poses, and the result was validated. In addition, head positions were examined for their ability to convey emotion. While head positions alone were found capable of conveying emotion, the authors determined that, for some emotions, head positions needed to be combined with a related body position. Participants found that a head position combined with a congruent body movement contributed most to believability.
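To make the blending idea concrete, an affect space of this kind can be thought of as weighted interpolation between emotion key poses in joint-angle space. The sketch below is a minimal illustration under that assumption, not the authors' implementation; the emotion names are drawn from the study, but the joint angles are placeholders.

```python
# Sketch of an "affect space" built by blending emotion key poses.
# Each pose maps joint names to angles in radians; values are invented.
KEY_POSES = {
    "sadness": {"HeadPitch": 0.45, "LShoulderPitch": 1.6, "RShoulderPitch": 1.6},
    "pride":   {"HeadPitch": -0.30, "LShoulderPitch": 1.2, "RShoulderPitch": 1.2},
    "anger":   {"HeadPitch": 0.10, "LShoulderPitch": 0.4, "RShoulderPitch": 0.4},
}

def blend(weights):
    """Return the pose at a point in the affect space.

    weights: dict mapping emotion name -> nonnegative weight.
    Weights are normalized to sum to 1, then each joint angle is
    the weighted average of that joint across the key poses.
    """
    total = float(sum(weights.values()))
    joints = next(iter(KEY_POSES.values()))
    return {
        j: sum(w / total * KEY_POSES[e][j] for e, w in weights.items())
        for j in joints
    }

# A pose two-thirds of the way from sadness toward pride.
print(blend({"sadness": 1, "pride": 2}))
```

Note that head position enters naturally here: HeadPitch is blended like any other joint, which fits the authors' finding that head position carries emotional signal but works best alongside a congruent body posture.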

The authors assert that their experiments show, in general, that body language is an appropriate medium for robots to display emotions and, at the same time, that the uncanny valley model is incomplete. Their study, which includes highly detailed examples and accompanying statistical analyses for each segment, confirms that a human viewer can accurately identify emotions displayed by a computer agent or robot using only body language, and that the robot's physical appearance does not affect the identification of the emotion. They also found that stylized emotional body language, like that used in animation, is perceived as more believable than ordinary displays. Finally, they found that to be believable, a character's movements should be consistent with the way it looks.

Designers of interactive media using nonhuman agents will find the study's conclusions both interesting and helpful for developing agents and characters that can produce empathy in their viewers. However, robots as used today in many other areas do not depend on user empathy. The robotic arms that form the spine of a modern automobile production line, for example, need only assemble and paint a new Prius to perfection for the driver to find empathetic satisfaction. And users of a Roomba cannot tell from the face of today's model whether it approves or disapproves of the carpet-cleaning schedule that has been selected.

Reviewer: Bernice Glenn. Review #: CR140765 (1303-0259)
1) Dick, P. K. Do Androids Dream of Electric Sheep? Ballantine Books, New York, NY, 1968.
Categories: Evaluation/Methodology (H.5.2); Human Factors (H.1.2); Software Psychology (H.1.2); Robotics (I.2.9); User Interfaces (H.5.2)
Other reviews under "Evaluation/Methodology":

Computer analysis of user interfaces based on repetition in transcripts of user sessions. Siochi A., Ehrich R. ACM Transactions on Information Systems 9(4): 309-335, 1991. Type: Article. Reviewed: Aug 1 1992

Software by design. Bauersfeld P., M&T Books, New York, NY, 1994. Type: Book (9781558282964). Reviewed: Mar 1 1995

Prototyping for tiny fingers. Rettig M. Communications of the ACM 37(4): 21-27, 1994. Type: Article. Reviewed: Dec 1 1994

more...
