Creating a movie, computer game, or virtual reality scenario with convincing human-like characters poses several challenges, among them the appearance and behavior of autonomous virtual characters (AVCs). In movies and in scripted parts of games, it is feasible (though expensive) for developers and designers to craft and refine the behaviors such characters exhibit. In real-time virtual environments, however, AVCs must act with a good degree of autonomy, following strategies that keep their behavior reasonably close to that of humans.
The authors describe a flexible automatic attention model that directs the eye and head movements of an AVC. It extracts aspects such as proximity, movement, intrinsic object properties, and expected behaviors of objects from the current scene; focuses on specific objects of interest; and generates coordinated eye and head movements for the AVC.
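The pipeline described above — extract per-object cues, pick a focus of attention, then drive gaze toward it — can be illustrated with a toy sketch. The cue names, weights, and linear scoring below are illustrative assumptions, not the authors' actual model:

```python
from dataclasses import dataclass
import math

@dataclass
class SceneObject:
    name: str
    distance: float   # meters from the AVC
    speed: float      # apparent motion, m/s
    salience: float   # intrinsic object interest, 0..1

# Hypothetical cue weights; the published model combines cues differently.
W_PROXIMITY, W_MOVEMENT, W_SALIENCE = 0.4, 0.4, 0.2

def attention_score(obj: SceneObject) -> float:
    """Combine cues into one interest score (simple linear mix)."""
    proximity = 1.0 / (1.0 + obj.distance)  # nearer -> higher
    movement = 1.0 - math.exp(-obj.speed)   # faster -> higher, saturating
    return (W_PROXIMITY * proximity
            + W_MOVEMENT * movement
            + W_SALIENCE * obj.salience)

def focus_of_attention(scene: list) -> SceneObject:
    """Select the object the AVC should direct its gaze toward next."""
    return max(scene, key=attention_score)

scene = [
    SceneObject("wall poster", distance=4.0, speed=0.0, salience=0.3),
    SceneObject("passing avatar", distance=2.0, speed=1.5, salience=0.5),
    SceneObject("chair", distance=1.0, speed=0.0, salience=0.1),
]
print(focus_of_attention(scene).name)  # -> passing avatar
```

Here the moving avatar wins on the combined movement and proximity cues; in the full model, coordinated eye and head rotations would then be generated toward the selected target.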
To assess their model's performance, the authors compared it against the default avatar behavior in the Second Life (SL) virtual environment and against amateur actors playing out simple chat scenarios. In two experiments, about 50 participants gave their subjective opinions of the AVCs. In the comparison against the SL default, both the overall attention and the specific eye and head movements produced by the new model were judged more realistic. In the second comparison, against human actors, participants rated the perceived realism of the automatic attention model's behavior highly.
Such a model may be enough to convey the impression of an attentive student avatar in an SL classroom while the human behind it dozes off, at least until the instructor avatar directs a question at the student. But perhaps a chatbot could then stall the conversation until the human participant is back in action.