Unsocial VR: Faking active listening in social virtual environments

In face-to-face interaction, dropping attention could be considered socially inappropriate. During traditional phone conversations, on the other hand, we do this all the time: we drop attention and start doing something else, like checking our emails or carrying on a whole sign-language-style conversation in parallel to the call. We might even pretend to follow the conversation by producing automatic utterances such as “uh-huh” with the right timing. This study tries to translate this possibility of faking attention to a conversation from telephony to virtual reality. It implements a collaborative virtual environment in which users can press a button to let automatic algorithms take control of their avatar. Meanwhile, they can do whatever they wish, while their avatar continues to present socially appropriate responses towards the other users. Three mechanisms support the automated behaviour: baseline recorded movement, automated head nods, and always looking at the speaker. An experiment evaluated the credibility of the automated behaviour. Four groups of three participants each were asked to discuss an ethical dilemma in the virtual environment. A scoring mechanism provided incentives to use the faked behaviour and to try to detect who was currently faking. The results show a surprisingly low ability to tell real and faked behaviour apart, with most faking periods going undetected. This ability was higher, however, when participants were familiar with each other. The results suggest that the proposed mechanisms can be implemented in future communication technologies, and highlight the advantages of using virtual reality for social cognition research.
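To make the three mechanisms concrete, here is a minimal per-frame sketch of how they might be combined in an avatar autopilot. It is only an illustration of the idea, not the project's actual implementation: the `AvatarAutopilot` class, the avatar interface (`apply_body_pose`, `add_head_pitch`, `look_at`), and all timing parameters are hypothetical assumptions.

```python
import math
import random

class StubAvatar:
    """Minimal stand-in for an avatar rig, for illustration only."""
    def apply_body_pose(self, pose): pass
    def add_head_pitch(self, pitch): pass
    def look_at(self, position): pass

class AvatarAutopilot:
    """Hypothetical sketch of the three automated behaviours: baseline
    recorded movement, automated head nods, and looking at the speaker."""

    def __init__(self, baseline_frames, nod_interval=(2.0, 5.0)):
        self.baseline_frames = baseline_frames   # pre-recorded idle motion, looped
        self.frame_index = 0
        self.nod_interval = nod_interval         # seconds between nods (min, max)
        self.next_nod_in = random.uniform(*nod_interval)
        self.nod_phase = None                    # None = not currently nodding

    def update(self, dt, avatar, speaker_position):
        # 1. Baseline recorded movement: loop a pre-recorded clip of the
        #    user's own idle body motion.
        pose = self.baseline_frames[self.frame_index % len(self.baseline_frames)]
        avatar.apply_body_pose(pose)
        self.frame_index += 1

        # 2. Automated head nods: trigger a short nod at randomised intervals.
        self.next_nod_in -= dt
        if self.nod_phase is None and self.next_nod_in <= 0:
            self.nod_phase = 0.0
        if self.nod_phase is not None:
            # One sine cycle of ~0.6 s, pitching the head down and back up.
            avatar.add_head_pitch(math.sin(self.nod_phase / 0.6 * 2 * math.pi) * 0.15)
            self.nod_phase += dt
            if self.nod_phase >= 0.6:
                self.nod_phase = None
                self.next_nod_in = random.uniform(*self.nod_interval)

        # 3. Always look at the speaker: orient the head toward whoever
        #    is currently talking.
        avatar.look_at(speaker_position)

# Example usage: drive the autopilot for ~10 seconds at 60 fps.
autopilot = AvatarAutopilot(baseline_frames=[{"pose": i} for i in range(90)])
avatar = StubAvatar()
for _ in range(600):
    autopilot.update(1 / 60, avatar, speaker_position=(0.0, 1.7, 2.0))
```

In a real system the speaker position would come from whichever client is currently producing voice activity, and the baseline clip would be recorded from the user's own movement before they switch the autopilot on, so the faked behaviour stays recognisably theirs.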

This is Tom Gurion’s Advanced Project Placement for the Media and Arts Technology doctoral programme at Queen Mary University of London, supervised by Prof. Patrick Healey and hosted by Inition. There are some extra details on Tom Gurion’s site, and a video presenting the project here.