Why emotions should be integrated into conversational agents


Conversational Informatics. Edited by Toyoaki Nishida. © 2001 John Wiley & Sons, Ltd

When building conversational agents that are to take part in social interaction with humans, an important question is whether psychological concepts such as emotions or personality need to be incorporated into the agents. In this chapter we argue for integrating an emotion system into a conversational agent so that it can simulate having emotions of its own. We first clarify the concept of emotion and discuss different approaches to modeling emotions and personality in artificial systems. Drawing on our work on the multimodal conversational agent Max, we present motives for integrating emotions as integral parts of an agent's cognitive architecture. Our approach combines different psychological emotion theories and distinguishes between primary and secondary emotions, which originate from different levels of this architecture. Exemplary application scenarios show how the integration of emotions can increase the agent's believability. In a cooperative setting, Max is employed as an interactive virtual guide in a public computer museum, where his emotion module enhances his acceptance as a coequal conversational partner. We further cite an empirical study that yields evidence that the same emotion module supports the believability and lifelikeness of the agent in a competitive gaming scenario.

