Emotions are generated and modulated by many factors in an ever-changing environment. A new and challenging task is to emulate, on a robot, emotional responses caused by visual stimuli, such that the robot's responses mirror those of a human user. This paper presents the initial stage of an affective system, trained online using reinforcement learning, that generates and modulates emotions. The system's inputs comprise a subset of emotionally relevant visual features extracted from the environment: colours, fractal dimension, and facial pareidolia. These inputs are mapped onto an output that expresses the associated emotion in language. Pilot experiments demonstrate how a humanoid robot learns, through interaction with a human companion, to express emotions associated with different environmental scenes in a (near) human-like manner.
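The learning loop described above — visual features in, an emotion label out, with a human companion providing feedback — can be sketched as a minimal tabular reinforcement learner. The feature encoding, emotion set, and update rule below are illustrative assumptions, not the paper's actual design:

```python
import random
from collections import defaultdict

# Hypothetical emotion set and learning rate (assumptions, not from the paper)
EMOTIONS = ["joy", "fear", "calm", "surprise"]
ALPHA = 0.2

class EmotionLearner:
    """Maps a discretized visual-feature state (e.g. dominant colour,
    fractal-dimension bin, pareidolia flag) to an emotion label,
    updated online from human approval/correction signals."""

    def __init__(self):
        # value[state][emotion] estimates how appropriate each emotion
        # label is for a given visual-feature state
        self.value = defaultdict(lambda: {e: 0.0 for e in EMOTIONS})

    def express(self, state, epsilon=0.1):
        """Pick an emotion label: mostly greedy, occasionally exploratory."""
        if random.random() < epsilon:
            return random.choice(EMOTIONS)
        vals = self.value[state]
        return max(vals, key=vals.get)

    def feedback(self, state, emotion, reward):
        """Incorporate the human companion's approval (+1) or correction (-1)."""
        v = self.value[state][emotion]
        self.value[state][emotion] = v + ALPHA * (reward - v)

# Example interaction: a scene with warm colours, low fractal dimension,
# and no face-like pattern; the human rewards "joy" for this scene.
learner = EmotionLearner()
scene = ("warm_colours", "low_fractal_dim", "no_face")
for _ in range(50):
    label = learner.express(scene)
    learner.feedback(scene, label, 1.0 if label == "joy" else -1.0)

print(learner.express(scene, epsilon=0.0))
```

Here the human's feedback acts as the reward signal, so the robot's emotional responses gradually align with those of its companion for each class of scene; a real system would replace the hand-discretized state with features extracted from camera images.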
Proceedings of the 2013 IEEE Symposium on Computational Intelligence for Creativity and Affective Computing (CICAC), Singapore, 16-19 April 2013, pp. 9-16.