Proceedings of International Conference
Hiroki Tanioka, Tetsushi Ueta, Masahiko Sano, Toward a Dialogue System Using a Large Language Model to Recognize User Emotions with a Camera, The 1st InterAI: Interactive AI for Human-Centered Robotics workshop in conjunction with IEEE RO-MAN 2024, 6 pages, Pasadena, CA, USA, Aug. 2024.
Abstract: The performance of ChatGPT and other LLMs has improved tremendously, and in online environments they are increasingly used in a wide variety of situations, such as chatbots on web pages, call center operations using voice interaction, and dialogue functions using agents. In offline environments, multimodal dialogue functions are also being realized, such as guidance by Artificial Intelligence agents (AI agents) using tablet terminals and dialogue systems in the form of LLMs mounted on robots. In such multimodal dialogue, mutual emotion recognition between the AI and the user becomes important. So far, there have been methods for expressing emotions on the part of the AI agent and for recognizing user emotions from the textual or voice information of the user's utterances, but methods for AI agents to recognize emotions from the user's facial expressions have not been studied. In this study, we examined whether LLM-based AI agents can interact with users according to their emotional states by capturing the user in dialogue with a camera, recognizing emotions from facial expressions, and adding this emotion information to prompts. The results confirmed that AI agents can hold conversations matched to the user's emotional state for emotions with relatively high recognition scores, such as Happy and Angry.
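The abstract describes a pipeline in which facial-emotion scores recognized from a camera image are added to the LLM prompt. The following is a minimal, hypothetical sketch of that prompt-augmentation step only; the function and variable names, the message format, and the confidence threshold are illustrative assumptions, not the paper's actual implementation, and the upstream facial-emotion recognizer is represented simply by a dictionary of scores.

```python
# Hypothetical sketch: fold camera-derived facial-emotion scores into an
# LLM chat prompt so the agent can respond to the user's emotional state.
# All names and the 0.5 threshold are illustrative assumptions.

SYSTEM_PROMPT = "You are a helpful dialogue agent."

def build_prompt(user_text, emotion_scores, threshold=0.5):
    """Build chat messages, appending the dominant recognized emotion to
    the system prompt when its score is high enough (the paper reports
    that high-scoring emotions such as Happy and Angry worked well)."""
    system = SYSTEM_PROMPT
    if emotion_scores:
        # Pick the emotion with the highest recognition score.
        emotion, score = max(emotion_scores.items(), key=lambda kv: kv[1])
        if score >= threshold:
            system += (f" The user's facial expression suggests they are "
                       f"{emotion} (confidence {score:.2f}); "
                       "respond with appropriate empathy.")
    return [{"role": "system", "content": system},
            {"role": "user", "content": user_text}]

# Example: a confident "Happy" reading is injected into the system prompt,
# while low-confidence readings leave the prompt unchanged.
messages = build_prompt("I passed my exam!", {"Happy": 0.92, "Neutral": 0.05})
```

In a full system, `emotion_scores` would come from a facial-expression recognizer running on camera frames, and `messages` would be sent to the LLM's chat API.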