Abstract
Many computer games feature non-player character (NPC) teammates and companions; however, playing with or against NPCs can be frustrating when they behave unexpectedly. These frustrations can be avoided if the NPC can explain its actions and motivations. When a black-box AI system controls NPC behavior, it can be hard to generate the necessary explanations. In this paper, we present a system that generates human-like, natural language explanations, called rationales, of an agent's actions in a game environment, regardless of how a black-box AI makes those decisions. We outline a robust data collection and neural network training pipeline that can be used to gather think-aloud data and train a rationale generation model for any similar sequential, turn-based decision-making task. A human-subject study shows that our technique produces believable rationales for an agent playing the game Frogger. We conclude with insights about how people perceive automatically generated rationales.
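The pipeline the abstract describes, pairing recorded game states and actions with players' think-aloud utterances and training a network to map the former to the latter, suggests a sequence-to-sequence formulation. Below is a minimal sketch of that idea, assuming PyTorch and a GRU encoder-decoder; the class name `RationaleGenerator`, the vocabulary sizes, and the special token ids are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch only: a GRU encoder-decoder that maps a tokenized game
# state + action sequence to a natural language rationale.
# Architecture details are assumptions, not the authors' code.
import torch
import torch.nn as nn

PAD, BOS, EOS = 0, 1, 2  # illustrative special token ids

class RationaleGenerator(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=128, hidden=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb, padding_idx=PAD)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb, padding_idx=PAD)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt_in):
        # Encode the state-action sequence; keep the final hidden state.
        _, h = self.encoder(self.src_emb(src))
        # Teacher-forced decoding of the rationale tokens.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), h)
        return self.out(dec_out)  # (batch, tgt_len, tgt_vocab) logits

# One training step on a batch of (state-action, rationale) pairs.
model = RationaleGenerator(src_vocab=500, tgt_vocab=2000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

src = torch.randint(3, 500, (8, 20))    # encoded game states + actions
tgt = torch.randint(3, 2000, (8, 15))   # encoded think-aloud rationales
tgt_in = torch.cat([torch.full((8, 1), BOS), tgt[:, :-1]], dim=1)

opt.zero_grad()
logits = model(src, tgt_in)
loss = loss_fn(logits.reshape(-1, 2000), tgt.reshape(-1))
loss.backward()
opt.step()
```

At inference time one would decode from the `BOS` token (greedily or with beam search), conditioned on the encoded state-action sequence, until `EOS` is produced.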
Original language | English |
---|---|
Journal | CEUR Workshop Proceedings |
Volume | 2282 |
State | Published - 2018 |
Event | 2018 Joint Artificial Intelligence and Interactive Digital Entertainment Workshops, AIIDE-WS 2018 - Edmonton, Canada. Duration: Nov 13 2018 → Nov 14 2018 |
Bibliographical note
Publisher Copyright: © 2018 CEUR-WS. All Rights Reserved.
ASJC Scopus subject areas
- General Computer Science