Learning to generate natural language rationales for game playing agents

Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, Mark O. Riedl

Research output: Contribution to journal › Conference article › peer-review

3 Scopus citations


Many computer games feature non-player character (NPC) teammates and companions; however, playing with or against NPCs can be frustrating when they perform unexpectedly. These frustrations can be avoided if the NPC has the ability to explain its actions and motivations. When NPC behavior is controlled by a black box AI system, it can be hard to generate the necessary explanations. In this paper, we present a system that generates human-like, natural language explanations—called rationales—of an agent’s actions in a game environment, regardless of how the decisions are made by a black box AI. We outline a robust data collection and neural network training pipeline that can be used to gather think-aloud data and train a rationale generation model for any similar sequential, turn-based decision-making task. A human-subject study shows that our technique produces believable rationales for an agent playing the game Frogger. We conclude with insights about how people perceive automatically generated rationales.
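The pipeline the abstract describes can be sketched in miniature. The sketch below is an illustrative assumption, not the paper's method: it uses a trivial nearest-neighbour lookup over collected (state, action, rationale) triples as a stand-in for the trained neural rationale generator, and all feature names and example rationales are hypothetical.

```python
# Hypothetical sketch of the think-aloud pipeline from the abstract.
# Step 1: data collection yields (game state, action, rationale) triples;
# here a Frogger-like state is abbreviated as a tuple of feature tags.
# Step 2: the paper trains a neural encoder-decoder on such triples; as a
# toy stand-in we retrieve the rationale of the closest matching context.

THINK_ALOUD_CORPUS = [
    (("car_left", "log_ahead"), "up",
     "I jump onto the log before the car reaches me."),
    (("car_left", "open_ahead"), "up",
     "The lane ahead is clear, so I move forward."),
    (("car_right", "open_ahead"), "left",
     "I dodge left to avoid the oncoming car."),
]

def state_overlap(s1, s2):
    """Count shared state features (a toy similarity measure)."""
    return len(set(s1) & set(s2))

def generate_rationale(state, action, corpus=THINK_ALOUD_CORPUS):
    """Return the rationale whose recorded (state, action) context best
    matches the query -- a stand-in for the trained neural generator."""
    best = max(corpus,
               key=lambda ex: state_overlap(ex[0], state) + (ex[1] == action))
    return best[2]
```

For example, querying `generate_rationale(("car_left", "log_ahead"), "up")` retrieves the first recorded rationale; a real instantiation would replace the retrieval step with a sequence model trained on the collected corpus.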

Original language: English
Journal: CEUR Workshop Proceedings
State: Published - 2018
Event: 2018 Joint Artificial Intelligence and Interactive Digital Entertainment Workshops (AIIDE-WS 2018) - Edmonton, Canada
Duration: Nov 13 2018 – Nov 14 2018

Bibliographical note

Publisher Copyright:
© 2018 CEUR-WS. All Rights Reserved.

ASJC Scopus subject areas

  • General Computer Science


