Automated rationale generation: A technique for explainable AI and its effects on human perceptions

Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, Mark O. Riedl

Research output: Contribution to conference › Paper › peer-review

119 Scopus citations

Abstract

Automated rationale generation is an approach for real-time explanation generation whereby a computational model learns to translate an autonomous agent's internal state and action data representations into natural language. Training on human explanation data can enable agents to learn to generate human-like explanations for their behavior. In this paper, using the context of an agent that plays Frogger, we describe (a) how to collect a corpus of explanations, (b) how to train a neural rationale generator to produce different styles of rationales, and (c) how people perceive these rationales. We conducted two user studies. The first study establishes the plausibility of each type of generated rationale and situates their user perceptions along the dimensions of confidence, human-likeness, adequate justification, and understandability. The second study further explores user preferences between the generated rationales with regard to confidence in the autonomous agent, communicating failure, and unexpected behavior. Overall, we find alignment between the intended differences in features of the generated rationales and the perceived differences by users. Moreover, context permitting, participants preferred detailed rationales to form a stable mental model of the agent's behavior.
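For readers skimming this record, the pipeline the abstract describes can be sketched roughly as follows. This is only an illustrative stand-in, not the authors' implementation: the paper trains a neural encoder-decoder on a corpus of crowd-sourced human explanations, whereas this sketch substitutes a trivial rule-based generator; all names here (`AgentStep`, `RationaleGenerator`, the `hazards` state field) are hypothetical and exist only to show the state-and-action-to-rationale data flow and the "style" knob the studies compare.

```python
from dataclasses import dataclass

# Hypothetical record of one step of the agent's behavior in Frogger.
@dataclass
class AgentStep:
    state: dict   # features of the game state visible to the agent
    action: str   # the action the agent took at this step

# Stand-in for the trained neural rationale generator: the real system
# is an encoder-decoder network trained on human explanation data; this
# rule-based stub only illustrates the interface and the style parameter.
class RationaleGenerator:
    def __init__(self, style: str = "detailed"):
        # The paper compares rationale styles (e.g. concise vs. detailed).
        self.style = style

    def generate(self, step: AgentStep) -> str:
        if self.style == "detailed":
            hazards = step.state.get("hazards", [])
            return (f"I chose to {step.action} because "
                    f"{', '.join(hazards) or 'the path'} was ahead.")
        return f"I {step.action}."

gen = RationaleGenerator(style="detailed")
step = AgentStep(state={"hazards": ["a car"]}, action="move left")
print(gen.generate(step))  # -> I chose to move left because a car was ahead.
```

In the actual system, `generate` would run a learned sequence model over an encoding of the state-action pair; the point of the sketch is only that rationales are produced per step, in real time, in a chosen style.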

Original language: English
Pages: 263-274
Number of pages: 12
DOIs
State: Published - 2019
Event: 24th ACM International Conference on Intelligent User Interfaces, IUI 2019 - Marina del Rey, United States
Duration: Mar 17 2019 - Mar 20 2019

Conference

Conference: 24th ACM International Conference on Intelligent User Interfaces, IUI 2019
Country/Territory: United States
City: Marina del Rey
Period: 3/17/19 - 3/20/19

Bibliographical note

Publisher Copyright:
© 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.

Keywords

  • Algorithmic decision-making
  • Algorithmic explanation
  • Artificial Intelligence
  • Explainable AI
  • Interpretability
  • Machine Learning
  • Rationale generation
  • Transparency
  • User perception

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction
