Planning algorithms generate sequences of actions that achieve a goal, but they can also be used in reverse: to infer the goals that led to a sequence of actions. Traditional plan-based goal recognition assumes agents are rational and the environment is fully observable. Recent narrative planning models instead represent agents as believable rather than perfectly rational: their actions must be justified by their goals, but they need not act optimally, and they may hold incorrect beliefs about the environment. In this work we propose a technique for inferring the goals and beliefs of agents in this context, where rationality and omniscience are not assumed. We present two evaluations that investigate the effectiveness of this approach. The first uses partial observation sequences and shows how partial observability impacts the algorithm's accuracy. The second uses human data and compares the algorithm's inferences to those made by humans.
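As background for the abstract's framing, classic cost-based goal recognition (in the style of Ramírez and Geffner) can be sketched as follows; this is an illustrative baseline under the rationality assumption the paper relaxes, not the paper's own algorithm, and all names and numbers below are hypothetical.

```python
import math

def goal_posterior(candidates, prior, beta=1.0):
    """Bayesian goal recognition sketch.

    candidates: {goal: (cost_obs, cost_opt)} where cost_obs is the
    cheapest plan for the goal that complies with the observed actions,
    and cost_opt is the goal's unconstrained optimal plan cost.
    A goal whose observations require a large detour gets a low posterior:
        P(G | obs) ∝ prior(G) * exp(-beta * (cost_obs - cost_opt))
    """
    scores = {
        g: prior[g] * math.exp(-beta * (c_obs - c_opt))
        for g, (c_obs, c_opt) in candidates.items()
    }
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}

# Hypothetical example: the observed actions lie on an optimal plan for
# goal A (no detour), but explaining them under goal B costs 3 extra units.
post = goal_posterior(
    {"A": (5.0, 5.0), "B": (8.0, 5.0)},
    prior={"A": 0.5, "B": 0.5},
)
# Goal A receives most of the posterior mass.
```

The paper's contribution is precisely to handle agents for whom this cost-rationality assumption fails, i.e. believable agents who act suboptimally or under false beliefs.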
| Title of host publication | Proceedings of the 16th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2020 |
| Editors | Levi Lelis, David Thue |
| Number of pages | 7 |
| State | Published - 2020 |
| Event | 16th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2020 - Virtual, Online |
| Duration | Oct 19, 2020 → Oct 23, 2020 |
| Name | Proceedings of the 16th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2020 |
| Conference | 16th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2020 |
| Period | 10/19/20 → 10/23/20 |
Bibliographical note
Funding Information: This work was funded in part by the Department of Defense.
Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
ASJC Scopus subject areas
- Visual Arts and Performing Arts
- Artificial Intelligence