Grants and Contracts Details
Description
Supporting Players of Interactive Narrative Games
by Recognizing Beliefs, Intentions, Memory, and Expectations
Stephen G. Ware, Ph.D. Assistant Professor, Dept. of Computer Science, University of Kentucky
Interactive story games invite the player to be one character while the system controls non-player
characters and the environment. We propose interactive story games as a testbed for long-term
autonomous agents because they distill important challenges from relevant real-world problems:
• Mixed Multi-Agent Problems: Virtual worlds feature agents with different capabilities and
relationships to the player: friendly, hostile, and neutral. The system needs to model these
agents individually and as a group (see the sketch after this list), tracking their beliefs and
intentions and reasoning about their plans to consider millions of hypothetical futures. It must
find an ideal outcome for the story while ensuring agents act realistically; the antagonist
cannot simply start cooperating with the protagonist merely because doing so is an efficient
path to a happy ending.
• Modeling a Human Collaborator: The player is an agent too, whose beliefs and
intentions affect what is possible. The player cannot be expected to report these directly;
the system must infer them from the player's actions and shape the story to support the
player. The system needs to reason about how the player's memory and expectations affect
what plans they consider, and how its own actions will affect the player's mental model,
so that it can shift their goals, correct mistaken beliefs, and foreground important
information.
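To make the agent-modeling requirement concrete, here is a minimal sketch in Python of a world state in which every character, including the player, is tracked with its own beliefs and intentions. All names are hypothetical illustrations; this is not the project's actual knowledge representation:

```python
from dataclasses import dataclass, field

# Disposition of a non-player character toward the player.
FRIENDLY, NEUTRAL, HOSTILE = "friendly", "neutral", "hostile"

@dataclass
class Agent:
    """One character in the story world, player or non-player."""
    name: str
    disposition: str = NEUTRAL                       # relationship to the player
    beliefs: set = field(default_factory=set)        # propositions the agent believes
    intentions: list = field(default_factory=list)   # goals the agent is pursuing

@dataclass
class WorldState:
    """A snapshot of the story world: shared facts plus per-agent mental state."""
    facts: set = field(default_factory=set)
    agents: dict = field(default_factory=dict)       # name -> Agent

    def add(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

# The player is just another agent, except the system must *infer*
# (rather than be told) the player's beliefs and intentions.
world = WorldState(facts={("at", "merchant", "market")})
world.add(Agent("player"))
world.add(Agent("guard", disposition=HOSTILE,
                beliefs={("at", "player", "gate")},      # possibly a wrong belief
                intentions=[("arrest", "player")]))
```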
We propose a research plan divided into three tasks and a final evaluation:
1. Scale Multi-Agent Planning: We have developed a knowledge representation for individual
and group behaviors based on intentions and beliefs. We can leverage non-Markovian
heuristics, landmark-based search, and width-based pruning to generate millions of possible
futures in seconds.
2. Infer Player Belief and Intent: The same models of belief and intention that define what
an agent should do can be used to infer what beliefs and intentions best explain the observed
actions of the player, enabling the system to include the player as an agent when reasoning
about possible futures (see the plan-recognition sketch after this list).
3. Model Event Salience: Computational operationalizations of psychological research on
situation models allow us to model which people, places, time frames, causal relationships,
and goals players are reasoning about, as well as how each action by the system will affect
the player's mental model (see the salience sketch after this list). Modeling the player's
memory and expectations enables us to generate more natural stories faster.
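One standard way to realize Task 2 is cost-based goal recognition: a candidate goal is more probable the less the player's observed actions deviate from an optimal plan for it. Below is a minimal sketch in that style (after Ramírez and Geffner), assuming a caller-supplied `plan_cost(state, goal, prefix)` helper that returns the cheapest plan cost for `goal`, optionally constrained to begin with the observed action prefix. The helper and all names are hypothetical; this is not the project's planner:

```python
import math

def goal_posterior(state, goals, observed, plan_cost, beta=1.0, priors=None):
    """Rank candidate player goals by how well they explain observed actions.

    P(goal | obs) is proportional to prior * exp(-beta * (c_obs - c_free)),
    where c_obs is the cheapest plan cost that embeds the observations and
    c_free is the cheapest plan cost with no such constraint. A goal whose
    best plan already matches the observations loses nothing (c_obs == c_free)
    and so scores highest.
    """
    priors = priors or {g: 1.0 / len(goals) for g in goals}
    scores = {}
    for g in goals:
        c_free = plan_cost(state, g, prefix=())        # best plan, ignoring observations
        c_obs = plan_cost(state, g, prefix=observed)   # best plan consistent with them
        scores[g] = priors[g] * math.exp(-beta * (c_obs - c_free))
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}
```

The same posterior can be maintained over candidate belief states as well as goals, which is what lets the system treat the player as one more agent when projecting possible futures.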
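Task 3 can be illustrated with the five situation-model indices from the event-indexing literature (protagonist, time, space, causality, intentionality): a past event is predicted to be more accessible in the player's memory the more indices it shares with the event currently being perceived. A minimal sketch, with hypothetical event tags and uniform weights:

```python
from dataclasses import dataclass

# The five situation-model indices from the event-indexing literature.
INDICES = ("protagonist", "time", "space", "causality", "intentionality")

@dataclass(frozen=True)
class Event:
    protagonist: str     # which character the event is about
    time: str            # time frame in which it occurs
    space: str           # location where it occurs
    causality: str       # causal chain it belongs to
    intentionality: str  # goal it serves

def salience(past: Event, current: Event, weights=None) -> float:
    """Score how salient a past event should be, given the event the player
    is perceiving now: the more indices they share, the more accessible the
    past event is predicted to be in the player's memory."""
    weights = weights or {i: 1.0 for i in INDICES}
    shared = sum(weights[i] for i in INDICES
                 if getattr(past, i) == getattr(current, i))
    return shared / sum(weights.values())

# Example: an earlier theft is highly salient while the player confronts
# the same thief at the same market on a later day.
theft = Event("thief", "day1", "market", "stolen-gem", "get-rich")
confront = Event("thief", "day2", "market", "stolen-gem", "get-rich")
print(salience(theft, confront))  # 0.8 (shares 4 of 5 indices)
```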
We will employ a fast multi-agent planner to model realistic agent behavior and to infer the
beliefs and intentions of players. Combined with a mental model of which elements of the virtual
world are salient, we can design story systems that infer the player's mental state and plans from
their actions, adapting the story to support (or intentionally resist) them. We will evaluate this work
in two environments:
• Traffic Stop: A room-scale virtual reality serious game for police de-escalation training.
• Camelot: A medieval adventure game with many quests and ways to complete them.
We will evaluate our system with both simulated and real human players. To be successful, plan
recognition and player support must be fast, accurate, helpful, unobtrusive, robust, and scalable.
Status: Active
Effective start/end date: 7/22/24 → 7/21/27
Funding
- Army Research Office: $120,000.00