Modern modeling and simulation environments, such as commercial games and military training systems, frequently demand interactive agents that exhibit realistic, responsive behavior in accordance with a predetermined specification, such as a storyboard or a military tactics document. Traditional methods for creating agents, such as state machines or behavior trees, require significant manual knowledge-engineering effort to develop state representations and transition processes. Newer behavior-generation techniques, such as deep reinforcement learning, require vast amounts of training data (centuries of simulated experience in many cases), and there is no guarantee that the generated behavior will align with intended objectives and courses of action. This paper examines the application of behavior cloning to the design of interactive agents. In our approach, users first define desired behavior through straightforward means such as state-machine models or behavior trees. Behavior cloning is then used to transform ground-truth trajectory data sampled from these models into differentiable policies, which are further refined through interaction with game environments. This method yields improved training outcomes in terms of both task performance and training stability.
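The pipeline described in the abstract can be sketched in miniature: a hand-authored state-machine "expert" produces ground-truth state-action trajectories, and behavior cloning fits a differentiable policy to those pairs by supervised learning. Everything below (state and action counts, the one-hot features, the linear softmax policy) is an illustrative assumption, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 4, 3

def expert_action(state: int) -> int:
    """Hypothetical state-machine policy: a fixed action per state."""
    return state % N_ACTIONS

# Sample ground-truth trajectory data from the expert specification.
states = rng.integers(0, N_STATES, size=500)
actions = np.array([expert_action(s) for s in states])

# One-hot state features and a linear softmax policy (the differentiable
# policy that behavior cloning will fit to the expert data).
X = np.eye(N_STATES)[states]           # (500, N_STATES)
W = np.zeros((N_STATES, N_ACTIONS))    # policy parameters

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Behavior cloning = cross-entropy minimization on expert state-action pairs.
for _ in range(200):
    probs = softmax(X @ W)
    grad = X.T @ (probs - np.eye(N_ACTIONS)[actions]) / len(states)
    W -= 1.0 * grad

# The cloned policy should now reproduce the expert on every state;
# in the paper's setting this policy would next be refined by
# reinforcement learning in the interactive environment.
cloned = np.argmax(np.eye(N_STATES) @ W, axis=1)
```

The refinement stage (interaction with the game environment) is omitted here; the sketch only shows the cloning step that converts sampled trajectories into a differentiable policy.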
| Journal | Proceedings of the International Florida Artificial Intelligence Research Society Conference, FLAIRS |
| State | Published - 2023 |
| Event | 36th International Florida Artificial Intelligence Research Society Conference, FLAIRS-36 2023, Clearwater Beach, United States |
| Duration | May 14, 2023 → May 17, 2023 |
Bibliographical note: Publisher Copyright © 2023 by the authors. All rights reserved.
ASJC Scopus subject areas
- Artificial Intelligence