Abstract
Decision processes with incomplete state feedback have traditionally been modelled as partially observable Markov decision processes (POMDPs). In this article, we present an alternative formulation based on probabilistic regular languages. The proposed approach generalises recently reported work on language-measure-theoretic optimal control for perfectly observable situations and shows that such a framework is far more computationally tractable than the classical alternative. In particular, we show that the infinite-horizon decision problem under partial observation, modelled in the proposed framework, is λ-approximable and, in general, no harder to solve than the fully observable case. The approach is illustrated via two simple examples.
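The abstract does not spell out the underlying language-measure computation; as a rough, hedged illustration only, the sketch below evaluates the renormalised language measure ν(θ) = θ[I − (1−θ)Π]⁻¹χ for a small probabilistic finite-state automaton, in the form used in earlier language-measure-theoretic control work. The transition matrix Π, characteristic vector χ, and values of θ are purely illustrative assumptions, not taken from this article.

```python
import numpy as np

# Hypothetical 3-state probabilistic finite-state automaton (illustrative only):
# Pi is the row-stochastic state-transition probability matrix and chi assigns a
# signed characteristic weight to each state (positive = desirable, negative = not).
Pi = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])
chi = np.array([1.0, 0.0, -1.0])

def language_measure(Pi, chi, theta):
    """Renormalised language measure nu(theta) = theta * (I - (1-theta)*Pi)^{-1} chi.

    For 0 < theta <= 1 the matrix I - (1-theta)*Pi is invertible because Pi is
    row-stochastic, so the measure is well defined for every state.
    """
    n = Pi.shape[0]
    return theta * np.linalg.solve(np.eye(n) - (1.0 - theta) * Pi, chi)

if __name__ == "__main__":
    # Smaller theta weights longer strings more heavily, approaching the
    # infinite-horizon behaviour discussed in this line of work.
    for theta in (0.5, 0.1, 0.01):
        print(theta, language_measure(Pi, chi, theta))
```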
Original language | English |
---|---|
Pages (from-to) | 457-483 |
Number of pages | 27 |
Journal | International Journal of Control |
Volume | 83 |
Issue number | 3 |
DOIs | |
State | Published - Mar 2010 |
Keywords
- Discrete event systems
- Formal language theory
- Language measure
- Partial observation
- POMDP
ASJC Scopus subject areas
- Control and Systems Engineering
- Computer Science Applications