Enabling Markovian Representations under Imperfect Information

Abstract: Markovian systems are widely used in reinforcement learning (RL) when the successful completion of a task depends exclusively on the last interaction between an autonomous agent and its environment. Unfortunately, real-world instructions are typically complex and often better described as non-Markovian. In this paper, we present an extension method that allows solving partially-observable non-Markovian reward decision processes (PONMRDPs) by solving equivalent Markovian models. This potentially enables state-of-the-art Markovian techniques, including RL, to find optimal behaviours for problems best described as PONMRDPs. We provide formal optimality guarantees for our extension method, together with a counterexample illustrating that naive extensions of existing techniques for fully-observable environments cannot provide such guarantees.
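To illustrate the underlying idea (not the paper's actual construction, whose details and optimality guarantees are in the full text), the following hypothetical sketch shows how a history-dependent reward becomes Markovian once the observation is extended with a finite memory. The environment, observation names, and the single memory bit are illustrative assumptions.

```python
# Hypothetical toy example: the reward "reach the goal, but only after
# collecting the key" depends on the agent's history, so it is non-Markovian
# over raw observations. Extending each observation with a memory flag
# (has the key been seen yet?) yields an equivalent Markovian reward.

def non_markovian_reward(history):
    """Reward 1 only if the last observation is 'goal' and 'key' occurred before it."""
    *past, last = history
    return 1 if last == "goal" and "key" in past else 0

def extend(obs, key_seen):
    """Extend a raw observation with a finite-memory flag."""
    return (obs, key_seen or obs == "key")

def markovian_reward(extended_obs):
    """Reward now depends only on the current extended observation."""
    obs, key_seen = extended_obs
    return 1 if obs == "goal" and key_seen else 0

# Walk a trajectory and check the two formulations agree at every step.
trajectory = ["start", "key", "corridor", "goal"]
key_seen = False
for t in range(1, len(trajectory) + 1):
    obs = trajectory[t - 1]
    extended = extend(obs, key_seen)
    key_seen = extended[1]
    assert markovian_reward(extended) == non_markovian_reward(trajectory[:t])
```

Under partial observability this naive extension is exactly the kind of approach the paper's counterexample shows to be insufficient in general; the sketch only conveys why an equivalent Markovian model can exist at all.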
Contributor: Vadim Malvone
Submitted on : Friday, September 16, 2022 - 1:42:15 PM
Last modification on : Friday, November 11, 2022 - 3:41:16 PM

Francesco Belardinelli, Borja G. León, Vadim Malvone. Enabling Markovian Representations under Imperfect Information. 14th International Conference on Agents and Artificial Intelligence (ICAART 2022), Feb 2022, Online Streaming, France. pp.450-457, ⟨10.5220/0010882200003116⟩. ⟨hal-03779034⟩