A Tensor Network Approach to Finite Markov Decision Processes

Figure: part of the tensor network representation of an MDP.

Abstract

Tensor network (TN) techniques - often used in the context of quantum many-body physics - have shown promise as a tool for tackling machine learning (ML) problems. The application of TNs to ML, however, has mostly focused on supervised and unsupervised learning. Yet, with their direct connection to hidden Markov chains, TNs are also naturally suited to Markov decision processes (MDPs), which provide the foundation for reinforcement learning (RL). Here we introduce a general TN formulation of finite, episodic and discrete MDPs. We show how this formulation allows us to exploit algorithms developed for TNs for policy optimisation, the key aim of RL. As an application we consider the problem - formulated as an RL problem - of finding a stochastic evolution that satisfies specific dynamical conditions, using the simple example of random walk excursions as an illustration.
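
For readers less familiar with the setup, below is a minimal sketch, in NumPy, of the kind of object the abstract refers to: a tiny finite, episodic MDP whose expected return is evaluated by contracting a chain of policy, transition and reward tensors. All names here (`P`, `policy`, `r`, `rho0`, the horizon `T`) are placeholders chosen for this illustration and are not the paper's notation or method.

```python
import numpy as np

# Illustrative sketch only: a small finite, episodic MDP whose expected return
# is computed as a chain of tensor contractions.
rng = np.random.default_rng(0)
S, A, T = 3, 2, 4                              # states, actions, episode length

P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)              # P[s, a, s'] : transition probabilities
policy = rng.random((S, A))
policy /= policy.sum(axis=1, keepdims=True)    # policy[s, a] : pi(a | s)
r = rng.random((S, A))                         # r[s, a] : reward for action a in state s
rho0 = np.full(S, 1.0 / S)                     # uniform initial state distribution

# Marginalise the action with the policy to get a state-to-state "transfer"
# kernel and the expected one-step reward per state.
K = np.einsum('sa,sap->sp', policy, P)         # K[s, s'] = sum_a pi(a|s) P(s'|s, a)
r_s = np.einsum('sa,sa->s', policy, r)         # expected reward when leaving state s

# Contract the chain from left to right: expected return = sum_t <rho_t, r_s>.
ret, rho = 0.0, rho0
for _ in range(T):
    ret += rho @ r_s
    rho = rho @ K                              # propagate the state distribution

print(f"Expected return over {T} steps: {ret:.4f}")
```

In the paper, policy optimisation then amounts to optimising a contraction of this kind over the policy tensors using TN algorithms; the sketch above only evaluates a fixed policy.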

Publication
arXiv:2002.05185
Dominic C. Rose
Research Fellow in Theoretical Physics

My research interests include nonequilibrium physics, large deviations and reinforcement learning.