*arXiv preprint arXiv:1906.10228*.

Sequential decision making in the presence of uncertainty and stochastic dynamics gives rise to distributions over state/action trajectories in reinforcement learning (RL) and optimal control problems. This observation has led to a variety of connections between RL and inference in probabilistic graphical models (PGMs). Here we explore a different dimension to this relationship, examining reinforcement learning using the tools and abstractions of statistical physics. The central object in the statistical physics abstraction is the idea of a partition function *Z*, and here we construct a partition function from the ensemble of possible trajectories that an agent might take in a Markov decision process. Although value functions and *Q*-functions can be derived from this partition function and interpreted via average energies, the *Z*-function provides an object with its own Bellman equation that can form the basis of alternative dynamic programming approaches. Moreover, when the MDP dynamics are deterministic, the Bellman equation for *Z* is linear, allowing direct solutions that are unavailable for the nonlinear equations associated with traditional value functions. The policies learned via these *Z*-based Bellman updates are tightly linked to Boltzmann-like policy parameterizations. In addition to sampling actions proportionally to the exponential of the expected cumulative reward as Boltzmann policies would, these policies take entropy into account, favoring states from which many outcomes are possible.
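To make the abstract's central claim concrete, here is a minimal sketch in Python of a linear *Z*-style Bellman backup on a toy deterministic chain MDP. The chain, the reward function, the inverse temperature `beta`, and the power-iteration treatment of the stationary solution are all illustrative assumptions for this example, not the paper's exact setup or notation.

```python
import numpy as np

# A minimal sketch of the Z-function idea on a toy deterministic chain MDP.
# The MDP, reward, and beta below are illustrative assumptions.

n_states, n_actions = 5, 2
beta = 1.0  # inverse temperature (assumed)

def step(s, a):
    # Deterministic dynamics: action 0 moves left, action 1 moves right.
    return max(0, min(n_states - 1, s - 1 if a == 0 else s + 1))

def reward(s, a):
    # Illustrative reward: +1 for stepping onto the rightmost state.
    return 1.0 if step(s, a) == n_states - 1 else 0.0

# With deterministic dynamics the backup for Z is linear,
#   Z(s) = sum_a exp(beta * r(s, a)) * Z(f(s, a)),
# so one sweep of updates is a matrix-vector product Z <- M Z.
M = np.zeros((n_states, n_states))
for s in range(n_states):
    for a in range(n_actions):
        M[s, step(s, a)] += np.exp(beta * reward(s, a))

# Power iteration: the normalized fixed point is the dominant
# (Perron-Frobenius) eigenvector of M, a stationary Z up to scale.
Z = np.ones(n_states)
for _ in range(1000):
    Z = M @ Z
    Z /= Z.sum()

def policy(s):
    # Boltzmann-like policy: each action is weighted by exp(beta * r)
    # times the successor's partition function, so the policy also
    # favors states with many rewarding continuations.
    w = np.array([np.exp(beta * reward(s, a)) * Z[step(s, a)]
                  for a in range(n_actions)])
    return w / w.sum()

print(policy(2))  # action probabilities at the middle state
```

Because the backup is linear here, the stationary *Z* could equally be read off a direct eigen-decomposition of `M` (e.g. `np.linalg.eig`) rather than iterated to convergence; this is the kind of direct solution the abstract notes is unavailable for the nonlinear Bellman equations of traditional value functions.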

@article{rahme2019theoretical,
  title   = {A theoretical connection between statistical physics and reinforcement learning},
  author  = {Rahme, Jad and Adams, Ryan P.},
  journal = {arXiv preprint arXiv:1906.10228},
  year    = {2019}
}