A Continuous Approach to Discrete MCMC
Continuous problems are often simpler to solve than discrete problems. This is true in many optimization problems (for instance, linear programming versus integer linear programming). In the case of Markov chain Monte Carlo (MCMC), sampling continuous distributions has some advantages over sampling discrete distributions due to the availability of gradient information in the continuous case. The paper “Continuous Relaxations for Discrete Hamiltonian Monte Carlo” by Yichuan Zhang, Charles Sutton, Amos Storkey, and Zoubin Ghahramani explores the idea of performing inference in the discrete setting by deriving and sampling a related continuous distribution. Here I describe the approach taken in this paper.
We consider binary Markov random fields, i.e. discrete distributions over binary vectors $s \in \{0, 1\}^d$ of the form

$$ p(s) = \frac{1}{Z} \exp\!\left( \tfrac{1}{2}\, s^\top W s + a^\top s \right), \qquad (1) $$

where the vector $a$ and the symmetric matrix $W$ parametrize the distribution. The authors actually begin with discrete distributions described by more general undirected graphical models, but they ultimately coerce such distributions into the form of equation (1). Then, we apply the “Gaussian Integral Trick,” in which we define the auxiliary variable $x$ to be Gaussian conditioned on $s$. This produces a very manageable joint distribution $p(s, x)$. There are multiple ways to do this, but one way is to define something like

$$ p(x \mid s) = \mathcal{N}(x;\, W s,\, W). \qquad (2) $$
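To make the setup concrete, here is a minimal NumPy sketch of equations (1) and (2). The variable names mirror the notation above (which I have reconstructed), and the dimension and random parameters are illustrative choices of mine, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4                                   # number of binary variables (illustrative choice)
a = rng.normal(size=d)                  # linear parameters in equation (1)
W = rng.normal(size=(d, d))
W = 0.5 * (W + W.T)                     # symmetrize
W += (1.0 - np.linalg.eigvalsh(W).min()) * np.eye(d)   # make W positive definite (see the remark below)

def log_p_s_unnorm(s):
    """Unnormalized log of the binary MRF in equation (1)."""
    return 0.5 * s @ W @ s + a @ s

def sample_x_given_s(s):
    """Draw x | s ~ N(W s, W), one version of the trick in equation (2)."""
    return rng.multivariate_normal(mean=W @ s, cov=W)

s = rng.integers(0, 2, size=d).astype(float)   # an arbitrary binary configuration
x = sample_x_given_s(s)
```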
In practice, $W$ may not be positive definite, so you have to add a diagonal matrix to it, but equation (2) contains the essence of the trick. Expanding and simplifying the joint distribution, we get

$$ p(s, x) \propto \exp\!\left( -\tfrac{1}{2}\, x^\top W^{-1} x + (a + x)^\top s \right). \qquad (3) $$

Things simplify so that the cross terms from $s^\top W s$, which were present in equation (1), disappear in the joint distribution. Each component $s_i$ is a binary variable, so summing out each $s_i$, we are left with the continuous marginal

$$ p(x) \propto \exp\!\left( -\tfrac{1}{2}\, x^\top W^{-1} x \right) \prod_i \left( 1 + e^{a_i + x_i} \right). $$
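Since Hamiltonian Monte Carlo needs the gradient of the log density, here is a short continuation of the sketch above computing the unnormalized $\log p(x)$ and its gradient under the reconstructed marginal; $\sigma$ denotes the logistic sigmoid.

```python
def log_p_x_unnorm(x):
    """Unnormalized log of the continuous marginal p(x)."""
    quad = -0.5 * x @ np.linalg.solve(W, x)
    soft = np.sum(np.logaddexp(0.0, a + x))     # sum_i log(1 + exp(a_i + x_i)), computed stably
    return quad + soft

def grad_log_p_x(x):
    """Gradient of log p(x): -W^{-1} x + sigma(a + x)."""
    return -np.linalg.solve(W, x) + 1.0 / (1.0 + np.exp(-(a + x)))
```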
Thus, we have produced a continuous distribution, $p(x)$, from the discrete distribution $p(s)$. These distributions are intimately linked. For instance, the distribution over $x$ can be thought of as a mixture of $2^d$ Gaussians, one for each state $s$. Now, we can use continuous MCMC methods to sample $x$. The authors use Hamiltonian Monte Carlo because of the ready availability of gradient information, but it is likely that other powerful methods for sampling continuous distributions can be exploited for this task.
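For illustration, here is a bare-bones HMC sampler targeting $p(x)$ through the two functions above. The step size and number of leapfrog steps are arbitrary values I picked for the sketch, not tuned settings from the paper.

```python
def hmc_sample(x0, n_samples=5000, step_size=0.1, n_leapfrog=20):
    """Bare-bones Hamiltonian Monte Carlo targeting p(x) via log_p_x_unnorm / grad_log_p_x."""
    x = x0.copy()
    samples = []
    for _ in range(n_samples):
        p = rng.normal(size=x.shape)                     # resample auxiliary momentum
        x_new, p_new = x.copy(), p.copy()
        # leapfrog integration of the Hamiltonian dynamics
        p_new = p_new + 0.5 * step_size * grad_log_p_x(x_new)
        for _ in range(n_leapfrog - 1):
            x_new = x_new + step_size * p_new
            p_new = p_new + step_size * grad_log_p_x(x_new)
        x_new = x_new + step_size * p_new
        p_new = p_new + 0.5 * step_size * grad_log_p_x(x_new)
        # Metropolis correction for the discretization error
        log_accept = (log_p_x_unnorm(x_new) - 0.5 * p_new @ p_new) - (
            log_p_x_unnorm(x) - 0.5 * p @ p
        )
        if np.log(rng.uniform()) < log_accept:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

xs = hmc_sample(np.zeros(d))                             # samples of x, used below
```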
Once we have generated samples from $p(x)$, they can be used to perform standard inference tasks like computing the marginals of $p(s)$ or evaluating the normalizing constant $Z$ from equation (1). To illustrate the first example, observe that we can write the marginal distribution of the component $s_i$ as

$$ p(s_i) = \int p(s_i \mid x)\, p(x)\, dx \approx \frac{1}{N} \sum_{n=1}^{N} p\!\left(s_i \mid x^{(n)}\right). $$

The conditional $p(s_i \mid x)$ can be computed easily from equation (3). This approach bypasses the need to ever produce samples from $p(s)$, so that we may focus our attention on methods in the continuous domain.
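In the reconstructed notation, equation (3) factorizes over the components of $s$ given $x$, so $p(s_i = 1 \mid x) = \sigma(a_i + x_i)$, and the marginal estimate is just a Monte Carlo average over the samples of $x$. Continuing the sketch:

```python
def estimate_marginals(xs):
    """Estimate p(s_i = 1) by averaging p(s_i = 1 | x) = sigmoid(a_i + x_i) over samples of x."""
    return np.mean(1.0 / (1.0 + np.exp(-(a + xs))), axis=0)

print(estimate_marginals(xs))   # one estimated marginal per component of s
```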