Stochastic memoization in Haskell

Eyal Dechter · March 14, 2013

I thought it would be fun to quickly demonstrate how to write a stateless stochastic memoizer in Haskell. The idea behind stochastic memoization is to take some base generative process and transform it into another process which, on each sample, either draws from the original process or draws from the distribution of previously drawn samples. This allows you, for example, to introduce memory or smoothness into your sampling distribution [1, 2].

The code:

import System.Random

-- Memoize a base sampler f via the Chinese Restaurant Process:
-- with probability alpha / (n + alpha) draw a fresh sample from f,
-- otherwise pick uniformly among the n previous samples.
crpMem :: RandomGen g => g -> Double -> (g -> a) -> [a]
crpMem g0 alpha f = go g0 []
  where
    go g customers = if newTable then makeNewTableAndGo else chooseOldTableAndGo
      where
        (r, g')  = randomR (0, 1) g          -- get r in (0,1) and the next random gen
        n        = fromIntegral (length customers)
        newTable = r < alpha / (n + alpha)
        makeNewTableAndGo =
          let b        = f g'
              (_, g'') = next g'             -- advance the generator past f's draw
          in b : go g'' (b : customers)
        chooseOldTableAndGo =
          let (i, g'') = randomR (0, length customers - 1) g'
              c        = customers !! i
          in c : go g'' (c : customers)

The function crpMem takes a random generator, a value alpha greater than 0, and a function of a single argument (a random generator); this function is the sampler of the base probability distribution. crpMem generates an infinite stream of samples from the associated Chinese Restaurant Process [3], whose table values are sampled from the base distribution. The function is relatively straightforward: the loop go takes a fresh generator and the history of samples (customers) and flips a coin to decide whether to draw uniformly from the past samples or draw a new sample from the base distribution. The chosen sample is consed onto a recursive call to go with a new random generator and the updated history of samples.
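To see the reuse behavior concretely, here is a minimal, self-contained sketch of running crpMem; the seed, alpha = 1, and the uniform base distribution on 1..1000 are arbitrary choices for illustration:

```haskell
import Data.List (nub)
import System.Random

crpMem :: RandomGen g => g -> Double -> (g -> a) -> [a]
crpMem g0 alpha f = go g0 []
  where
    go g customers = if newTable then makeNew else chooseOld
      where
        (r, g')  = randomR (0, 1) g
        n        = fromIntegral (length customers)
        newTable = r < alpha / (n + alpha)
        makeNew   = let b        = f g'
                        (_, g'') = next g'
                    in b : go g'' (b : customers)
        chooseOld = let (i, g'') = randomR (0, length customers - 1) g'
                        c        = customers !! i
                    in c : go g'' (c : customers)

-- Base distribution: uniform over 1..1000.
base :: RandomGen g => g -> Int
base g = fst (randomR (1, 1000) g)

-- First 30 samples from the memoized process.
xs :: [Int]
xs = take 30 (crpMem (mkStdGen 7) 1.0 base)

main :: IO ()
main = do
  print xs
  -- With alpha = 1 the process strongly favors reuse, so the stream
  -- typically contains only a handful of distinct values.
  print (length (nub xs))
```

Because every draw is either a fresh sample from base or a repeat of an earlier one, all 30 values stay in 1..1000, and the count of distinct values is usually far below 30.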


1) Goodman, N. D., Mansinghka, V. K., Roy, D., Bonawitz, K., and Tenenbaum, J. B. (2008). Church: A language for generative models. In Uncertainty in Artificial Intelligence, Helsinki, Finland. AUAI Press.

2) Johnson, M., Griffiths, T. L., and Goldwater, S. (2007). Adaptor Grammars: A framework for specifying compositional nonparametric Bayesian models. In Advances in Neural Information Processing Systems 19, Cambridge, MA. MIT Press.

