# DPMs and Consistency

Jeff Miller and Matthew Harrison at Brown (go Bears!) have recently explored the posterior consistency of Dirichlet process mixture (DPM) models, emphasizing one particular drawback.

For setup, say you have some observed data $x_1, \dots, x_n$ from a mixture of two normals, such as

$$x_i \overset{iid}{\sim} \tfrac{1}{2}\,\mathcal{N}(\mu_1, \sigma^2) + \tfrac{1}{2}\,\mathcal{N}(\mu_2, \sigma^2).$$

In this case, the number of clusters, $k$, is two, and one would imagine that as $n$ grows, the posterior distribution of $k$ would converge to 2, i.e. $p(k = 2 \mid x_{1:n}) \to 1$. However, this is **not true** if you model the data with a DPM (or more generally, if you model the mixing measure as a Dirichlet process, $G \sim \mathrm{DP}$).
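To make this setup concrete, here is a quick simulation of data from such a two-component mixture (the particular means, variance, and mixture weight below are illustrative choices, not taken from the note):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_two_normal_mixture(n, mu=(-2.0, 2.0), sigma=1.0, w=0.5):
    """Draw n points from w * N(mu[0], sigma^2) + (1 - w) * N(mu[1], sigma^2),
    returning the data and the latent component labels."""
    z = (rng.random(n) >= w).astype(int)      # component label: 0 w.p. w, else 1
    x = rng.normal(np.asarray(mu)[z], sigma)  # draw each point from its component
    return x, z

x, z = sample_two_normal_mixture(10_000)
```

The inference question is then: given only `x` (not `z`), does the posterior on the number of clusters concentrate on 2?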

In this note, the authors show that the DPM is inconsistent for the number of clusters when the data come from a single Gaussian. In fact, in this case the inconsistency is so severe that $p(k = 1 \mid x_{1:n}) \to 0$ as $n \to \infty$.

So if $G \sim \mathrm{DP}$ doesn’t lead to convergence on the number of clusters, then what does? Miller and Harrison bring attention to a viable alternative called the Mixture of Finite Mixtures (MFM). The mixing measure $G$ is generated by:

- $K \sim p_K$, a pmf on $\{1, 2, \dots\}$
- $(\pi_1, \dots, \pi_K) \mid K \sim \mathrm{Dirichlet}_K(\gamma, \dots, \gamma)$, with atoms $\theta_1, \dots, \theta_K \overset{iid}{\sim} H$, giving $G = \sum_{k=1}^{K} \pi_k \delta_{\theta_k}$.

This alternative mixing measure generation provides a consistent posterior on the number of clusters.
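The generative recipe above can be sketched directly. The particular choices of $p_K$ (a shifted Poisson) and base measure $H$ (standard normal) below are illustrative assumptions; the MFM only requires *some* pmf on $\{1, 2, \dots\}$ and some base measure:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mfm_mixing_measure(gamma=1.0, lam=3.0):
    """Draw one MFM mixing measure G = sum_k pi_k * delta(theta_k).

    Illustrative assumptions (not prescribed by the model itself):
      p_K: Poisson(lam) shifted onto {1, 2, ...}
      H:   standard normal base measure for the component parameters
    """
    K = 1 + rng.poisson(lam)               # K ~ p_K, a pmf on {1, 2, ...}
    pi = rng.dirichlet(np.full(K, gamma))  # (pi_1, ..., pi_K) ~ Dirichlet_K(gamma, ..., gamma)
    theta = rng.normal(size=K)             # theta_k iid from H
    return pi, theta

pi, theta = sample_mfm_mixing_measure()
```

Unlike the DP, which puts mass on infinitely many atoms, each draw here is a finite mixing measure whose number of atoms $K$ has an honest prior.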

At the NIPS Workshop on Modern Nonparametric Methods in Machine Learning, Jeff presented his poster, which nicely outlines some of the advantages of the MFM model, as well as analogous representations of the model. For example, there exists a Chinese restaurant-like process for MFMs that is comparable to the DPM’s. Furthermore, the stick-breaking construction for MFMs boils down to a Poisson process on the unit interval. The poster also provides some visual intuition behind why the DP tends to overestimate the number of clusters.
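One piece of that intuition can be checked numerically: under the DP prior, the Chinese restaurant process keeps opening new (typically small) tables, so the prior number of clusters grows roughly like $\alpha \log n$ rather than stabilizing. A minimal simulation (my own sketch, not from the poster):

```python
import numpy as np

rng = np.random.default_rng(2)

def crp_num_tables(n, alpha=1.0):
    """Seat n customers by the Chinese restaurant process with concentration
    alpha and return the number of occupied tables (clusters)."""
    counts = []  # customers at each occupied table
    for _ in range(n):
        # Existing tables chosen proportional to occupancy; new table prop. to alpha.
        probs = np.array(counts + [alpha], dtype=float)
        table = rng.choice(len(probs), p=probs / probs.sum())
        if table == len(counts):
            counts.append(1)   # a new table opens
        else:
            counts[table] += 1
    return len(counts)

# Average cluster count after 1000 customers; expected value is roughly
# sum_{i=1}^{n} alpha / (alpha + i - 1) ~ alpha * log(n), i.e. about 7.5 here.
mean_tables = np.mean([crp_num_tables(1000) for _ in range(20)])
```

Even when the data truly come from one component, this prior pressure toward extra small clusters is what the posterior never fully overcomes.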

So if what’s important to you is the number of latent clusters, you might be better off using the MFM for posterior inference. It would be interesting to see how reversible jump MCMC or other trans-dimensional model selection methods compare to MFM posterior inference at recovering the number of clusters.