Generative Marginalization Models

Liu, S., Ramadge, P. J., & Adams, R. P. (2024). Generative Marginalization Models. Proceedings of the 41st International Conference on Machine Learning (ICML).
We introduce marginalization models (MAMs), a new family of generative models for high-dimensional discrete data. They offer scalable and flexible generative modeling by explicitly modeling all induced marginal distributions. Marginalization models enable fast approximation of arbitrary marginal probabilities with a single forward pass of the neural network, overcoming a major limitation of existing models for arbitrary marginal inference, such as any-order autoregressive models. MAMs also address the scalability bottleneck encountered when training any-order generative models on high-dimensional problems in the context of energy-based training, where the goal is to match the learned distribution to a given target distribution specified by an unnormalized log-probability function, such as an energy or reward function. We propose scalable methods for learning the marginals, grounded in the concept of "marginalization self-consistency". We demonstrate the effectiveness of the proposed model on a variety of discrete data distributions, including images, text, physical systems, and molecules, in both maximum-likelihood and energy-based training settings. MAMs achieve orders-of-magnitude speedups in evaluating marginal probabilities in both settings. For energy-based training tasks, MAMs enable any-order generative modeling of high-dimensional problems beyond the scale of previous methods. Code is available at github.com/PrincetonLIPS/MaM.
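
As a rough sketch of the "marginalization self-consistency" idea referenced above (our paraphrase in generic notation, not necessarily the paper's exact formulation): the network parameterizes marginals $p_\theta$ and is trained so that, for any partial assignment $x_\sigma$ and any additional variable $x_i$ with $i \notin \sigma$,

  \[ p_\theta(x_\sigma) \;=\; \sum_{x_i} p_\theta(x_i, x_\sigma), \]

i.e., summing one extra variable out of a larger marginal should recover the smaller one. Approximately enforcing this identity during training is what allows a single forward pass to return marginal probabilities for arbitrary subsets of variables.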
@conference{liu2024generative,
  year = {2024},
  author = {Liu, Sulin and Ramadge, Peter J. and Adams, Ryan P.},
  title = {Generative Marginalization Models},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning (ICML)},
  keywords = {ICML}
}