
Standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest. We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models, based on randomized truncation of infinite series. When parameterized by an encoder-decoder architecture, the parameters of the encoder can be optimized to minimize the variance of this estimator. We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling-based approach for the same average computational cost. This estimator also allows use of latent variable models for tasks where unbiased estimators, rather than marginal likelihood lower bounds, are preferred, such as minimizing reverse KL divergences and estimating score functions.
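The key ingredient, randomized truncation of an infinite series, can be illustrated with a generic Russian-roulette estimator (a minimal sketch of the general technique, not the paper's SUMO estimator itself; the function names and the choice of a geometric stopping distribution are illustrative assumptions):

```python
import random

def russian_roulette_estimate(delta, p=0.5, rng=random):
    """Unbiased estimate of sum_{k>=0} delta(k) via randomized truncation.

    Sample a truncation level K ~ Geometric(p) on {0, 1, 2, ...} and
    reweight term k by 1 / P(K >= k) = (1 - p)^{-k}, so the expectation
    of the truncated, reweighted sum equals the full infinite series.
    """
    # Sample K ~ Geometric(p): count failures before the first success.
    K = 0
    while rng.random() > p:
        K += 1
    # Reweighted partial sum over the first K + 1 terms.
    return sum(delta(k) / (1 - p) ** k for k in range(K + 1))

# Example: the geometric series sum_{k>=0} 0.5^k = 2.
rng = random.Random(0)
n = 200_000
est = sum(russian_roulette_estimate(lambda k: 0.5 ** k, 0.5, rng)
          for _ in range(n)) / n
```

Each call evaluates only finitely many terms, yet the average of many estimates converges to the true infinite sum; in SUMO the terms are differences of importance-sampled likelihood bounds rather than this toy series.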

@inproceedings{luo2020sumo,
  title     = {{SUMO}: Unbiased Estimation of Log Marginal Probability for Latent Variable Models},
  author    = {Luo, Yucen and Beatson, Alex and Norouzi, Mohammad and Zhu, Jun and Duvenaud, David and Adams, Ryan P and Chen, Ricky TQ},
  booktitle = {International Conference on Learning Representations},
  year      = {2020}
}