*Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS)*.

One approach to modeling structured discrete data is to describe the probability of states via an energy function and Gibbs distribution. A recurring difficulty in these models is the computation of the partition function, which may require an intractable sum. However, in many such models, the mode can be found efficiently even when the partition function is unavailable. Recent work on Perturb-and-MAP (PM) models has exploited this discrepancy to approximate the Gibbs distribution for Markov random fields (MRFs). Here, we explore a broader class of models, called Randomized Optimum Models (RandOMs), which include PM as a special case. This new class of models encompasses not only MRFs, but also more general types of models that have intractable partition functions but permit efficient optimization, such as those based on bipartite matchings, shortest paths, or connected components in a graph. We develop several likelihood-based learning algorithms for RandOMs, and our empirical results indicate that our methods can produce better models than the moment-matching of PM.
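The core Perturb-and-MAP idea the abstract refers to can be sketched in a few lines: perturb the energy of each state with random noise and return the optimizer. The sketch below is illustrative only, not the paper's method; it uses the Gumbel-max trick over a fully enumerated toy state space, whereas PM and RandOMs are interesting precisely because they use structured optimizers (matchings, shortest paths, etc.) with low-order perturbations. All names here (`perturb_and_map`, the toy `energies`) are invented for the example.

```python
import numpy as np

def perturb_and_map(energies, rng):
    """Sample a state index by minimizing Gumbel-perturbed energies.

    With i.i.d. Gumbel noise added independently to every joint state,
    the argmin is an exact sample from the Gibbs distribution
    p(x) = exp(-E(x)) / Z -- no partition function needed.
    """
    gumbel = rng.gumbel(size=len(energies))
    return int(np.argmin(np.asarray(energies) - gumbel))

rng = np.random.default_rng(0)
energies = [0.0, 1.0, 2.0]  # toy energy function over three states
samples = [perturb_and_map(energies, rng) for _ in range(10000)]
# Empirical frequencies approach exp(-E)/Z, i.e. roughly 0.67, 0.24, 0.09.
```

In real PM models the full independent perturbation is replaced by cheaper low-order noise on the potentials, so the MAP call remains tractable and the resulting distribution only approximates the Gibbs distribution.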

@conference{tarlow2012optimum,
  title     = {Randomized Optimum Models for Structured Prediction},
  author    = {Tarlow, Daniel and Adams, Ryan P. and Zemel, Richard S.},
  booktitle = {Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS)},
  year      = {2012},
  keywords  = {structured prediction, AISTATS}
}