Dealing with Reliability when Crowdsourcing

Robert Nishihara | Machine Learning, Statistics

I recently read the paper “Variational Inference for Crowdsourcing,” by Qiang Liu, Jian Peng, and Alexander Ihler. They present an approach that uses belief propagation to deal with worker reliability when using crowdsourcing to collect labeled data. This post is based on their exposition. Crowdsourcing (via services such as Amazon Mechanical Turk) has been used as a cheap way to amass large quantities of labeled data. However, the labels are likely to be noisy. To deal with this, a common strategy is to employ redundancy: each task is labeled by multiple workers. For simplicity, suppose there are $N$ tasks and $M$ workers, and assume that the possible labels are $\{\pm 1\}$. Define the matrix $L$ so that $L_{ij}$ is the label given to task $i$ by worker $j$ (or …
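As a concrete illustration of this setup, here is a minimal sketch in Python (with made-up worker reliabilities; this is not the paper's algorithm). It simulates $M$ workers labeling $N$ tasks, builds the label matrix $L$, and aggregates by majority vote, the natural baseline that reliability-aware methods such as the paper's belief-propagation approach aim to improve upon.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: N tasks, M workers, binary labels in {-1, +1}.
    n_tasks, n_workers = 100, 11
    true_labels = rng.choice([-1, 1], size=n_tasks)

    # Each worker has an unknown probability of answering correctly.
    reliability = rng.uniform(0.55, 0.95, size=n_workers)

    # L[i, j] = label given to task i by worker j.
    correct = rng.random((n_tasks, n_workers)) < reliability
    L = np.where(correct, true_labels[:, None], -true_labels[:, None])

    # Majority-vote baseline: the sign of each row sum. This treats all
    # workers as equally reliable; reliability-aware methods instead learn
    # how much to trust each worker.
    estimates = np.sign(L.sum(axis=1))
    print(f"majority-vote accuracy: {np.mean(estimates == true_labels):.2f}")

Using an odd number of workers avoids ties in the vote; with an even number one would also need a tie-breaking rule.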

Complexity of Inference in Bayesian Networks

Jonathan Huggins | Machine Learning

Developing efficient (i.e., polynomial-time) algorithms with guaranteed performance is a central goal in computer science (perhaps the central goal). In machine learning, inference algorithms meeting these requirements are much rarer than we would like: often an algorithm is either efficient but suboptimal, or optimal but inefficient. A number of results from the 1990s demonstrate the challenges of, but also the potential for, efficient Bayesian inference. These results were carried out in the context of Bayesian networks.
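To make "inference" concrete, here is a minimal sketch with an invented three-variable chain network $A \to B \to C$ and made-up conditional probability tables. It answers a query by brute-force enumeration: summing the joint distribution over all assignments to the hidden variables. That sum grows exponentially with the number of variables, which is the cost the hardness results say cannot always be avoided in general networks.

    import itertools

    # Invented CPTs for a toy chain A -> B -> C; every variable is binary.
    p_a = {1: 0.3, 0: 0.7}
    p_b_given_a = {(1, 1): 0.8, (0, 1): 0.2, (1, 0): 0.1, (0, 0): 0.9}  # key: (b, a)
    p_c_given_b = {(1, 1): 0.9, (0, 1): 0.1, (1, 0): 0.4, (0, 0): 0.6}  # key: (c, b)

    def joint(a, b, c):
        # The network factorizes the joint as P(A) P(B|A) P(C|B).
        return p_a[a] * p_b_given_a[(b, a)] * p_c_given_b[(c, b)]

    # Query P(A = 1 | C = 1) by summing out the hidden variable B.
    # Brute-force enumeration like this is exponential in the number of
    # hidden variables, which is why efficient exact inference is hard.
    num = sum(joint(1, b, 1) for b in (0, 1))
    den = sum(joint(a, b, 1) for a, b in itertools.product((0, 1), repeat=2))
    print(f"P(A=1 | C=1) = {num / den:.3f}")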

It Depends on the Model

Peter Krafft | Meta, Statistics

In my last blog post I wrote about the asymptotic equipartition principle. This week I will write about something completely unrelated. This blog post evolved from a discussion with Brendan O’Connor about science and evidence. The back story is as follows.