Moved to Princeton!

Ryan Adams · Blog, Meta

After several wonderful years at Harvard, and some fun times at Twitter and Google, I’ve moved to Princeton. I’ll miss all my amazing colleagues at Harvard and MIT, but I’m excited for the unique opportunities Princeton has to offer. I’ve renamed the group from “Harvard Intelligent Probabilistic Systems” (HIPS) to the “Laboratory for Intelligent Probabilistic Systems” (LIPS). (I should’ve listened to the advice not to put the name of the university in the group name…) I’ve moved all the HIPS blog posts over to this new WordPress site, but I will keep the HIPS GitHub, as that is where some well-known projects live, such as Autograd and Spearmint. For new projects, I’ve created a new repository at https://github.com/PrincetonLIPS. … Read More

Introspection in AI

Roger Grosse · Blog, Machine Learning, Meta

I’ve recently come across a fascinating blog post by Cambridge mathematician Tim Gowers. He and computational linguist Mohan Ganesalingam built a sort of automated mathematician that does the kind of “routine” mathematical proofs that mathematicians can do without backtracking. Their system was based on a formal theory of the semantics of mathematical language, together with introspection into how they solved problems. In other words, they worked through lots of simple examples and checked that their AI could solve the problems in a way that was cognitively plausible. The goal wasn’t to build a useful system (standard theorem provers are way more powerful), but to provide insight into our problem-solving process. This post reminded me that, while our field has … Read More

Learning Theory: Purely Theoretical?

Jonathan Huggins · Machine Learning, Meta

What’s learning theory good for, anyway? As I mentioned in my earlier blog post, I not infrequently get into conversations with people in machine learning and related fields who don’t see the benefit of learning theory (that is, the theory of learning). While that post offered one specific piece of evidence of how work seemingly relevant only to pure theory could lead to practical algorithms, I thought I would discuss in more general terms why I see learning theory as a worthwhile endeavor. There are two main flavors of learning theory: statistical learning theory (StatLT) and computational learning theory (CompLT). StatLT originated with Vladimir Vapnik, while the canonical example of CompLT, PAC learning, was formulated by Leslie Valiant. StatLT, in line with its “statistical” descriptor, … Read More
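For readers who haven’t met it, here is the standard (realizable) PAC criterion that Valiant introduced; this is textbook material rather than anything quoted from the post itself. A concept class $\mathcal{C}$ is PAC-learnable if there is an algorithm $A$ such that, for every target concept $c \in \mathcal{C}$, every input distribution $\mathcal{D}$, and every $\epsilon, \delta \in (0,1)$,

\[
\Pr_{S \sim \mathcal{D}^m}\big[\, \mathrm{err}_{\mathcal{D}}(A(S)) \le \epsilon \,\big] \;\ge\; 1 - \delta,
\qquad \text{where } \mathrm{err}_{\mathcal{D}}(h) = \Pr_{x \sim \mathcal{D}}[\, h(x) \ne c(x) \,],
\]

whenever the sample size $m$ (and, for efficient PAC learning, the running time of $A$) is polynomial in $1/\epsilon$, $1/\delta$, and the size of the problem. StatLT asks closely related questions, for example bounding the gap between training and test error uniformly over a hypothesis class via quantities such as the VC dimension.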

Is AI scary?

Eyal Dechter · Meta

In today’s New York Times, Huw Price, professor of philosophy at Cambridge, writes about the need to consider the potential dangers associated with a possible “singularity.” The singularity is the idea, I guess, that if people create machines that are smarter than people, then those machines would be smart enough to create machines smarter than themselves, etc., and that there would be an exponential explosion in artificial intelligence. Price suggests that whether or not the singularity is likely enough to warrant study in its own right, it is the possible danger associated with it that makes it important. I’m not remotely worried about this. As someone who has been toiling away for many months at creating an artificial intelligence algorithm … Read More

It Depends on the Model

Peter Krafft · Meta, Statistics

In my last blog post I wrote about the asymptotic equipartition principle. This week I will write about something completely unrelated. This blog post evolved from a discussion with Brendan O’Connor about science and evidence. The back story is as follows.

Markov chain centenary

Elaine Angelino · Machine Learning, Meta

I just attended a fun event, Celebrating 100 Years of Markov Chains, at the Institute for Applied Computational Science. There were three talks and they were taped, so hopefully you will be able to find the videos through the IACS website in the near future. Below, I will review some highlights of the first two talks by Brian Hayes and Ryan Adams; I’m skipping the last one because it was more of a review of concepts building up to and surrounding Markov chain Monte Carlo (MCMC). The first talk was intriguingly called “First Links in the Markov Chain: Poetry and Probability”
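To make the centenary’s object of celebration concrete, here is a minimal sketch of a two-state Markov chain and its long-run behavior; the chain and its transition probabilities are invented for illustration and are not taken from either talk.

import numpy as np

# Hypothetical two-state chain; P[i, j] = Pr(next state = j | current state = i).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

rng = np.random.default_rng(0)

def simulate(P, n_steps, start=0):
    # Draw a trajectory of length n_steps by repeatedly sampling the next
    # state from the row of P indexed by the current state.
    states = [start]
    for _ in range(n_steps - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

traj = simulate(P, 10_000)
# The empirical occupancy of each state approaches the stationary
# distribution pi = [5/6, 1/6], the left eigenvector of P with eigenvalue 1.
print(np.bincount(traj) / len(traj))

This convergence to a stationary distribution is exactly the property that MCMC methods exploit: they construct a chain whose stationary distribution is the target distribution of interest.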

Should neurons be interpretable?

Eyal Dechter · Meta, Neuroscience

One basic aim of cognitive neuroscience is to answer questions like 1) what does a neuron or a group of neurons represent, and 2) how is cognitive computation implemented in neuronal hardware? A common critique is that the field has simply failed to shed light on either of these questions. Our experimental techniques are perhaps too crude: fMRI’s temporal resolution is way too slow, EEG and MEG’s spatial resolution is far too coarse, electrode recordings miss the forest for the trees. But underlying these criticisms is the assumption that there is some implementation-level description of neural activity that is interpretable at the level of cognition: if only we recorded from a sufficient number of neurons and actually knew what the … Read More

New Blog

Ryan Adams · Meta

I’m excited to announce a new collaborative blog, written by members of the Harvard Intelligent Probabilistic Systems group. Broadly, our group studies machine learning, statistics, and computational neuroscience, but we’re interested in lots of things outside these areas as well. The idea is to use this as a venue to discuss interesting ideas and results — new and old — about probabilistic modeling, inference, artificial intelligence, theoretical neuroscience, or anything else research-related that strikes our fancy. There will be posts from folks at both Harvard and MIT, in the computer science, mathematics, biophysics, and brain and cognitive sciences (BCS) departments, so expect a wide variety of interests. — Ryan Adams