My aim in this introductory post is to provide context for future contributions by sharing my thoughts on the role of computer scientists in the study of the brain, particularly in the field of computational neuroscience. For a field with “computation” in the name, it seems that computer scientists are underrepresented. I believe there are significant open questions in neuroscience that are best addressed by those who study the theory of computation, learning, and algorithms, and the systems upon which they are premised.
In my opinion “computational neuroscience” has two definitions: the first, from Marr, is the study of the computational capabilities of the brain, their algorithmic details, and their implementation in neural circuits; the second, stemming from machine learning, is the design and application of computational algorithms, either to solve problems in a biologically-inspired manner, or to aid in the processing and interpretation of neural data. Though quite different, I believe these are complementary and arguably co-dependent endeavors. The forward hypothesis generation advocated by the former seems unlikely to get the details right without the aid of computational and statistical tools for extracting patterns from neural recordings, guiding hypothesis generation, and comparing the evidence for competing models. Likewise, attempts to infer the fundamentals of neural computation from the bottom up without strong inductive biases appear doomed to wander the vastness of the hypothesis space. How, then, can computer scientists contribute to both aspects of computational neuroscience?
Ultimately the brain is a biological computing device and, as such, should fall within the purview of the theory of computation. The brain, like our silicon computers, is also a dynamical system, but when we describe in silico computation we do not discuss attractors and limit cycles (except in the case of infinite loops!). We discuss instructions and variables, the manipulation of symbols given a model of computation. Such abstractions allow us to analyze the complexity of programs. Similarly, I think neural computation should be studied with an appropriate model of computation that enables theoretical analysis and allows us to rule out hypotheses inconsistent with realistic complexity bounds.
For example, the Bayesian brain hypothesis states that the brain is an inference engine, evaluating posterior probabilities of the world given sensory observations. Unfortunately, even approximate inference is, in general, NP-hard (w.r.t. Turing machines, though there is little evidence the brain solves NP-hard problems). Of course the brain could be doing variational inference, operating on sufficient statistics, using belief propagation, etc.; such refined algorithmic hypotheses are born of simple complexity arguments. Though I am skeptical of the algorithmic theories put forth thus far, I think the Bayesian brain is an attractive computational hypothesis. Probabilistic programming, which combines the expressiveness of Bayesian networks with deterministic control flow, seems like a natural language for neural computation. How such programs are “compiled” to run on a plausible neural architecture, what instruction set is appropriate for such an architecture, what functions would be expected of a neural “operating system,” and how neural programs are learned and updated through experience, are all questions that computer scientists are particularly suited to address.
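To make the inference framing concrete, here is a minimal toy sketch of my own (not a claim about neural implementation): a “probabilistic program” with a discrete latent world state and a noisy sensory observation, where the posterior is computed exactly by enumeration. The states, prior, and noise level are all illustrative assumptions.

```python
import numpy as np

# Toy Bayesian inference: discrete latent world state, Gaussian sensory noise.
states = np.array([0.0, 1.0, 2.0])   # hypothetical world states
prior = np.array([0.5, 0.3, 0.2])    # prior belief over states
noise = 0.5                          # sensory noise (std. dev.)

def likelihood(obs, state):
    """Gaussian sensory model p(obs | state)."""
    return np.exp(-0.5 * ((obs - state) / noise) ** 2) / (noise * np.sqrt(2 * np.pi))

def posterior(obs):
    """Bayes' rule by enumeration: p(state | obs) ∝ p(obs | state) p(state)."""
    unnorm = prior * np.array([likelihood(obs, s) for s in states])
    return unnorm / unnorm.sum()

post = posterior(0.9)
print(post)  # belief shifts toward the state nearest the observation
```

Enumeration is tractable only because the latent space here is tiny; the NP-hardness results above are precisely about what happens when it is not, which is what motivates variational and message-passing approximations.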
In some sense, however, formulating reasonable hypotheses is not the hard part. Choosing among reasonable hypotheses seems to be the challenge. Ideally, we would like to compare the statistical evidence for competing hypotheses given actual data. Complex models with many parameters and large amounts of data present a computational challenge where, again, computer scientists, particularly machine learning researchers, may contribute. One of my projects has been the creation of a general probabilistic model for spike trains recorded from populations of neurons in which algorithmic hypotheses are expressed as priors on the functional connectivity among the population. A fully Bayesian inference algorithm enables principled model comparison. This is just a start – not all hypotheses are naturally expressed as functional connectivity priors – but I believe this is ultimately how the comparison of complex models with many parameters must be done.
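A heavily simplified sketch of the idea (not the actual model from this project): a Bernoulli spiking network in which a hypothesis is encoded as a prior over the connectivity matrix `W`, and two priors are compared by a crude Monte Carlo estimate of the marginal likelihood. Every number and name here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 3, 200  # neurons, time bins

def simulate(W, T):
    """Generate spikes: each bin's firing probability depends on the last bin via W."""
    S = np.zeros((T, N))
    S[0] = rng.random(N) < 0.5
    for t in range(1, T):
        p = 1.0 / (1.0 + np.exp(-(S[t - 1] @ W)))  # logistic link
        S[t] = rng.random(N) < p
    return S

def log_lik(W, S):
    """Log-likelihood of the spike train under connectivity W."""
    p = 1.0 / (1.0 + np.exp(-(S[:-1] @ W)))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return np.sum(S[1:] * np.log(p) + (1 - S[1:]) * np.log(1 - p))

W_true = np.array([[0.0, 2.0, 0.0], [0.0, 0.0, 2.0], [-2.0, 0.0, 0.0]])
S = simulate(W_true, T)

def log_evidence(prior_scale, n_samples=500):
    """Crude estimate: log p(S | prior) ≈ log mean_k p(S | W_k), W_k ~ N(0, prior_scale²)."""
    lls = [log_lik(prior_scale * rng.standard_normal((N, N)), S)
           for _ in range(n_samples)]
    m = max(lls)
    return m + np.log(np.mean(np.exp(np.array(lls) - m)))

# Compare two "hypotheses": weak- vs. strong-coupling priors on W.
print(log_evidence(0.1), log_evidence(2.0))
```

Naive Monte Carlo evidence estimates break down quickly as dimensionality grows; the point of the sketch is only the structure of the comparison, in which the hypothesis lives in the prior and the data arbitrates via the marginal likelihood.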
Of course, it is possible that none of our top-down hypotheses have particularly strong evidence, in which case we would like the data to guide hypothesis generation. This is, essentially, unsupervised machine learning. If the hypothesis class is a space of probabilistic programs, then it is program induction. If the hypothesis class is the space of binary functions, then it is learning Boolean circuits. I’ve expressed my doubts about achieving this in full generality, but I think there is hope in an active-learning paradigm, especially when our inductive biases limit the hypothesis space. Understanding the theoretical limits of such a paradigm would be of interest to the learning theory community, but also has a natural interpretation in the neural setting. Optical tools allow us to manipulate subpopulations of neurons and observe the response of others. A general method for leveraging this white-box access could find broad application in hypothesis testing and generation.
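The active-learning idea can be sketched as a toy loop (all names and numbers are my illustrative assumptions, not from any real experiment): maintain a posterior over a small set of candidate “wiring” hypotheses, stimulate the neuron whose perturbation is expected to be most informative, and update beliefs from the observed response.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each hypothesis says which neuron (0, 1, or 2) drives neuron 3.
hypotheses = [0, 1, 2]
true_driver = 1
belief = np.ones(3) / 3  # uniform prior over hypotheses

def p_response(stim, hyp, eps=0.1):
    """P(neuron 3 responds | we stimulate `stim`, under hypothesis `hyp`)."""
    return 1 - eps if stim == hyp else eps

def likelihoods(stim, resp):
    """Likelihood of the observed response under each hypothesis."""
    return np.array([p_response(stim, hyp) if resp else 1 - p_response(stim, hyp)
                     for hyp in hypotheses])

def expected_entropy_after(stim, belief):
    """Expected posterior entropy if we stimulate `stim` (Bayesian experimental design)."""
    h = 0.0
    for resp in (0, 1):
        like = likelihoods(stim, resp)
        p_resp = like @ belief
        post = like * belief / p_resp
        h += p_resp * -np.sum(post * np.log(post + 1e-12))
    return h

for trial in range(10):
    # Choose the stimulation that most reduces uncertainty, then observe and update.
    stim = min(range(3), key=lambda s: expected_entropy_after(s, belief))
    resp = int(rng.random() < p_response(stim, true_driver))
    belief = likelihoods(stim, resp) * belief
    belief /= belief.sum()

print(belief)  # belief over which neuron drives neuron 3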
I suspect neural computation is governed by sophisticated programs rather than simple laws, and that the brain is better viewed as a computational machine than as a dynamical system. As we probe further into the workings of the brain the need for a computer science perspective in hypothesis generation and testing, as well as in the interpretation of large and complex datasets, will only become more pressing.