In my previous post I suggested that models of neural computation can be expressed as prior distributions over functional and effective connectivity, and with this common specification we can compare models by their posterior probability given neural recordings. I would like to explore this idea in more detail by first describing functional and effective connectivity and then considering how various models could be expressed in this framework.
Functional and effective connectivity are concepts originating in neuroimaging and spike train analysis. Functional connectivity measures the correlation between neurophysiological events (e.g. spiking in neurons or the BOLD signal in fMRI voxels), whereas effective connectivity is a statement about the causal nature of a system. Effective connectivity captures the influence one neurophysiological event has upon another, either directly via a synapse, or indirectly via a polysynaptic pathway or a parallel connection. In my usage, effective connectivity may include deterministic as well as stochastic relationships. Both concepts stand in contrast to structural connectivity, which captures the physical synapses or fiber tracts within the brain. Of course these concepts are interrelated: functional and effective connectivity are ultimately mediated by structural connectivity, and causal effective connections imply correlational functional connections.
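To make the distinction concrete, here is a minimal sketch of functional connectivity as a matrix of pairwise correlations between binned spike counts. The simulated data, the shared-drive construction, and all parameters are illustrative, not drawn from any real recording:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate binned spike counts for 4 neurons over 1000 time bins.
# Neurons 0 and 1 receive a common drive, so they should correlate
# (a functional connection) even though no direct synapse is modeled.
common = rng.poisson(2.0, size=1000)
counts = np.vstack([
    common + rng.poisson(1.0, size=1000),  # neuron 0
    common + rng.poisson(1.0, size=1000),  # neuron 1
    rng.poisson(3.0, size=1000),           # neuron 2 (independent)
    rng.poisson(3.0, size=1000),           # neuron 3 (independent)
])

# Functional connectivity: pairwise Pearson correlations between neurons.
fc = np.corrcoef(counts)
print(fc[0, 1], fc[0, 2])  # shared drive yields the stronger correlation
```

Note that this example also illustrates why functional connectivity alone cannot settle causal questions: the correlation between neurons 0 and 1 here arises from a common input, not from any influence of one neuron on the other.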
Many models of neural computation are expressed in causal terms, so it is natural to consider their implications for effective connectivity. First we must decide upon a neural representation of the model variables. Some models explicitly describe cell types as well as their synaptic interactions with other cells; for example, the linear receptive field model of the retina has ON-center/OFF-center retinal ganglion cells which excite or inhibit their neighbors. In this case we can easily determine a distribution over effective connectivity given a distribution over cell types and their spatial locations. Other models are defined at a computational level in terms of abstract variables, but the encoding of these variables into neural firings is undetermined. For example, consider a motor control model which takes input variables corresponding to the length, velocity, and activation of muscles, and computes a coordinated firing of multiple muscles known as a muscle synergy. There are many ways these variables could be encoded: they could be represented by the firing rate of a set of neurons, or perhaps by a population code. In either case, we can treat the role each neuron plays in the representation of length, velocity, etc. as a latent variable and derive a joint distribution over latent variables and effective connectivity. By marginalizing out the latent variables we arrive at the desired distribution over effective connectivity.
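The marginalization step can be sketched in a toy setting. Here each neuron's latent role is a cell type (loosely inspired by the ON/OFF example above), and an assumed connection rule maps a pair of types to a probability that their effective connection is excitatory; the type prior and the rule are hypothetical placeholders for whatever a real model specifies:

```python
import itertools

types = ["ON", "OFF"]
# Assumed prior over each neuron's latent type (independent and uniform here).
p_type = {"ON": 0.5, "OFF": 0.5}

def p_excitatory(pre, post):
    """Assumed connection rule: like types tend to excite, unlike to inhibit."""
    return 0.9 if pre == post else 0.1

# Marginalize the latent types of a pre/post pair to obtain the prior
# probability that the effective connection between them is excitatory:
# P(excitatory) = sum over types of P(pre) * P(post) * P(excitatory | types).
prior_excit = sum(
    p_type[pre] * p_type[post] * p_excitatory(pre, post)
    for pre, post in itertools.product(types, types)
)
print(prior_excit)
```

In a realistic model the latent variable would range over encoding roles (e.g. which muscle-length or velocity signal a neuron carries), but the structure of the computation is the same: a joint distribution over roles and connections, with the roles summed or integrated out.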
We can use such a distribution over functional or effective connectivity as a prior in a Bayesian model of neural recordings. So long as the connectivity is a latent parameter that generates observable data, we can use Bayesian model comparison to evaluate the most likely model given measured data. Several issues still remain. We must have enough data to adequately fit the latent parameters of the prior, and it is not clear how to determine this sample complexity, nor whether modern recording techniques will suffice. Furthermore, I have neglected the murky question of how to actually define a functional or effective connectivity distribution, e.g. are pairwise connections sufficient, or must we consider third- and higher-order connectivity? These are practical considerations en route to realizing a principled model comparison framework.
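The comparison itself can be illustrated in miniature. Below, two candidate models are reduced to different priors over a single latent connection weight, the "recordings" are synthetic noisy observations of that weight, and each model is scored by its marginal likelihood (the likelihood integrated over the prior, here by simple grid quadrature). Everything in this sketch, including the Gaussian likelihood and both priors, is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def norm_logpdf(x, mu, sigma):
    """Log density of a Gaussian, written out to keep the sketch self-contained."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Synthetic "recordings": noisy observations generated from a true weight.
true_w, noise = 0.8, 0.2
data = true_w + noise * rng.standard_normal(50)

# Grid over the latent weight w for numerical integration.
w = np.linspace(-3.0, 3.0, 2001)
dw = w[1] - w[0]

def log_evidence(prior_mu, prior_sigma):
    """Approximate log marginal likelihood: integrate likelihood * prior over w."""
    log_like = norm_logpdf(data[None, :], w[:, None], noise).sum(axis=1)
    log_joint = log_like + norm_logpdf(w, prior_mu, prior_sigma)
    m = log_joint.max()  # log-sum-exp trick for numerical stability
    return m + np.log(np.sum(np.exp(log_joint - m)) * dw)

# Model A's prior expects a strong positive connection; model B's expects none.
scores = {"A": log_evidence(0.8, 0.3), "B": log_evidence(0.0, 0.3)}
print(max(scores, key=scores.get))  # the prior nearer the truth wins
```

The real problem differs in scale rather than in kind: the latent parameter becomes an entire connectivity structure, and the integral over it is what makes the sample-complexity question above nontrivial.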
For a nice discussion of functional and effective connectivity and models thereof, see Chapters 19 and 20 of Human Brain Function, 2nd Edition by Ashburner, Friston, and Penny. Draft available at: http://www.fil.ion.ucl.ac.uk/spm/doc/books/hbf2/