One basic aim of cognitive neuroscience is to answer questions like 1) what does a neuron or a group of neurons represent, and 2) how is cognitive computation implemented in neuronal hardware? A common critique is that the field has simply failed to shed light on either question. Our experimental techniques are perhaps too crude: fMRI is far too slow, EEG and MEG are far too spatially coarse, and electrode recordings miss the forest for the trees. But underlying these criticisms is the assumption that there is some implementation-level description of neural activity that is interpretable at the level of cognition: if only we recorded from enough neurons and actually knew what the underlying connectivity looked like, then we could finally figure out what neurons are doing and what they represent -- whether that's features, or population codes, or prediction errors, or whatever.
Is this a reasonable thing to hope for? Should neurons be interpretable at all? Clearly not, the Marr Level-1-ophiles (those who privilege the computational level over the implementation) will argue. After all, you wouldn't hope to learn how a computer works by watching its bits flip, right?
I wonder whether this analogy is fundamentally wrong. What assumption about how neurons work would make it so? One possibility is that we are assuming that changes in the brain (changes in its programming) are spatially local and fine-grained: that is, they occur on the scale of individual neurons. So if those changes are to be semantically coherent, then that fine grain must itself carry coherent semantics. In computers, on the other hand, the relevant changes are the ones we induce when we program them, and those are coarse: we write whole programs to change many behavioral elements at once. Units smaller than the ones we manipulate therefore need not themselves carry any coherent semantic value.
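To make the computer half of that contrast concrete, here is a toy sketch (entirely hypothetical; the sources SRC_A and SRC_B and the function f are made up for illustration): we make one coarse, semantic edit at the level we actually program, the source code, and then ask what that edit looks like at the level of the substrate, the raw bytes of the compiled artifact.

```python
# One coarse, semantic edit at the level we program (the source)...
SRC_A = "def f(x):\n    return 2 * x + 1\n"
SRC_B = "def f(x):\n    return 2 * x - 1\n"   # the edit: '+' becomes '-'

import marshal

# Compile both versions with identical filenames so the only differences
# come from the edit itself.
code_a = compile(SRC_A, "<src>", "exec")
code_b = compile(SRC_B, "<src>", "exec")

# ...then inspect it at the level of the substrate: serialize each code
# object to raw bytes, the closest thing to "bit flipping" we can easily see.
bytes_a = marshal.dumps(code_a)
bytes_b = marshal.dumps(code_b)

# Which byte positions, and how many individual bits, differ between the two?
diffs = [(i, a ^ b) for i, (a, b) in enumerate(zip(bytes_a, bytes_b)) if a != b]
print("byte positions that differ:", [i for i, _ in diffs])
print("total bits flipped:", sum(bin(x).count("1") for _, x in diffs))

# None of those individual bits "means" subtraction; the semantics live at
# the level of the edit we made, not at the level of the units we measured.
```

The exact count of flipped bits isn't the point; the point is that the bits that change are only interpretable relative to the coarse edit that produced them, which is exactly the situation the bit-flipping analogy assumes.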
Does this distinction seem like the right one?