# What is the Computational Capacity of the Brain?

Ryan Adams · January 30, 2013

One big recent piece of news from across the Atlantic is that the European Commission is funding a brain simulation effort, the Human Brain Project, to the tune of a billion euros. Roughly, the objective is to simulate the entire human brain using a supercomputer. Needless to say, many people are skeptical, and there are lots of reasons to think the project is unlikely to yield useful results. One criticism centers on whether even a supercomputer can simulate the complexity of the brain. A first step toward simulating a brain is estimating how many FLOP/s (floating point operations per second) would be necessary to implement similar functionality in conventional computer hardware. Here I will discuss two back-of-the-envelope estimates of this computational capacity. (Don't take these too seriously; they're a little crazy and just shooting for orders of magnitude.)

## Take One: Count the Integrating Neurons

People who know about such things put the human brain at roughly 1e11 neurons. On average, let's assume that a neuron has incoming connections from 1e3 other neurons. Furthermore, let's assume that neurons fire at an average rate of about 1Hz (peak rates are far higher, but most neurons are quiet most of the time) and that the post-synaptic neuron does the equivalent of 10 floating point operations each time it receives a spike. I base this sketchy guess on what would be necessary to perform a cumulative sum for an integrate-and-fire model. Put these numbers together: each neuron sees about 1e3 incoming spikes per second, so you need roughly 1e11 x 1e3 x 10 = 1e15 floating point operations per second, or one petaFLOP/s.
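To make the 10-flop-per-spike guess concrete, here is a minimal leaky integrate-and-fire update of my own (not a model from the post, and the leak and threshold values are arbitrary); the point is just that each incoming spike costs only a handful of floating point operations:

```python
# Minimal leaky integrate-and-fire neuron (illustrative sketch only).
# Each incoming spike costs a few floating point operations, in line
# with the ~10-flop-per-spike guess above.

def lif_step(v, spike_inputs, weights, leak=0.9, threshold=1.0):
    """Advance one time step: leak, accumulate weighted spikes, test threshold."""
    v = v * leak                # 1 multiply for the leak
    for i in spike_inputs:      # for each presynaptic neuron that fired
        v += weights[i]         # 1 add per incoming spike
    fired = v >= threshold      # 1 comparison
    if fired:
        v = 0.0                 # reset the membrane potential after a spike
    return v, fired

# Example: three presynaptic neurons fire with these (made-up) weights.
weights = {0: 0.4, 1: 0.5, 2: 0.3}
v, fired = lif_step(0.0, [0, 1, 2], weights)
print(v, fired)  # → 0.0 True (threshold crossed, neuron fires and resets)
```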

## Take Two: Multiply Up the Retina

This computation is due to Hans Moravec, discussed in this article. The basic idea is to take the retina, which is a piece of the brain that we understand pretty well, and assume that the rest of the brain is similar. The starting point is to imagine that the retina is processing a 1e6 "pixel" image about ten times a second. If you figure that each of these images takes about 1e8 floating point operations to process, then you're looking at a gigaFLOP/s to match the processing power of the retina. The brain is about 1e5 times larger than the retina, which gets you to 1e14 FLOP/s. I think this is kind of fun, because it's a different way to think about the problem, but it isn't impossibly far from the previous estimate.
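The arithmetic here is just a chain of multiplications; a quick sketch, using the numbers from the post (the per-pixel flop count is implied by the 1e8-flops-per-image figure):

```python
# Moravec-style estimate, using the numbers from the post.
pixels = 1e6                    # retinal "image" size
frames_per_sec = 10             # processed about ten times a second
flops_per_image = 100 * pixels  # ~1e8 flops per image, i.e. ~100 per pixel

retina_flops = frames_per_sec * flops_per_image  # ~1e9 FLOP/s (a gigaFLOP/s)
brain_over_retina = 1e5                          # brain ~1e5 times the retina
brain_flops = retina_flops * brain_over_retina   # ~1e14 FLOP/s

print(f"retina: {retina_flops:.0e} FLOP/s, whole brain: {brain_flops:.0e} FLOP/s")
```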

It's also interesting because a petaFLOP/s is well within our current computational capabilities. An off-the-shelf gaming GPU such as the NVIDIA GeForce GTX 680 can do about 3 teraFLOP/s (single precision), so a few hundred of them would provide the raw throughput -- a lot of hardware, but not impossibly much. Indeed, Oak Ridge National Laboratory's Titan has already hit 18 petaFLOP/s on the Linpack benchmark.
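For scale, the raw GPU count needed to reach a petaFLOP/s is a one-line calculation (ignoring interconnect, memory, and utilization, which in practice dominate):

```python
import math

target = 1e15    # one petaFLOP/s
per_gpu = 3e12   # GTX 680, roughly 3 teraFLOP/s single precision

gpus_needed = math.ceil(target / per_gpu)
print(gpus_needed)  # → 334
```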

(Edit: This post originally incorrectly said that the GTX 680 had 3 gigaFLOP/s.)