Detailed simulations of all the neurons in our brain, or large portions thereof, have been in the news lately. The first I am aware of was performed by Eugene Izhikevich in 2005. His model consisted of 100 billion model neurons, and a second of real time took about 50 days to simulate on a computer cluster. A number of similar efforts are underway. Perhaps the most publicized is the Blue Brain project in Switzerland led by Henry Markram (you can see his TED talk for an accessible explanation of the ideas behind the project).
Such simulations provide some evidence that computers may soon be powerful enough to simulate entire brains. But they also reveal how little we know about the central nervous system. Few of these simulations do much that would be of interest to non-scientists.
An exception may be the recent Spaun, a simulation of a network 10,000 times smaller than the brain. Yet these cells do something quite interesting: Spaun can recognize patterns in sequences of observed numbers, and it can respond to questions about these sequences by writing out answers using a robotic arm. Spaun interacts with the world in a very limited way, however. It cannot choose what to look at, or move an object.
Despite containing 10,000 times more model neurons than Spaun, other large scale simulations of the brain typically do not perform any tasks. So what is missing? If we already have machines with the potential to simulate entire brains, why can't we build computers that think and reason the way we do? The answer is that we will need to know much more about our brain before we can even try to describe its function algorithmically.
Even our understanding of single cells in the nervous system is lacking. Some assume that this is unimportant: possibly it will be sufficient to describe each neuron as a simple integrator that responds to inputs by producing an electrical pulse. However, there is much evidence that this may not be the case (here is an interesting discussion). The structure of most neurons is complex, and this may help them process information in complicated ways. To simulate their function we may need accurate descriptions of what happens along every one of their branches. And neurons may only be part of the story: none of the large scale simulations I've seen include cells called glia. Glia may outnumber neurons in the brain by a factor of 10, and may be quite important in neural computation.
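To make the "simple integrator" picture concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of simplified unit such simulations often assume is sufficient. This is an illustration only; the function name and all parameter values are my own and are not taken from any of the projects discussed.

```python
# A minimal leaky integrate-and-fire neuron: the "simple integrator"
# view of a cell. All parameter values are illustrative, not fitted
# to any real neuron.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, resistance=10.0):
    """Return the membrane voltage trace and spike times (in ms)."""
    v = v_rest
    voltages, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # Voltage leaks back toward rest while integrating the input.
        dv = (-(v - v_rest) + resistance * i_in) / tau
        v += dv * dt
        if v >= v_thresh:              # threshold crossed:
            spike_times.append(step * dt)  # emit an electrical pulse
            v = v_reset                    # and reset the membrane
        voltages.append(v)
    return voltages, spike_times

# A constant 2 nA drive for 100 ms produces a regular spike train.
trace, spike_times = simulate_lif([2.0] * 1000)
```

A model like this reduces each cell to a single equation. The point of the paragraph above is that real neurons, with their branched structure, may require far richer descriptions.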
But suppose that we understand how single neurons work. We would still need to know how they are wired together to describe the networks of our brain. This is why a number of neuroscientists have advocated painstakingly mapping out all the connections between the cells in our brains. (Olaf Sporns and Sebastian Seung have interesting books on the subject). But again, the connectome is likely only part of the necessary description. Consider the worm C. elegans. Its connectome has been known for decades. While this blueprint was invaluable in helping us understand the organism, I know of no computer simulations of a behaving worm. Connections between cells are not static. They evolve constantly as a consequence of learning, neuromodulation, and other population level processes. (Sebastian Seung and Anthony Movshon recently debated the importance of mapping the connectome. The debate is worth watching.)
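The point that a wiring diagram is incomplete can be sketched in a few lines: the connectome fixes which cells connect, but the strengths of those connections keep changing with activity. The toy Hebbian update below is my own illustrative example, not a model used by any project mentioned here.

```python
# A static wiring diagram (who connects to whom) versus its changing
# strengths. Cell names, weights, and the learning rate are all
# illustrative.

connectome = {            # directed edges: presynaptic -> postsynaptic
    ("A", "B"): 0.5,
    ("B", "C"): 0.5,
    ("A", "C"): 0.5,
}

def hebbian_step(weights, activity, rate=0.1):
    """Strengthen edges whose two endpoints were co-active."""
    return {
        (pre, post): w + rate * activity[pre] * activity[post]
        for (pre, post), w in weights.items()
    }

# Same wiring diagram, different strengths after one bout of activity:
# the edge A -> B strengthens; edges into the silent cell C do not.
updated = hebbian_step(connectome, {"A": 1.0, "B": 1.0, "C": 0.0})
```

Even in this cartoon, knowing the edges alone does not determine what the network computes a moment later; the same issue, vastly magnified, faces any simulation built from a measured connectome.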
This is also where Spaun differs. Its overall architecture is somewhat akin to that of an actual brain. However, the details of its connections have been designed for a particular way of processing the data. While this makes Spaun work, it is unclear whether this is the way an actual brain processes information. Spaun does some very interesting things that almost make it seem intelligent, and it uses networks of neuron-like units to perform these tasks, but it is unclear whether it does so in the same way as our brains. This distinction is important: we have developed computers that perform as well as or better than humans on many tasks, such as the Jeopardy-playing Watson. However, it is almost certain that our brains perform the same tasks in a much different way.
The belief that large scale simulations of the brain will succeed sometimes becomes almost mystical. Although rarely stated outright, some suggest that intelligence can appear nearly miraculously: perhaps when sufficiently many simple units are connected, their collective activity will self-organize and intelligent behavior will emerge. There are nice examples of emergent behavior. For instance, swarming and flocking animals frequently interact only with their neighbors. These simple local interactions can give rise to beautiful global patterns. Such patterned motion can help a population avoid predators, or make a decision. However, it is quite unclear that this type of "collective intelligence" will have direct parallels in the brain.
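The flocking example above is easy to demonstrate. The sketch below is a minimal Vicsek-style alignment model of my own construction (the function names and parameters are illustrative): each agent follows a purely local rule, aligning with neighbors within a small radius, yet the group's motion becomes globally ordered.

```python
import math
import random

def order_parameter(headings):
    """1.0 means perfectly aligned motion; near 0 means random."""
    sx = sum(math.cos(h) for h in headings)
    sy = sum(math.sin(h) for h in headings)
    return math.hypot(sx, sy) / len(headings)

def flock(n=50, radius=0.2, speed=0.01, steps=200, seed=0):
    """Agents on a unit torus align with neighbors within `radius`."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    headings = [rng.uniform(-math.pi, math.pi) for _ in range(n)]
    initial = order_parameter(headings)
    for _ in range(steps):
        new = []
        for xi, yi in pos:
            # Purely local rule: average the headings of nearby agents
            # (each agent counts itself, so the sums are never empty).
            sx = sy = 0.0
            for (xj, yj), h in zip(pos, headings):
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                    sx += math.cos(h)
                    sy += math.sin(h)
            new.append(math.atan2(sy, sx))
        headings = new
        # Everyone steps forward; positions wrap around the torus.
        pos = [((x + speed * math.cos(h)) % 1.0,
                (y + speed * math.sin(h)) % 1.0)
               for (x, y), h in zip(pos, headings)]
    return initial, order_parameter(headings)

initial_order, final_order = flock()
```

Global order emerges here from nothing but local averaging, which is exactly what makes emergence seductive; the paragraph's caution stands, though, since nothing guarantees the brain's intelligence arises the same way.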
Ultimately, as in many other areas, we are limited by the fact that software development is much slower than hardware development. Software improvements frequently require fundamentally new ideas, not just refinement of previous ones. To simulate biological systems accurately we need to understand how they work, translate this knowledge into mathematical descriptions, and then translate the mathematics into algorithms and code that can be run in a reasonable time. And none of the steps in this process is easy! We are slowly learning more about the nervous system. As a result our mathematical models are getting better. Eventually we may be able to simulate an entire brain. However, I will be very surprised if my children see that day.
References and Notes:
Interesting thoughts about how to build a brain can be found here.
Here is a description with a very nice comment about the latest attempt at a large scale simulation.