Brain computation is a lot more analog than we thought
And that makes the brain a hell of a lot more powerful than we previously thought.
Scientists and science writers (myself included) often compare the brain to computers. Despite surface similarities, the two differ greatly in how they compute. Computers run digitally on 0s and 1s. The brain also has a digital component--a neuron either fires or it doesn't--but that's only a small part of how it actually processes information.
First, a very brief primer on how neurons work. Most neurons have three parts: the input cables, called dendrites; the cell body; and the output cable, the axon. The cell body receives information from the dendrites and integrates it in a region near the axon. If the activity exceeds a certain threshold, the neuron fires: it generates an electrical pulse--an action potential--that travels down the axon.
This is why scientists often think of neurons as digital units: they either generate an action potential, or they don't. There is no half- or quarter-action potential.
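This all-or-none behavior is often captured by the textbook leaky integrate-and-fire model. Here is a minimal sketch of that standard model (the parameter values and input sequence are illustrative, not taken from the study):

```python
# Minimal leaky integrate-and-fire neuron; parameters are illustrative only.
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Integrate input at the cell body; emit an all-or-none spike (1)
    when the accumulated voltage crosses the threshold, then reset."""
    v = 0.0
    spikes = []
    for i in inputs:
        v = leak * v + i          # passive decay plus new input
        if v >= threshold:        # all-or-none: there is no half spike
            spikes.append(1)
            v = 0.0               # reset after the action potential
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.5, 0.5, 0.5, 0.1, 0.9]))  # [0, 0, 1, 0, 0]
```

Note that only the spike train leaves the unit; in this classic picture, everything below threshold is invisible to downstream neurons.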
But as it turns out, that is a grossly simplified view of neuronal information processing.
In a new study, scientists found that the input cables, the dendrites, can also generate their own action potentials independently of the cell body. In other words, it's as if you suddenly discovered that the cables leading into your computer's CPU were processing information themselves.
What's more, while dendrites can work digitally (either fire or not fire an action potential), they also demonstrate massive swings in their electrical properties through a range of voltages independent of their ability to generate spikes. The team says that this is a sign that the dendrites are also processing information in an analog manner, making them digital-analog hybrids.
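One way to picture such a hybrid: a unit whose output carries two channels at once--a graded voltage (analog) and an all-or-none spike flag (digital). This is a hypothetical illustration of the idea, not the authors' model:

```python
# A digital-analog hybrid readout: one input, two output channels.
# Hypothetical illustration, not a model from the study.
def hybrid_readout(summed_input, threshold=1.0):
    voltage = summed_input         # analog: graded, informative at any level
    spike = voltage >= threshold   # digital: all-or-none
    return voltage, spike

print(hybrid_readout(0.4))   # (0.4, False) -- subthreshold, yet still carries information
print(hybrid_readout(1.3))   # (1.3, True)
```

The point of the contrast: in a purely digital unit, the 0.4 case would be indistinguishable from silence; in a hybrid unit, the graded voltage itself is part of the computation.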
The most fascinating part of this (to me, at least) is what the dendrites are computing. As it happens, neuronal cell bodies seem to compute decisions about what's about to happen, while dendrites process information about the here and now. This makes sense: since the cell body generates the spikes that relay information to the next neuron, it needs sufficient information from its input cables (the dendrites) about what's happening at the moment to fine-tune its activity.
For a long time now, neuroscientists have had an inkling that dendrites are little computers of their own rather than simple passive cables. This study is the latest to really dig into what these guys are specifically doing.
Because dendrites seem to process information as semi-independent units, scientists think that maybe they should be considered the basic computational unit of the brain, not the neuron.
Without a doubt, the more we understand about brain computation, the more we'll be surprised by how versatile and complicated (and even chaotic) it is. Researchers like to say that these studies may one day lead to computers that work more like the brain, or to brain-machine interfaces with hardware that's more compatible with our own. While true, it's a bit of a reach at this point. And really - do we have to justify every scientific finding by its potential applications? Let's just say that this finding is seriously cool, and I'm excited to see what more surprises our own brains have in store for us.
Full details are over at Singularity Hub in an easily digestible form, no expertise required.
For readers with some neuroscience background and interest in computational neuroscience, I've included parts of my interview with the lead author below. The content has been slightly edited to make it an easier read.
(As you can see, I generally ask lead authors--the profs--"big picture" questions about their study after reading the paper. Questions about the specifics of a study generally go to the first author--the grad student or post-doc doing the hands-on work.)
Your team mentioned in the press release that this study suggests that neurons are capable of analog computing. I thought that this was already known--that spines can grow or shrink in increments based on the input they receive and their history, thus acting in an analog way. How do your findings on dendritic computing relate to the spine findings, if at all?
Indeed, spine strength is analog, not digital. But that is not the same thing as the activation of neurons or dendrites.
What is known is that, for the soma, the range of subthreshold voltages (about 10 mV) is about ten times smaller than the amplitude of the somatic spike (100 mV). In contrast, we find that the range of dendritic subthreshold voltages is about twice as large as the dendritic spike amplitude. This large range of subthreshold voltages in the dendrites--larger than the dendritic spikes themselves--shows analog computation in the dendrite. This has never been seen before in any neural activity patterns.
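The arithmetic behind that contrast is simple to lay out. The somatic numbers below come from the interview; the dendritic spike amplitude is a placeholder value, since only the ratio (subthreshold range roughly twice the spike amplitude) is given:

```python
# Subthreshold voltage range vs. spike amplitude, per the interview.
soma_subthreshold_mV = 10          # stated in the interview
soma_spike_mV = 100                # stated in the interview
dend_spike_mV = 50                 # placeholder: only the ratio is given
dend_subthreshold_mV = 2 * dend_spike_mV   # "about twice as large as the dendritic spikes"

print(soma_subthreshold_mV / soma_spike_mV)   # 0.1 -> soma looks mostly digital
print(dend_subthreshold_mV / dend_spike_mV)   # 2.0 -> dendrite has a large analog range
```

The twenty-fold difference between those two ratios is the quantitative core of the "hybrid computation" claim.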
Based on your findings, is it true that dendrites, not neurons, should be considered the basic computational unit of the brain? Why?
In a way, the answer is yes. Neurons were considered the basic computational units because the neuronal soma was supposed to sum all synaptic currents and then make a decision about whether to spike or not. That is one reason why the soma was considered the unit of computation.
Instead, we find that the dendrites generate their own spikes, so the input-to-spike transformation is already happening in the dendrite. Further, the dendritic branches have nearly ten times as many spikes as the soma. So each dendritic branch seems to be a computational unit, making its own decision about whether to spike or not, with only a small fraction of the dendritic spikes resulting in somatic spikes.
For these reasons it would seem that the dendritic branch is a more basic computational unit than the neuron.
How do you think these findings can help inform neuromorphic computing or artificial neural networks?
Neuromorphic computing is based on the assumption of binary, or digital (all-or-none), computations performed by the soma. Instead, we show several new features:
a) Dendritic branches generate their own spikes, way more often than the soma.
b) The dendritic voltages show a far greater range of fluctuations than the dendritic spike amplitude, showing both analog and digital--i.e., hybrid--computation.
c) Only a small fraction of this seems to reach the soma.
These fundamental features should be incorporated in the next generation of neuromorphic computers.
Could you please briefly state the main significance of this work and future plans?
This has implications for how neuronal networks learn. For example, learning is thought to require coincidence between synaptic activation and a postsynaptic spike (hence the term spike timing-dependent plasticity, or STDP). Our finding of large dendritic spikes suggests that, for many branches, what matters is the coincidence between synaptic activation and a dendritic spike (instead of a somatic spike). This is a fundamentally different learning rule. I proposed such a dendritically constrained learning rule long ago, so these results provide support for this novel form of learning rule, instead of the classic Hebbian learning rule--at least at the distal synapses.
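The contrast between the two rules can be sketched as a toy coincidence-based weight update, where the "teaching" signal is the dendritic spike rather than the somatic one. This is a hypothetical illustration of the idea, not the rule from the author's earlier work:

```python
# Toy coincidence-based plasticity: strengthen a synapse whose presynaptic
# input coincides with a dendritic spike. Hypothetical sketch, not the
# author's actual dendritically constrained learning rule.
def update_weights(weights, presyn_active, dendritic_spike, lr=0.25):
    """presyn_active: 0/1 flag per synapse for this time step.
    If the branch's dendrite spiked, potentiate each synapse that was active."""
    if not dendritic_spike:
        return weights             # no dendritic spike -> no learning this step
    return [w + lr * a for w, a in zip(weights, presyn_active)]

w = update_weights([0.5, 0.5, 0.5], [1, 0, 1], dendritic_spike=True)
print(w)  # [0.75, 0.5, 0.75]
```

Swapping the `dendritic_spike` flag for a somatic-spike flag would recover the classic STDP-style coincidence; the point of the finding is that, for distal synapses, the local branch's spike may be the relevant signal.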
Finally, since the dendrites have more than ten times as many spikes as the soma, this suggests that when someone measures the total amount of brain activation (say, with fMRI), it may primarily reflect dendritic spikes, not somatic spikes as commonly assumed. This may have implications for what fMRI tells us about neural activity and disease.
Our future line of work: We will continue with our studies on how networks of neurons learn abstract concepts such as space and time. We have been doing this using virtual reality. We will now combine that with dendritic recordings.