#CAN2014 A bird’s eye view of whole-brain activation
(Notes from the 2014 Canadian Association for Neuroscience annual conference in Montreal, Canada. Image: Allen Institute for Brain Science)
When it comes to brain function, what we really want is a detailed picture of the whole brain at once.
Take retrieving a memory: as simple as it sounds, the process relies on the coordinated activity of many neural circuits spread across the brain. Although scientists can visualize “remembering” in real time with techniques like fMRI, these methods can only paint a low-res picture. Here, activation patterns (reconstructed mathematically) look like crude, shapeless blobs that “light up” the brain. Of course this is still useful information, allowing us to infer which geographical brain regions are involved in memory recall and whether their activities are coordinated.
But it is the individual neurons nestled in these activated brain regions that actually drive behavior. Despite what you might’ve seen on CSI, zoom-and-enhance doesn’t work: the resolution of (f)MRI is simply too low to isolate patterns of neuronal activity. Now a team of neuroscientists from the University of Toronto and the University of Tokyo has devised a way to take a snapshot of activity patterns at the cellular level across the entire mouse brain. They reported their findings at the 2014 Canadian Association for Neuroscience annual meeting in Montreal, Canada.
During memory retrieval, neurons activate and drive the expression of a group of genes called “immediate early genes”, including one called Arc. The team used transgenic mice that produce a fluorescent protein (Venus) when Arc is expressed. Thus only recently activated neurons contain Venus and glow under fluorescent microscopes, allowing them to be easily picked out.
To examine whole-brain activation, the brain is taken out and sliced horizontally into extremely thin pieces. Using a microscopy technique called two-photon tomography, each slice can be automatically examined for neural activation and the total number of neurons. Computer algorithms then synthesize the data from each brain slice (across multiple animals) to generate an “average” mouse brain. This synthetic brain is matched up to a reference atlas of the mouse brain from the Allen Institute for Brain Science, which lets computers map out the boundaries of the different brain regions in each slice. Knowing this, we can count the number of activated cells in each segmented brain region involved in memory retrieval.
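The final counting step is conceptually simple once the registration is done: every detected cell centroid sits at a voxel whose atlas label tells you which region it belongs to. Here is a minimal sketch of that idea in Python. The function name, the toy label volume, and the cell coordinates are all made up for illustration; this is not the team's actual pipeline, just the region-wise tallying step it describes.

```python
import numpy as np

def count_active_cells_per_region(labels, cell_coords):
    """Tally detected (e.g. Venus+) cells per atlas-labeled region.

    labels: 3-D integer array; each voxel holds a region ID from a
            reference atlas (0 = background), assumed to be aligned
            with the imaged brain after registration.
    cell_coords: (N, 3) integer array of detected cell centroids,
            given as (z, y, x) indices in the same voxel space.
    Returns a dict mapping region ID -> number of cells found there.
    """
    counts = {}
    for z, y, x in cell_coords:
        region = int(labels[z, y, x])
        if region == 0:
            continue  # ignore cells falling on background voxels
        counts[region] = counts.get(region, 0) + 1
    return counts

# Toy example: a 4x4x4 "brain" split into two labeled regions,
# with three detected cell centroids.
labels = np.zeros((4, 4, 4), dtype=int)
labels[:2] = 1   # pretend region 1 (say, dentate gyrus)
labels[2:] = 2   # pretend region 2 (say, lateral amygdala)
cells = np.array([[0, 1, 1], [1, 2, 2], [3, 0, 0]])
print(count_active_cells_per_region(labels, cells))  # {1: 2, 2: 1}
```

In a real pipeline the hard part is everything upstream of this loop: detecting the glowing cells in the two-photon images and warping the averaged brain onto the atlas so the labels line up.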
The team tested their technique to see if it could pick up whole-brain activation patterns during fear memory recall. They trained the mice to fear both a specific context (such as an orange box) and a tone by pairing them with electrical shocks. Once the mice learned these shock-predicting cues, researchers briefly re-introduced them to either the box or the tone to trigger memory recall. The brains were then removed and processed to look at the brain networks involved in fear memory.
As expected, recalling fear requires the activation of a network of neurons distributed across multiple brain areas, including the hippocampus, amygdala and cortex. Context-triggered recall, which heavily relies on visual-spatial cues, strongly activated neurons in cortical areas with direct connections to the hippocampus. Tone-induced retrieval, on the other hand, relied more on cortical regions that process auditory information and sensations. These data cleanly match up with what we already know about fear information processing, suggesting that the new automated technique is fairly accurate, at least for this data set.
To further demonstrate the power of high-resolution brain activity images, the team next zoomed in on two tiny sub-regions of the hippocampus and amygdala (the granule cell layer of the dentate gyrus and the lateral amygdalar nucleus) and once again automatically quantified the number of activated neurons within – something unimaginable with MRI.
Of course the technique is still limited in many ways: the processing algorithms have to be verified on more data sets generated from other animals and behavioural protocols; the method is limited to transgenic mouse brains and impossible to implement in humans (does anyone have glowing neurons? anyone?); and although the resolution is exquisite, the technique can only capture a single moment in time. Nevertheless, we finally have a bird’s-eye view of whole-brain activity with both detail and scope.
The team is hoping their approach can help other neuroscientists understand how neural networks are disrupted in animal models of schizophrenia or autism. As with any other nascent technique (including mine!), there's more room for it to grow and more work to be done.
Poster 2-G-187: Whole-brain mapping of neural activation in mice. Dulcie Vousden, Jonathan Epp, Hiroyuki Okuno, Brian Nieman, Matthijs van Eede, Jun Dazai, Tim Ragan, Haruhiko Bito, Paul Frankland, Jason Lerch, Mark Henkelman. University of Toronto, The Hospital for Sick Children and University of Tokyo.