Archive for June, 2010


When an image appears to the visual system, rapid feedforward processing (within about 120 ms) leads to activity patterns called base-groupings that are distributed across many cortical areas. Base-groupings are coded by single neurons tuned to multiple features. The question arises as to how more complex structures are bound together from a combination of base-groupings from many cortical areas. For example, a line contour may require the base-groupings coding for the many smaller line segments that make up the contour to be bound together as a whole. The combined pattern may not be catered for by an explicitly wired base-grouping. Opinions are polarized as to what methods the brain uses to bind disparate activations together. I will here outline two contenders.

Some suggest that groups of neurons across the cortex synchronise their firing patterns, binding disparate parts of brain activation together (e.g. a population encoding for red and a population encoding for circle synchronize to encode for a red circle). This is said to explain rhythmic oscillations in the brain, particularly in the gamma band (30-40 hertz). Others believe this to be a mere epiphenomenon. In the 2006 Annual Review of Neuroscience, Roelfsema points out that some recent studies on monkeys observed no direct relationship between synchrony and perceptual grouping, and that in some instances grouping is even associated with a reduction in synchrony.

Incremental grouping, proposed by Roelfsema et al., is said to make use of horizontal and feedback connections to enhance the responses of neurons coding features that are bound in perception. ‘By the time that the base representation has been computed, neurons that respond to features of the same object are linked by the interaction skeleton. This also holds for neurons that respond to widely separated image elements, although these are only indirectly connected through a chain of cells responsive to interspersed image elements. A rate enhancement has to spread through the interaction skeleton in order to make these additional groupings explicit.’

However, this does not blow synchronization theory out of the water. The firing-rate labelling proposed by Roelfsema begs for an explanation of how the rate encodes the label, because a simply higher rate of firing does not seem to say enough unless the rate itself shares a code between populations, or unless we are to believe in some kind of threshold beyond which binding occurs. In addition, Pascal Fries explains how oscillatory rhythmic excitability fluctuations produce temporal windows for communication, due to the relaxation time needed between firing and the moment a neuron is ready to receive signals again. ‘Only coherently oscillating neuronal groups can interact effectively, because their communication windows for input and for output are open at the same times’. This detail adds an extra level of depth to the synchronization camp's case.

I mentioned in a recent post Murray Shanahan’s model of global broadcast using spiking neurons. In brief, Murray's model contains distinct groups of neurons (which may, for example, represent different sensory modalities), each of which may encode for many different responses. These groups of neurons are all connected together via a communications infrastructure which he calls the global workspace. Different cell assemblies in different groups fight for control of the global workspace, and once one particular set gains control its influence is broadcast through the workspace to the entire system.
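
To make the competition-and-broadcast idea a little more concrete, here is a deliberately toy sketch in Python. This is not Shanahan's spiking-neuron model; it only abstracts the two steps of competition for the workspace and broadcast of the winner's signal, and the group names and activation values are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three "specialist" groups (e.g. different modalities), each with an
# activation level for its currently dominant cell assembly.
group_activations = {"visual": 0.8, "auditory": 0.3, "motor": 0.5}

def broadcast_step(activations, noise=0.05):
    """One round of competition for the workspace followed by broadcast.

    The group with the strongest (noisy) activation wins access, and its
    signal is sent to every other group over the generic workspace
    connections; the losers receive it as top-down input.
    """
    noisy = {name: level + rng.normal(0, noise) for name, level in activations.items()}
    winner = max(noisy, key=noisy.get)
    broadcast_signal = activations[winner]
    # Every non-winning group receives the same broadcast signal.
    received = {name: broadcast_signal for name in activations if name != winner}
    return winner, received

winner, received = broadcast_step(group_activations)
print(f"workspace won by: {winner}; broadcast received by: {list(received)}")
```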

What is most notable about this model is that the communications infrastructure is generic and does not contain any specifically meaningful connections between particular nodes or cell assemblies in the different groups that one may wish to communicate with each other or behave in some kind of complementary fashion. The question therefore arises as to how the different cell assemblies in separated groups know what they are responding to and how to respond in the appropriate way. I recently put this question to Murray. In his response he was quick to point out that he does not wish to propose a model in which the signals that pass through the global workspace contain information in some kind of language of the brain. However, without meaningful connectivity it is necessary that the activation patterns being passed along this generic communications infrastructure do contain information that allows the receiving cell assemblies to respond appropriately. Murray suggests that the receiving cell assemblies adapt to respond to particular activation patterns in particular ways that are behaviourally beneficial. In addition, the cell assemblies which send the signals will adapt in order to take advantage of responses from receiving cell assemblies that are also behaviourally beneficial.

This sounds pretty plausible to me, but it is a pretty complicated adaptation task, as adaptation will have to work from bottom-up sensory inputs as well as top-down global workspace inputs. Nevertheless, working out how such sub-systems can adapt to communicate with each other in this way sounds like a really cool and interesting area of research.

Groups of neurons firing in unison across the cortex synchronise at many different time scales. The most notable are the gamma band (30-40 hertz) and the beta band (15-25 hertz). There is much debate in the community as to the role of phase synchronisation. Many think it is a mere epiphenomenon, whilst others believe it plays a vital role, such as binding disparate parts of brain activation together (e.g. a population encoding for red and a population encoding for circle synchronize to encode for a red circle). Opinions are very polarized and I will not enter that debate right now.

What I would like to mention is Pascal Fries' talk at the Brain Connectivity Workshop today. In his work studying phase synchrony in monkeys' brains, he finds that topologically higher areas in the visual hierarchy exert attentive influence on lower areas, and in doing so they also manipulate synchrony. In his recent research he uses Granger causality, an analysis technique that can give you a metric for how much one part of a system at a particular time affects another at a later time. His results show that top-down processes in the visual cortex have a causal synchronizing effect on lower areas, but not the other way round. The implication is that higher-level areas may facilitate binding with or between lower areas through attentive modulation. For more details read here.

Granger causality analysis has become very popular in neuroscience in recent years. I will briefly describe here what it is and how it works:

Granger causality is a metric that can assess the amount of causal influence one thing has upon another. For example, the quality of the banana harvest will have a causal effect upon the price of bananas. There may be several causal variables. For example, the quality of the banana harvest in India, the exchange rate between India and England, and the price of fuel for transport ships will all affect the price of a banana in England. The causal variables are called independent variables and the effect variable (the price of a banana in England) is called the dependent variable. Granger causality analysis allows one to assess how much causal influence a particular causal variable has on the effect variable.

In order to perform Granger causality analysis one must first perform two regression analyses. Regression is a statistical method that allows one to model a system by assessing the relationship between independent variables and the dependent variable. Basically, one has a set of sample data output from the system (e.g. values for harvest quality, fuel price, the banana price in England, etc.) and an equation one wishes to use to model the system. In that equation are several variable parameters that describe a particular relationship between the independent variables, as well as between the independent variables and the dependent variable. Computational techniques are used to find the best values for these parameters so that the model fits the sample data. In the graph below the scattered dots are different samples for which the independent variable x and dependent variable y are plotted. The line running through them is the plot of a model equation for which a parameter has been fitted so that the line runs through the samples in a way that models the system well.

Simple linear regression model fitted to sample data

Once we have a model, the independent variables can be varied, allowing one to see and predict the effect of changes on the system.
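
For illustration, a minimal least-squares fit of the kind shown in the graph above might look like this in Python with numpy (the data here is synthetic rather than real banana prices):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic sample data: independent variable x, dependent variable y
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.5, size=x.size)  # true slope 2, intercept 1, plus noise

# Fit the model y = a*x + b by least squares; a and b are the fitted parameters
a, b = np.polyfit(x, y, deg=1)

# With the fitted model we can now vary x and predict its effect on y
predicted_y_at_x7 = a * 7.0 + b
print(f"fitted slope: {a:.2f}, fitted intercept: {b:.2f}")
print(f"predicted y when x = 7: {predicted_y_at_x7:.2f}")
```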

In order to perform Granger causality analysis one first builds a regression model which makes use of all the independent variables, and then assesses how good that model is at making predictions. Next one builds another regression model, the same as the first but with one of the independent causal variables removed. The latter model is then tested for how good it is at making predictions. Now that one has a value for the prediction quality of each of the two models, one can use the discrepancy between them as a metric for how much causal influence the removed causal variable has.
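
Below is a minimal numpy sketch of this two-model comparison. It uses the log ratio of the residual variances of the restricted and full models as the causality metric and toy autoregressive data; real analyses add significance testing and more careful model selection, so treat this purely as an illustration of the procedure just described.

```python
import numpy as np

def granger_causality(x, y, lag=2):
    """Assess how much the past of x improves prediction of y.

    Fits two least-squares models of y[t]: one from y's own past only
    (the restricted model) and one from the past of both y and x (the
    full model), then compares their residual variances. A larger value
    means removing x costs more prediction quality, i.e. more causal
    influence of x on y in Granger's sense.
    """
    n = len(y)
    Y = y[lag:]
    # Lagged design matrices: columns are y[t-1]..y[t-lag] and x[t-1]..x[t-lag]
    own_past = np.column_stack([y[lag - k - 1 : n - k - 1] for k in range(lag)])
    x_past = np.column_stack([x[lag - k - 1 : n - k - 1] for k in range(lag)])

    restricted = np.column_stack([np.ones(len(Y)), own_past])
    full = np.column_stack([np.ones(len(Y)), own_past, x_past])

    def residual_var(design):
        coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
        residuals = Y - design @ coef
        return residuals.var()

    return np.log(residual_var(restricted) / residual_var(full))

# Toy data in which x drives y with a one-step delay
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.8 * np.roll(x, 1) + rng.normal(scale=0.5, size=500)

print(f"GC x -> y: {granger_causality(x, y):.3f}")  # clearly positive
print(f"GC y -> x: {granger_causality(y, x):.3f}")  # near zero
```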

Anil Seth from Sussex University has developed a MATLAB toolbox for Granger causality, which is reviewed here, and available here.

The Brain Connectivity Workshop is my first neuroscience excursion, so I have been totally nerding out for the last two days. I am pleased to find such friendly and interesting people. My main interest here is network topology and dynamics. There has been a lot of talk about networks that display small-world topological properties (i.e. a small characteristic path length between any two nodes and a high cluster index across the network as a whole), as well as talk about the brain's fractal and modular formation. As I am sure you are all aware, these three properties are mutually compatible.
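
For anyone who has not come across those two measures, the following networkx sketch computes them for a standard Watts–Strogatz graph; the graph and its parameters are arbitrary and chosen purely to illustrate the metrics, not to model any real brain network.

```python
import networkx as nx

# A Watts-Strogatz graph: a ring lattice with a few long-range rewirings,
# the classic construction that produces small-world properties.
G = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=1)

# Short characteristic path length between any two nodes...
path_length = nx.average_shortest_path_length(G)
# ...combined with a high clustering (cluster index) across the network.
clustering = nx.average_clustering(G)

print(f"characteristic path length: {path_length:.2f}")
print(f"average clustering coefficient: {clustering:.2f}")
```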

There was a very interesting talk today by Danielle Bassett from Cambridge University, who along with her colleagues has been studying network organisation properties in both artificial VLSI integrated computer circuits and animal brains. Her proposal is that all physical information processing systems share modular organisational properties. The brain was shown to display hierarchical formations which at each level can be modularised. Bassett also showed that the cost entailed in physical wiring is not strictly minimized, but that there is a trade-off between this cost and topological complexity, which gives rise to fractal and modular designs. In addition, she illustrated how the volume ratio between gray matter (which contains neural cell bodies) and white matter (which contains axonal connections) remains similar across a wide range of mammals. Interestingly, VLSI circuits display an isometric scaling relationship between the number of connections and the number of processing elements. For more details read here.