
woo hoo my spiking network simulations are working 🙂

A localised group of neurons firing synchronously at 30–100 Hz is referred to as a local field potential gamma oscillation. These oscillations are important for spike-timing-dependent plasticity (STDP) to occur. Synchronized activity of 10–30 ms in the gamma frequency range creates a narrow time window for the coincident activation of the pre-synaptic and post-synaptic cells required for STDP (for more details read here). Slower oscillations do not provide a narrow enough window, and faster oscillations, having more than one cycle within the STDP window, cause the post-synaptic cell to receive inputs both before and after it has generated a spike.
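For the curious, here is a rough sketch of the standard pair-based STDP window the above is talking about. The exponential form is the textbook one; the actual parameter values (amplitudes and the ~20 ms time constant) are my own illustrative choices, not taken from any particular paper:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change.
    dt = t_post - t_pre in ms. Positive dt (pre fires before post)
    potentiates; negative dt depresses. Parameters are illustrative.
    """
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

# A ~25 ms gamma cycle keeps |dt| well inside the ~20 ms window,
# whereas lags typical of slower oscillations barely register:
print(stdp_dw(5.0))   # strong potentiation
print(stdp_dw(-5.0))  # strong depression
print(stdp_dw(60.0))  # negligible change at slow-oscillation lags
```

Note how spike pairs that fit inside one gamma cycle produce large weight changes, which is exactly why gamma provides a useful coincidence window for STDP.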

However, STDP occurs whenever pre-synaptic and post-synaptic action potentials are correlated. Notably, this happens even when two cells with equally weak inputs correlate, which is not a useful result for learning, since we wish to learn strong coincidences. Furthermore, gamma synchronization is not necessarily time-locked to a stimulus. For these two reasons, long-term potentiation (strengthening) of synapses induced by synchronized gamma activity alone does not attain the specificity required for memory encoding; an additional mechanism is needed.

The hippocampus is considered to play a major role in memory. Learning-dependent synchronization of hippocampal theta activity is associated with large event-related potentials in the theta (4–8 Hz) and delta (0–4 Hz) range that appear to result from a phase reset of theta activity occurring at a fixed interval after presentation of a stimulus. Theta reset determines the theta phase at which a given stimulus affects a cell. Theta-band learning is non-Hebbian, involving only pre-synaptic and not post-synaptic spikes: if stimuli arrive during the peak of the theta oscillation, long-term potentiation (strengthening of synapses) occurs, whereas inputs arriving at a trough of the theta cycle induce long-term depression (weakening of synapses). Axmacher et al note that a combination of theta and gamma learning dynamics may provide the required specificity for memory learning:
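The peak/trough rule above is simple enough to sketch in a few lines. This is my own toy formulation (cosine phase dependence and the learning rate are assumptions, not from Axmacher et al), just to make the non-Hebbian character explicit, as the weight change depends only on the theta phase at which the pre-synaptic spike arrives:

```python
import math

def theta_plasticity(phase, rate=0.05):
    """Weight change for a pre-synaptic spike arriving at a given theta phase.
    phase is in radians: 0 is the oscillation peak, pi is the trough.
    cos(phase) > 0 near the peak -> LTP; cos(phase) < 0 near the trough -> LTD.
    No post-synaptic spike enters the rule at all. Rate is illustrative.
    """
    return rate * math.cos(phase)

print(theta_plasticity(0.0))      # spike at the peak: potentiation
print(theta_plasticity(math.pi))  # spike at the trough: depression
```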

‘Whereas gamma-dependent plasticity alone may not distinguish between correlated weak and strong inputs and occurs not necessarily time-locked to a given stimulus, plasticity during theta reset has these features. Theta-dependent plasticity alone, on the other hand, is too coarse to encode stimulus features with a high temporal resolution: at least Hebbian LTP requires precise spike timing. Moreover, sequence encoding (sequences of items as well as spatial paths) has been suggested to depend on action potentials during subsequent theta phases, with gamma periods binding each item.’

I'm afraid there is more techie maths in this post :(. I recently posted here about spike-timing-dependent plasticity and how it can create unimodal or bimodal synaptic weight distributions, depending on whether a term is added to the weight change function to make it depend on the existing value of the synaptic weight.

Babadi et al show that by adding a delay (d) to the exponential term in the additive spike-timing-dependent plasticity weight change equation (i.e. -(|Δt|-d)/τ) one can stabilize the distribution of synaptic weights to be unimodal instead of bimodal, even when no limits are imposed. This works because the relative strength of a synapse induces a causal bump near Δt=0, producing increases that grow stronger as the weight does. This makes sense, as a stronger synaptic weight will cause the post-synaptic neuron to fire more quickly and more often. The bump can be seen in the image below, where to the left of the y axis is the depressive exponential term (though not shown with negative weighting) and to the right is the potentiating exponential term with the bump:

The delay term makes the causal bump fall into the region where depression occurs, as in the image below (this time the depressive exponential part is shown with its negative weighting):


As the synapse gets stronger, a larger portion of the bump falls into the depression area, both because the causal bump gets bigger and because it moves closer to Δt=0. This prevents further growth of the synaptic strength and therefore stops saturation at the bimodal MIN and MAX possible values for synaptic weights.
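Here is one way to read the shifted rule in code. I am interpreting the -(|Δt|-d)/τ form as moving the LTP/LTD boundary from Δt=0 to Δt=d, which is what makes the causal bump near Δt=0 land on the depression side; parameter values are illustrative, not Babadi et al's:

```python
import math

def delayed_stdp_dw(dt, d=5.0, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Additive STDP with the exponent shifted by a delay d.
    dt = t_post - t_pre in ms. With d > 0 the boundary between
    potentiation and depression sits at dt = d, so near-coincident
    causal pairs (0 < dt < d) now fall into the depression region.
    Parameters are my own illustrative choices.
    """
    if dt >= d:
        return a_plus * math.exp(-(dt - d) / tau)
    return -a_minus * math.exp((dt - d) / tau)

print(delayed_stdp_dw(2.0))   # inside the causal bump: now depressed
print(delayed_stdp_dw(10.0))  # beyond the delay: still potentiated
```

So as a synapse strengthens and its causal bump crowds in towards Δt=0, more of that bump is met with depression, which is the stabilizing negative feedback described above.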

Different studies of synchrony and learning (for a description of synchrony see this post) have shown that it is irrelevant whether excitatory post-synaptic potentials arrive shortly before or after the post-synaptic spike; however, long-term depression of synapses occurred when the same synchronous group input was shifted 180 degrees out of phase, so that the excitatory post-synaptic potentials arrived during the troughs of the oscillation. This suggests a certain robustness in the influence of synchronised activity on STDP (read this pdf for more info).

This finding raises interesting questions as to a possible interplay with the delay term introduced by Babadi et al.

I recently reported here on feed forward models of spiking neurons. Here is a follow up about an interesting recurrent system.

The computational power of a reciprocally connected group is likely to entail population codes rather than single neurons encoding stimuli. As spiking neurons are either firing or not, they are not as easy to decode at a specific moment in time as a rate-based model, which contains an average of time-spread information at any one moment. Hosaka et al demonstrate a recurrent network that organizes itself to generate synchronous firing according to the cycle of repeated external inputs. The timing of the synchrony depends on the input spatio-temporal pattern and the neural network structure. They conclude that the network self-organizes its transformation function from spatio-temporal to temporal information. Spike-timing-dependent plasticity makes the recurrent neural network behave as a filter, with only one learned spatio-temporal pattern able to pass through the filtering network in synchronous form (for more information on synchrony read here). Although their work includes a Monte-Carlo significance test for the synchrony, the synchrony is based on a global metric. Clearly, distributed synchrony, in which different cell assemblies in the network synchronise at different times under the influence of stimuli, would have to be considered if the network is to respond to multiple stimuli.

Synaptic inputs that are often active together are strengthened during learning, so that statistical regularities in synaptic input lead to the post-synaptic neuron being selective for particular stimuli whilst also rendering it invariant to accidental features. Neural hierarchies allow neurons at higher levels to capture information gained by many neurons at lower levels. A stimulus that drives a neuron at a high level in a network hierarchy will almost always be part of a (visual or other) scene together with other stimuli. So although invariance aids object recognition when there are changes to irrelevant aspects of the stimulus, it also causes a problem, because a given stimulus will never cover the complete receptive field of a high-level neuron but will leave room for competing stimuli. The selective efficacy of subsets of a neuron's input may be aided if converging neuronal inputs to higher-level neurons are functionally segmented and if only a relevant segment is selected at a time. Pascal Fries believes that whilst connectivity provides selectivity and invariance, synchronisation provides the required segmentation and selection of a segment.

Gamma-band synchronization (a group or groups of neurons firing pulses together at 40–80 Hz) can emerge in a network of excitatory and inhibitory neurons. Inhibitory neurons provide shunting inhibition that stops other neurons from firing; this creates windows for synchrony at the moment the inhibition wears off, which excitatory signals can then exploit. Gamma-band oscillations are sufficiently regular to allow prediction of the next excitability peak. As long as the travelling time from the sending to the receiving group is also reliable, their communication windows for input and output are open at the same times. Conduction delays between neurons are typically an order of magnitude shorter than the cycle length of the oscillation, allowing sending and receiving to occur within one excitability peak. Packages of spikes can therefore arrive at other neuronal groups in precise synchronization and enhance their impact. Rhythmic inhibition thus provides rhythmic modulation of excitatory input gain. Fries considers the mechanistic consequences of neuronal oscillations and calls this hypothesis 'communication through coherence'.
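A toy sketch of the rhythmic gain modulation idea: assume (my assumption, not Fries's formulation) that a receiving group's input gain rises and falls as a raised cosine over its inhibition cycle, so spikes timed to the excitability peak get through while out-of-phase input is shunted:

```python
import math

def input_gain(arrival_ms, cycle_ms=25.0, peak_ms=0.0):
    """Input gain of a receiving group oscillating at gamma (~40 Hz here).
    Gain is a raised cosine over the inhibition cycle: 1.0 at the
    excitability peak, 0.0 when inhibition is maximal. A toy model.
    """
    phase = 2 * math.pi * ((arrival_ms - peak_ms) % cycle_ms) / cycle_ms
    return 0.5 * (1 + math.cos(phase))

print(input_gain(0.0))   # spikes arriving at the excitability peak: full gain
print(input_gain(12.5))  # spikes arriving half a cycle later: shunted
```

Two groups whose oscillations are coherent keep hitting each other's high-gain windows; an incoherent group's spikes land at arbitrary phases and have much less impact, which is the gist of communication through coherence.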

Coincidence detection and rhythmic gain modulation create an exclusive communication link between a target group and a strongly synchronised source group. If there is synchronization among the neurons in group A and among the neurons in group B, but not between A and B, then a downstream group C will synchronise to either A or B but not both at the same time. Strong and precise gamma-band synchronisation within group A will trigger many spikes in C and entrain C to the rhythm of A. Once C is entrained, a winner-takes-all effect occurs as the result of input gain, giving a competitive advantage to one group of neurons (read here for more details).

Uhlhaas et al note that the fast switching between synchronized and de-synchronized states observed in the data seems at odds with: the coupling strength that can be achieved through synaptic plasticity, the speed of changes in the functional topology, and mechanisms that could cause changes in transmission delays. Hence they conclude that the most likely option for the modulation of synchrony is to change the dynamical states of the coupled neuronal populations, such as the balance between excitation and inhibition. So in addition to the firing rates, precise timing of individual discharges may be used to gate transmission and synaptic plasticity, to selectively route activity across the cortical network and to define particular relations in distributed activity patterns.

Singer notes that such synchronous events are statistically improbable, so their information content is high. They are therefore likely to be very effective in eliciting responses in target populations. For the system to work, individual cells need to be able to rapidly change their synchronisation partners if new associations are required due to a change in the Gestalt properties of the scene, and if more than one object is present in a scene, several distinct assemblies should form.

 

Masquelier et al found evidence in support of the belief that spike-timing-dependent plasticity (STDP) makes the post-synaptic neuron respond more quickly. In their model, multiple afferents converge upon a single post-synaptic neuron. Interestingly, their work does not demand that the pattern to be learnt be present in all spike volleys: distractor spike volleys are present in between presentations of the learned pattern, and in addition a constant population firing rate is maintained throughout all the stimuli, to ensure that what the network learns is not a side effect of conditions other than the repeated coincidence of the pattern to be learned. Confirming earlier conclusions, STDP first of all leads to an overall weakening of synapses, but by reinforcing the synaptic connections with the afferents that took part in firing the neuron when the pattern to be learned was present, it then increases the probability that the neuron fires again the next time the pattern is presented. After only 70 pattern presentations the neuron stops discharging outside of the pattern presentation. Though at first chance determines which part of the pattern the neuron becomes selective to, by reinforcing the connections to pre-synaptic neurons that fired slightly before the post-synaptic neuron, the post-synaptic neuron learns to discharge earlier on presentation of the desired stimulus.

Masquelier et al have extended their model to make it respond to multiple patterns by using multiple post-synaptic neurons with inhibitory connections between them. In this case, the first neuron to fire inhibits the others, so that only one of the post-synaptic neurons responds to each stimulus. However, because of the simplicity of this feed-forward model, and because additive STDP creates a bimodal weight distribution (see this post) concentrated around 0 and MAX (MAX here being equal to 1), afferents are effectively turned on or off. One can only conclude that STDP is simply becoming selective to particular inputs that happen to correspond to parts of the stimulus to be learned that are good at distinguishing the desired stimulus from the distractor. It is the network structure alone that provides the computational power here. Further interesting work would study more complex structures than simple feed-forward mechanisms by introducing reciprocal connections. I shall report on these later. For now, off to the pub 🙂
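Before I go, a minimal sketch of the first-to-fire inhibition described above. The function name and the winner-selection shortcut are my own (not Masquelier et al's implementation); the point is just that lateral inhibition leaves exactly one responder per stimulus:

```python
def first_spike_wta(potentials, threshold=1.0):
    """Winner-take-all among post-synaptic neurons.
    The neuron that crosses threshold first fires and inhibits the
    rest, so at most one neuron responds per stimulus presentation.
    Here the highest supra-threshold potential stands in for the
    earliest threshold crossing. Returns the winner's index, or
    None if no neuron crossed threshold. A toy sketch.
    """
    above = [i for i, v in enumerate(potentials) if v >= threshold]
    if not above:
        return None
    return max(above, key=lambda i: potentials[i])

print(first_spike_wta([0.4, 1.3, 1.1]))  # neuron 1 wins, inhibits the rest
print(first_spike_wta([0.2, 0.5, 0.3]))  # nobody reaches threshold
```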

My PhD has finally got under way. Imperial College is not intimidating at all, which is a relief. I have been given a desk in a room with other PhD students and have a shiny new and fast PC. The nice man from the computer support group has set me up with Linux and Windows. I think I shall be using Linux as a preference. I have just spent 11 years as a Windows programmer and am glad to no longer be forced to use one particular OS, and that particular OS at that.

I have signed up for a few personal development courses. The first one is on giving presentations, which is something that scares me a lot. Not the course but presenting, doh! I am prone to collapse into a quivering lump of jelly when put in front of an audience. I have to take seminar groups as part of my PhD, so it looks like I'll be forced to get over my hang-ups. I also found it amusing that there is a networking course, not as in computer networks silly! but networking as in making contacts with people. I am quite socially awkward so this came as a nice surprise and I signed up for it too. All they really need to complete me as a fully rounded person is a course in how to talk to women 🙂

Primary consciousness is the holy grail of neuroscience. Unlike higher-level consciousness, which concerns aspects such as the notion of self and consciousness of consciousness, primary consciousness is concerned with phenomenal qualia. In brief: why do we have a subjective phenomenal experience of something, such as the redness of something red? This refers to 'what it is like' to experience something. The philosophical problem of zombies is often used to illustrate some of the issues involved. Is it possible to have a zombie-like creature that can respond and behave in exactly the same way as we do but that does not have subjective experience? If so, then why do we have one? Further, why should a mechanistic device such as our brain produce experience while the zombie, or even a thermostat, does not? How can something material produce something phenomenal?

The problem is a big one: we do not really understand, at any level, how phenomenal experience can arise. We can, however, deduce some properties that a system must satisfy to enable it, and also identify neural correlates of consciousness. Both of these still leave an explanatory gap of, say, 'why is the activation of this group of neurons accompanied by an experience of red?'.

William James noted that consciousness is a process, and although this undeniably forms an aspect of what primary consciousness is, I have trouble with people who use such a claim as an answer to the problem. A process is an abstract concept that only has meaning for an intelligent observer of a situation; as such, its ontological status is vaguer than that of something like a chair. Although hard to define, I don't feel that primary consciousness suffers from this ontological ambiguity. This is probably because consciousness is the closest thing to us, it is us, and as a result, although difficult to tie down conceptually due to its subjective nature, its being is direct, immediate and definitely not ambiguous, or even a matter of interpretation. This may highlight a difference in ontological category between process and consciousness that needs to be clarified.

Pursuing the process aspect of consciousness Gerald Edelman and Giulio Tononi present the dynamical core hypothesis as an explanation.

‘First, consciousness experience appears to be associated with neural activity that is distributed simultaneously across neuronal groups in many different regions of the brain. Consciousness is therefore not the prerogative of any one brain area; instead, its neural substrates are widely dispersed throughout the so-called thalamocortical system, and associated regions. Secondly, to support conscious experience, a large number of groups of neurons must interact rapidly and reciprocally through the process called reentry’.

The dynamic core relies upon the notion of complexity in a neural system. A neural system is highly integrated if its constituent clusters are well connected, so that functionally their behaviour can synchronize. A highly integrated system, although able to bind information in different parts, cannot contain much information, as everything ends up doing the same thing and so the number of possible states is limited. A differentiated system is the opposite: there is little communication to bind parts, but the number of possible states is large. Complexity is defined as a balance between integration and differentiation, in which many states are possible and disparate parts can communicate and bind. Given this, consciousness through the dynamic core is defined as follows:

1. A group of neurons can contribute directly to the conscious experience only if it is part of a distributed functional cluster that, through reentrant interactions in the thalamocortical system, achieves high integration in hundreds of milliseconds.

2. To sustain conscious experience, it is essential that this functional cluster be highly differentiated, as indicated by high levels of complexity.

A curious question that this model raises relates to the fact that different neuronal groups can be members of the dynamic core at different times allowing for the possibility that at two different moments in time the dynamic core may be constituted from totally different members. If this is the case what binds the continuity of consciousness? Is it just the process and if so how does this evade the problem of ontological status mentioned above?

For more on the dynamical core hypothesis read ‘A Universe Of Consciousness’ by Gerald Edelman and Giulio Tononi.

Sorry but the site has been down for a while due to problems with our hosting service. I hope no one was put out too much. What am I talking about?…..no one reads this 🙁

When an image appears to the visual system, rapid feedforward processing (within about 120 ms) leads to activity patterns called base-groupings that are distributed across many cortical areas. Base-groupings are coded by single neurons tuned to multiple features. The question arises as to how more complex structures are bound together from a combination of base-groupings across many cortical areas. For example, a line contour may require the base-groupings that code for the many smaller line segments making up part of the contour to be bound together as a whole. The combined pattern may not be catered for by an explicitly wired base-grouping. Opinions are polarized as to what methods the brain uses to bind disparate activations together. I will outline two contenders here.

Some suggest that groups of neurons across the cortex synchronise their firing patterns, binding disparate parts of brain activation together (e.g. a population encoding red and a population encoding circle synchronize to encode a red circle). This is said to explain rhythmic oscillations in the brain, particularly in the gamma band (30–40 Hz). Others believe this to be a mere epiphenomenon. In the 2006 Annual Review of Neuroscience, Roelfsema points out that some recent studies on monkeys observed no direct relationship between synchrony and perceptual grouping, and in some instances grouping was even associated with a reduction in synchrony.

Incremental grouping, proposed by Roelfsema et al, is said to make use of horizontal and feedback connections to enhance the responses of neurons coding features that are bound in perception. 'By the time that the base representation has been computed, neurons that respond to features of the same object are linked by the interaction skeleton. This also holds for neurons that respond to widely separated image elements, although these are only indirectly connected through a chain of cells responsive to interspersed image elements. A rate enhancement has to spread through the interaction skeleton in order to make these additional groupings explicit.'

However, this does not blow synchronization theory out of the water. The firing-rate labelling proposed by Roelfsema begs for an explanation of how the rate encodes the label, because simply a higher rate of firing does not seem to say enough, unless the rate itself shares a code between populations or unless we are to believe in some kind of threshold beyond which binding occurs. In addition, Pascal Fries explains how oscillatory rhythmic excitability fluctuations produce temporal windows for communication, due to the relaxation time needed between firing and when a neuron is ready to receive signals again: 'Only coherently oscillating neuronal groups can interact effectively, because their communication windows for input and for output are open at the same times'. This detail adds an extra level of depth for the synchronization camp.