Archive for December, 2010


I'm afraid there is more techie maths in this post :(. I recently posted here about spike timing dependent plasticity (STDP) and how it can create unimodal or bimodal synaptic weight distributions, depending on whether a term is added to the weight change function to make it depend on the existing value of the synaptic weight.

Babadi et al. show that by adding a delay (d) to the exponential term in the additive spike timing dependent plasticity weight change equation (i.e. exp(-(|Δt|-d) / τ)) one can stabilize the distribution of synaptic weights to be unimodal instead of bimodal, even when no limits are imposed. This works because the relative strength of a synapse induces a causal bump near Δt=0, producing stronger increases the stronger the weight. This makes sense, as a stronger synapse will cause the post-synaptic neuron to fire sooner and more often. This bump can be seen in the image below, where on the left side of the y axis is the depressive exponential term (though not shown with negative weighting) and on the other side is the potentiating exponential term with the bump:

The delay term makes the causal bump fall into the region where depression occurs, as in the image below (this time the depressive exponential part is shown with its negative weighting):


As the synapse gets stronger, a larger portion of the bump falls into the depression area, both because the causal bump gets bigger and because it moves closer to Δt=0. This prevents further growth of the synaptic strength, and therefore stops weights from saturating at the MIN and MAX possible values that produce the bimodal distribution.
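A minimal numeric sketch of this stabilizing mechanism (my own toy construction: the exact window shape, the Gaussian model of the causal bump, and all parameter values are assumptions, not Babadi et al.'s actual model). The shifted window depresses causal pairs with 0 < Δt < d, and the bump is modeled as growing taller and peaking closer to Δt=0 as the weight w increases:

```python
import math

def stdp_dw(dt, a_plus=0.005, a_minus=0.00525, tau=20.0, d=5.0):
    # dt = t_post - t_pre (ms). The delay d shifts the window so that
    # pairs with dt < d -- including small causal delays 0 < dt < d --
    # are depressed; only pairs with dt > d are potentiated.
    if dt > d:
        return a_plus * math.exp(-(dt - d) / tau)   # potentiation
    return -a_minus * math.exp((dt - d) / tau)      # depression

def causal_bump(dt, w):
    # Hypothetical Gaussian causal bump in the pre/post cross-correlation:
    # taller and peaking closer to dt = 0 as the weight w grows.
    peak = 10.0 - 8.0 * w      # peak latency shrinks with w (assumed)
    height = 1.0 + 2.0 * w     # bump grows with w (assumed)
    return height * math.exp(-((dt - peak) ** 2) / (2.0 * 2.0 ** 2))

def net_drift(w, lo=-50.0, hi=50.0, step=0.1):
    # Expected weight change: the STDP window integrated against a flat
    # background correlation plus the weight-dependent causal bump.
    n = int((hi - lo) / step)
    return sum(stdp_dw(lo + i * step) * (1.0 + causal_bump(lo + i * step, w))
               for i in range(n)) * step
```

Under these assumptions the net drift comes out positive for a weak synapse (w near 0, bump mostly beyond d) and negative for a strong one (w near 1, bump inside the depression region), so the weight is pushed toward an intermediate stable value rather than saturating at MIN or MAX.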

Different studies of synchrony (for a description of synchrony see this post) and learning have shown that it is irrelevant whether excitatory post-synaptic potentials arrive shortly before or after the post-synaptic spike; however, long term depression of synapses occurred when the same synchronous group input was oscillated 180 degrees out of phase, so that the excitatory post-synaptic potentials arrived during the troughs of the oscillation. This suggests a certain robustness of STDP to the influence of synchronised activity (read this pdf for more info).

This finding raises interesting questions about a possible interplay with the delay term introduced by Babadi et al.

I recently reported here on feed-forward models of spiking neurons. Here is a follow-up about an interesting recurrent system.

The computational power of a reciprocally connected group is likely to entail population codes rather than single neurons encoding for stimuli. As spiking neurons are either firing or not at any instant, they are not as easy to decode at a specific moment in time as a rate-based model, which contains an average of time-spread information at one moment. Hosaka et al. demonstrate a recurrent network that organizes itself to generate synchronous firing in time with the cycle of repeated external inputs. The timing of the synchrony depends on the input spatio-temporal pattern and the neural network structure. They conclude that the network self-organizes its transformation function from spatio-temporal to temporal information: spike timing dependent plasticity makes the recurrent neural network behave as a filter, with only one learned spatio-temporal pattern able to pass through the filtering network in synchronous form (for more information on synchrony read here).

Although their work includes a Monte-Carlo significance test for the synchrony, the synchrony is based on a global metric. Clearly, distributed synchrony, in which different cell assemblies in the network synchronise at different times under the influence of stimuli, would have to be considered if the network is to respond to multiple stimuli.
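To make the "global metric plus Monte-Carlo test" idea concrete, here is a hedged sketch of what such a test can look like (the Fano-factor-style score, the time-shuffling null, and all function names are my own assumptions, not Hosaka et al.'s actual method): pool all spikes across the population into time bins, score synchrony as the variance-to-mean ratio of the bin counts, and compare against shuffled spike times.

```python
import random

def synchrony_score(spike_times, t_max, bin_ms=2.0):
    # Global synchrony measure: bin the pooled population spike times and
    # return the variance-to-mean ratio of the bin counts. A value near 1
    # suggests Poisson-like firing; large values suggest synchrony.
    n_bins = int(t_max / bin_ms)
    counts = [0] * n_bins
    for t in spike_times:
        b = int(t / bin_ms)
        if b < n_bins:
            counts[b] += 1
    mean = sum(counts) / n_bins
    var = sum((c - mean) ** 2 for c in counts) / n_bins
    return var / mean if mean > 0 else 0.0

def monte_carlo_p(spike_times, t_max, n_shuffles=200, seed=0):
    # Monte-Carlo significance: redraw the same number of spikes uniformly
    # in time and count how often the null score reaches the observed one.
    rng = random.Random(seed)
    observed = synchrony_score(spike_times, t_max)
    exceed = sum(
        1 for _ in range(n_shuffles)
        if synchrony_score([rng.uniform(0, t_max) for _ in spike_times],
                           t_max) >= observed
    )
    return (exceed + 1) / (n_shuffles + 1)
```

Because this score pools the whole population into one number, it cannot distinguish the distributed case the paragraph above raises: two cell assemblies each synchronising at different times would blur into one intermediate score, which is exactly why a per-assembly analysis would be needed for multiple stimuli.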