A common problem in tasks involving the integration of spatial information from multiple senses, or in sensorimotor coordination, is that different modalities represent space in different frames of reference. Coordinate transformations between these reference frames are therefore required. One way to achieve this relies on encoding spatial information with population codes. The set of network responses to stimuli at different locations (tuning curves) constitutes a basis set of functions that can be combined linearly, through weighted synaptic connections, to approximate non-linear transformations of the input variables. The question then arises of how the appropriate synaptic connectivity is obtained. This model shows that a network of spiking neurons can learn the coordinate transformation from one frame of reference to another, with connectivity that develops continuously in an unsupervised manner, based only on the correlations available in the environment, and with a biologically realistic plasticity mechanism (spike timing-dependent plasticity).
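To make the two ingredients concrete, the following is a minimal Python/NumPy sketch, not the published NEURON code: a pair-based STDP weight update of the general kind referred to above, and a demonstration that a linear readout of Gaussian tuning curves (a basis set) can approximate a non-linear transformation. All function names, parameter names, and values here are illustrative assumptions; the model's actual parameters and implementation are given in the paper and the accompanying ModelDB source.

```python
import numpy as np

# Illustrative STDP parameters (assumed values, not those of the model)
A_PLUS, A_MINUS = 0.005, 0.00525   # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants (ms)

def stdp_dw(t_pre, t_post):
    """Pair-based STDP: weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:                      # pre before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:                           # post before (or with) pre -> depression
        return -A_MINUS * np.exp(dt / TAU_MINUS)

print("dw for pre 5 ms before post:", stdp_dw(0.0, 5.0))

# Gaussian tuning curves as a basis set over a 1-D spatial variable x
def tuning_curves(x, centers, sigma=0.1):
    return np.exp(-(x - centers[:, None])**2 / (2 * sigma**2))

centers = np.linspace(-1, 1, 20)
x = np.linspace(-1, 1, 200)
basis = tuning_curves(x, centers)   # shape (20, 200): population responses

# A linear readout of the basis set can approximate a non-linear map.
# Here the weights are fit directly (least squares) for illustration;
# in the model they emerge online through STDP from correlated activity.
target = np.sin(np.pi * x)          # arbitrary non-linear target function
w, *_ = np.linalg.lstsq(basis.T, target, rcond=None)
approx = w @ basis
print("max approximation error:", np.max(np.abs(approx - target)))
```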
Model Type: Realistic Network
Region(s) or Organism(s): Generic
Model Concept(s): Synaptic Plasticity; Long-term Synaptic Plasticity; Unsupervised Learning; STDP
Simulation Environment: NEURON
Implementer(s): Davison, Andrew [Andrew.Davison at iaf.cnrs-gif.fr]
References:
Davison AP, Frégnac Y (2006). Learning cross-modal spatial transformations through spike timing-dependent plasticity. The Journal of Neuroscience, 26. [PubMed]