Cortex learning models (Weber et al. 2006, Weber and Triesch 2006, Weber and Wermter 2006/7)


A simulator and the configuration files for three publications are provided. First, "A hybrid generative and predictive model of the motor cortex" (Weber et al. 2006) uses reinforcement learning to set up a toy action scheme, then uses unsupervised learning to "copy" the learnt action, and an attractor network to predict the hidden code of the unsupervised network. Second, "A Self-Organizing Map of Sigma-Pi Units" (Weber and Wermter 2006/7) learns frame-of-reference transformations on population codes in an unsupervised manner. Third, "A possible representation of reward in the learning of saccades" (Weber and Triesch 2006) implements saccade learning with two possible learning schemes, one for horizontal and one for vertical saccades.

Model Type: Connectionist Network

Model Concept(s): Rate-coding model neurons; Reinforcement Learning; Unsupervised Learning; Attractor Neural Network; Winner-take-all; Hebbian plasticity

Simulation Environment: C or C++ program

Implementer(s): Weber, Cornelius [cweber at fias.uni-frankfurt.de]; Elshaw, Mark [mark.elshaw at sunderland.ac.uk]

References:

Weber C, Wermter S, Elshaw M. (2006). A hybrid generative and predictive model of the motor cortex. Neural Networks 19. [PubMed]

Triesch J, Weber C. (2006). A possible representation of reward in the learning of saccades. Proc. of the Sixth International Workshop on Epigenetic Robots.

Weber C, Wermter S. (2006/7). A self-organizing map of sigma-pi units. Neurocomputing 70(13-15).
