Alternative time representation in dopamine models (Rivest et al. 2009)


Rivest F, Kalaska JF, Bengio Y. (2010). Alternative time representation in dopamine models. Journal of computational neuroscience. 28 [PubMed]


References and models cited by this paper

Bakker B. (2002). Reinforcement learning with long short-term memory Neural information processing systems.

Barto AG. (1995). Adaptive critics and the basal ganglia Models of Information Processing in the Basal Ganglia.

Barto AG, Sutton RS. (1998). Reinforcement learning: an introduction.

Bertin M, Schweighofer N, Doya K. (2007). Multiple model-based reinforcement learning explains dopamine neuronal activity. Neural networks : the official journal of the International Neural Network Society. 20 [PubMed]

Beylin AV et al. (2001). The role of the hippocampus in trace conditioning: temporal discontinuity or task difficulty? Neurobiology of learning and memory. 76 [PubMed]

Brody CD, Hernández A, Zainos A, Romo R. (2003). Timing and neural encoding of somatosensory parametric working memory in macaque prefrontal cortex. Cerebral cortex (New York, N.Y. : 1991). 13 [PubMed]

Brown J, Bullock D, Grossberg S. (1999). How the basal ganglia use parallel excitatory and inhibitory learning pathways to selectively respond to unexpected rewarding cues. The Journal of neuroscience : the official journal of the Society for Neuroscience. 19 [PubMed]

Buhusi CV, Meck WH. (2000). Timing for the absence of a stimulus: the gap paradigm reversed. Journal of experimental psychology. Animal behavior processes. 26 [PubMed]

Buhusi CV, Meck WH. (2005). What makes us tick? Functional and neural mechanisms of interval timing. Nature reviews. Neuroscience. 6 [PubMed]

Church RM. (2003). A concise introduction to scalar timing theory Functional and neural mechanisms of interval timing.

Clark RE, Squire LR. (1998). Classical conditioning and brain systems: the role of awareness. Science (New York, N.Y.). 280 [PubMed]

Cohen JD, Braver TS. (2000). On the control of control: The role of dopamine in regulating prefrontal function and working memory Control of cognitive processes: Attention and performance XVIII.

Cohen JD, Niv Y, Todd MT. (2009). Learning to use working memory in partially observable environments through dopaminergic reinforcement Neural information processing systems.

Daw ND, Courville AC, Touretzky DS. (2006). Representation and timing in theories of the dopamine system. Neural computation. 18 [PubMed]

Daw ND, Doya K. (2006). The computational neurobiology of learning and reward. Current opinion in neurobiology. 16 [PubMed]

Dormont JF, Condé H, Farin D. (1998). The role of the pedunculopontine tegmental nucleus in relation to conditioned motor performance in the cat. I. Context-dependent and reinforcement-related single unit activity. Experimental brain research. 121 [PubMed]

Doya K. (1999). What are the computations of the cerebellum, the basal ganglia and the cerebral cortex? Neural networks : the official journal of the International Neural Network Society. 12 [PubMed]

Doya K. (2000). Complementary roles of basal ganglia and cerebellum in learning and motor control. Current opinion in neurobiology. 10 [PubMed]

Dragoi V, Staddon JE, Palmer RG, Buhusi CV. (2003). Interval timing as an emergent learning property. Psychological review. 110 [PubMed]

Durstewitz D. (2004). Neural representation of interval time. Neuroreport. 15 [PubMed]

Eck D, Schmidhuber J. (2002). Learning the long-term structure of the blues. Artificial Neural Networks - ICANN 2002.

Fiorillo CD, Tobler PN, Schultz W. (2003). Discrete coding of reward probability and uncertainty by dopamine neurons. Science (New York, N.Y.). 299 [PubMed]

Florian RV. (2007). Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural computation. 19 [PubMed]

Fukuda M, Ono T, Nakamura K, Tamura R. (1990). Dopamine and ACh involvement in plastic learning by hypothalamic neurons in rats. Brain research bulletin. 25 [PubMed]

Fukuda M, Ono T, Nishino H, Nakamura K. (1986). Neuronal responses in monkey lateral hypothalamus during operant feeding behavior. Brain research bulletin. 17 [PubMed]

Funahashi S, Bruce CJ, Goldman-Rakic PS. (1989). Mnemonic coding of visual space in the monkey's dorsolateral prefrontal cortex. Journal of neurophysiology. 61 [PubMed]

Gallistel CR, Gibbon J. (2000). Time, rate, and conditioning. Psychological review. 107 [PubMed]

Gers FA, Schmidhuber J, Cummins F. (2000). Learning to forget: continual prediction with LSTM. Neural computation. 12 [PubMed]

Hochreiter S, Schmidhuber J. (1997). Long short-term memory. Neural computation. 9 [PubMed]

Hollerman JR, Schultz W. (1998). Dopamine neurons report an error in the temporal prediction of reward during learning. Nature neuroscience. 1 [PubMed]

Hopson JW. (2003). Timing without a clock Functional and neural mechanisms of interval timing.

Houk JC, Adams JL, Barto AG. (1995). A model of how the basal ganglia generate and use neural signals that predict reinforcement. Models of Information Processing in the Basal Ganglia.

Ivry RB, Schlerf JE. (2008). Dedicated and intrinsic models of time perception. Trends in cognitive sciences. 12 [PubMed]

Izhikevich EM. (2007). Solving the distal reward problem through linkage of STDP and dopamine signaling. Cerebral cortex (New York, N.Y. : 1991). 17 [PubMed]

Joel D, Weiner I. (2000). The connections of the dopaminergic system with the striatum in rats and primates: an analysis with respect to the functional and compartmental organization of the striatum. Neuroscience. 96 [PubMed]

Kalaska JF, Bengio Y, Rivest F. (2005). Brain inspired reinforcement learning Neural information processing systems.

Karmarkar UR, Buonomano DV. (2007). Timing in the absence of clocks: encoding time in neural network states. Neuron. 53 [PubMed]

Kolodziejski C, Porr B, Wörgötter F. (2009). On the asymptotic equivalence between differential Hebbian and temporal difference learning. Neural computation. 21 [PubMed]

Komura Y et al. (2001). Retrospective and prospective coding for predicted reward in the sensory thalamus. Nature. 412 [PubMed]

Kötter R, Wickens J. (1995). Cellular models of reinforcement Models of Information Processing in the Basal Ganglia.

Laubach M. (2005). Who's on first? What's on second? The time course of learning in corticostriatal systems. Trends in neurosciences. 28

Lebedev MA, O'Doherty JE, Nicolelis MA. (2008). Decoding of temporal intervals from cortical ensemble activity. Journal of neurophysiology. 99 [PubMed]

Levitt H. (1971). Transformed up-down methods in psychoacoustics. The Journal of the Acoustical Society of America. 49 [PubMed]

Lewis PA. (2002). Finding the timer. Trends in cognitive sciences. 6 [PubMed]

Ljungberg T, Apicella P, Schultz W. (1992). Responses of monkey dopamine neurons during learning of behavioral reactions. Journal of neurophysiology. 67 [PubMed]

Lucchetti C, Bon L. (2001). Time-modulated neuronal activity in the premotor cortex of macaque monkeys. Experimental brain research. 141 [PubMed]

Lucchetti C, Ulrici A, Bon L. (2005). Dorsal premotor areas of nonhuman primate: functional flexibility in time domain. European journal of applied physiology. 95 [PubMed]

Ludvig EA, Sutton RS, Kehoe EJ. (2008). Stimulus representation and the timing of reward-prediction errors in models of the dopamine system. Neural computation. 20 [PubMed]

Mirenowicz J, Schultz W. (1994). Importance of unpredictability for reward responses in primate dopamine neurons. Journal of neurophysiology. 72 [PubMed]

Montague PR, Dayan P, Sejnowski TJ. (1996). A framework for mesencephalic dopamine systems based on predictive Hebbian learning. The Journal of neuroscience : the official journal of the Society for Neuroscience. 16 [PubMed]

Montague PR, Hyman SE, Cohen JD. (2004). Computational roles for dopamine in behavioural control. Nature. 431 [PubMed]

Morris G, Arkadir D, Nevet A, Vaadia E, Bergman H. (2004). Coincident but distinct messages of midbrain dopamine and striatal tonically active neurons. Neuron. 43 [PubMed]

Morris G, Nevet A, Arkadir D, Vaadia E, Bergman H. (2006). Midbrain dopamine neurons encode decisions for future action. Nature neuroscience. 9 [PubMed]

Newsome WT, Schultz W, Fiorillo CD. (2008). The temporal precision of reward prediction in dopamine neurons. Nature neuroscience. 11

O'Reilly RC, Frank MJ. (2006). Making working memory work: a computational model of learning in the prefrontal cortex and basal ganglia. Neural computation. 18 [PubMed]

Otani S, Daniel H, Roisin MP, Crepel F. (2003). Dopaminergic modulation of long-term synaptic plasticity in rat prefrontal neurons. Cerebral cortex (New York, N.Y. : 1991). 13 [PubMed]

Pan WX, Schmidt R, Wickens JR, Hyland BI. (2005). Dopamine cells respond to predicted events during classical conditioning: evidence for eligibility traces in the reward-learning network. The Journal of neuroscience : the official journal of the Society for Neuroscience. 25 [PubMed]

Potjans W, Morrison A, Diesmann M. (2009). A spiking neural network model of an actor-critic learning agent. Neural computation. 21 [PubMed]

Reynolds JN, Wickens JR. (2002). Dopamine-dependent plasticity of corticostriatal synapses. Neural networks : the official journal of the International Neural Network Society. 15 [PubMed]

Roberts PD, Santiago RA, Lafferriere G. (2008). An implementation of reinforcement learning based on spike timing dependent plasticity. Biological cybernetics. 99 [PubMed]

Roesch MR, Calu DJ, Schoenbaum G. (2007). Dopamine neurons encode the better option in rats deciding between differently delayed or sized rewards. Nature neuroscience. 10 [PubMed]

Romo R, Apicella P, Schultz W, Scarnati E. (1995). Context-dependent activity in primate striatum reflecting past and future behavioral events Models of Information Processing in the Basal Ganglia.

Romo R, Brody CD, Hernández A, Lemus L. (1999). Neuronal correlates of parametric working memory in the prefrontal cortex. Nature. 399 [PubMed]

Rougier NP, Noelle DC, Braver TS, Cohen JD, O'Reilly RC. (2005). Prefrontal cortex and flexible cognitive control: rules without symbols. Proceedings of the National Academy of Sciences of the United States of America. 102 [PubMed]

Samejima K, Ueda Y, Doya K, Kimura M. (2005). Representation of action-specific reward values in the striatum. Science (New York, N.Y.). 310 [PubMed]

Schmidhuber J, Gers FA, Schraudolph N. (2002). Learning precise timing with LSTM recurrent networks. Journal of machine learning research. 3

Schultz W, Apicella P, Ljungberg T. (1993). Responses of monkey dopamine neurons to reward and conditioned stimuli during successive steps of learning a delayed response task. The Journal of neuroscience : the official journal of the Society for Neuroscience. 13 [PubMed]

Schultz W, Apicella P, Scarnati E, Ljungberg T. (1992). Neuronal activity in monkey ventral striatum related to the expectation of reward. The Journal of neuroscience : the official journal of the Society for Neuroscience. 12 [PubMed]

Schultz W, Dayan P, Montague PR. (1997). A neural substrate of prediction and reward. Science (New York, N.Y.). 275 [PubMed]

Suri RE, Schultz W. (1998). Learning of sequential movements by neural network model with dopamine-like reinforcement signal. Experimental brain research. 121 [PubMed]

Suri RE, Schultz W. (1999). A neural network model with dopamine-like reinforcement signal that learns a spatial delayed response task. Neuroscience. 91 [PubMed]

Sutton RS, Kehoe EJ, Ludvig EA, Verbeek E. (2009). A computational model of hippocampal function in trace conditioning Neural information processing systems.

Thibaudeau G, Potvin O, Allen K, Doré FY, Goulet S. (2007). Dorsal, ventral, and complete excitotoxic lesions of the hippocampus in rats failed to impair appetitive trace conditioning. Behavioural brain research. 185 [PubMed]

Touretzky DS, Daw ND, Courville AC. (2003). Timing and partial observability in the dopamine system Neural information processing systems.

Yang C, Balsam PD, Drew MR. (2002). Timing at the start of associative learning. Learning and motivation. 33

References and models that cite this paper