
## References and models cited by this paper

Bengio Y, Simard P, Frasconi P (1994) Learning long-term dependencies with gradient descent is difficult. IEEE Trans Neural Netw 5:157-66 [Journal] [PubMed]
Boden M, Wiles J (2000) Context-free and context-sensitive dynamics in recurrent neural networks. Connection Science 12:196-210
Carandini M, Heeger DJ (1994) Summation and division by neurons in primate visual cortex. Science 264:1333-6 [PubMed]
Christiansen MH, Chater N (1999) Toward a connectionist model of recursion in human linguistic performance. Cogn Sci 23:157-205
Christiansen MH, Chater N (1999) Connectionist natural language processing: The state of the art. Cogn Sci 28:417-437
Cleeremans A, Servan-Schreiber D, McClelland JL (1989) Finite state automata and simple recurrent networks. Neural Comput 1:372-381
Cover TM, Thomas JA (1991) Elements of Information Theory
Crutchfield JP (1994) The calculi of emergence: Computation, dynamics, and induction. Physica D 75:11-54
de Kamps M, van der Velde F (2006) Neural blackboard architectures: the realization of compositionality and systematicity in neural networks. J Neural Eng 3:R1-12 [Journal] [PubMed]
Ellis R, Humphreys G (1999) Connectionist Psychology
Felleman DJ, Van Essen DC (1991) Distributed hierarchical processing in the primate cerebral cortex. Cereb Cortex 1:1-47 [PubMed]
Gruning A (2004) Neural networks and the complexity of languages. Unpublished doctoral dissertation, University of Leipzig
Gruning A (2005) Back-propagation as reinforcement in prediction tasks. Proc Intl Conf Artificial Neural Networks, Duch W, Kacprzyk J, Oja E, Zadrozny S, ed. pp. 547
Gruning A (2006) Stack- and queue-like dynamics in recurrent neural networks. Connection Science 18:23-42
Hammer B, Tino P (2003) Recurrent neural networks with small weights implement definite memory machines. Neural Comput 15:1897-1929
Hopcroft J, Ullman J (1979) Introduction to Automata Theory, Languages, and Computation
Jackendoff R (2002) Foundations of Language: Brain, Meaning, Grammar, Evolution
Jacobsson H (2006) The crystallizing substochastic sequential machine extractor: CrySSMEx. Neural Comput 18:2211-55 [Journal] [PubMed]
Kitchens BP (1998) Symbolic Dynamics
Kuan CM, Hornik K, White H (1994) A convergence result for learning in recurrent neural networks. Neural Comput 6:420-440
Lind D, Marcus B (1995) An Introduction to Symbolic Dynamics and Coding
Moore C (1998) Dynamical recognizers: Real-time language recognition by analog computers. Theoretical Computer Science 201:99-136
Nowlan SJ, Sejnowski TJ (1995) A selection model for motion processing in area MT of primates. J Neurosci 15:1195-214 [PubMed]
Rodriguez P (2001) Simple recurrent networks learn context-free and context-sensitive languages by counting. Neural Comput 13:2093-118 [Journal] [PubMed]
Roelfsema PR, van Ooyen A (2005) Attention-gated reinforcement learning of internal representations for classification. Neural Comput 17:2176-214 [Journal] [PubMed]
Rowland BA, Maida AS, Berkeley ISN (2006) Synaptic noise as a means of implementing weight-perturbation learning. Connection Science 18:69-79
Schultz W (1998) Predictive reward signal of dopamine neurons. J Neurophysiol 80:1-27 [Journal] [PubMed]
Sutton RS, Barto AG (2002) Reinforcement Learning: An Introduction (2nd ed) [Journal]
Tino P, Cernanský M, Benusková L (2004) Markovian architectural bias of recurrent neural networks. IEEE Trans Neural Netw 15:6-15 [Journal] [PubMed]
Tino P, Dorffner G (2001) Predicting the future of discrete sequences from fractal representations of the past. Mach Learn 45:187-217
Tremblay L, Schultz W (1999) Relative reward preference in primate orbitofrontal cortex. Nature 398:704-8 [Journal] [PubMed]
Usher M, McClelland JL (2001) The time course of perceptual choice: the leaky, competing accumulator model. Psychol Rev 108:550-92 [PubMed]
Williams RJ, Peng J (1990) An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural Comput 2:490-501
Williams RJ, Zipser D (1989) A learning algorithm for continually running fully recurrent neural networks. Neural Comput 1:270-280
Wörgötter F, Porr B (2005) Temporal sequence learning, prediction, and control: a review of different models and their relation to biological mechanisms. Neural Comput 17:245-319 [Journal] [PubMed]