## References and models cited by this paper

- Andrews R, Diederich J, Tickle AB (1995) Survey and critique of techniques for extracting rules from trained artificial neural networks
- Angluin D (1987) Learning regular sets from queries and counterexamples
- Angluin D (2004) Queries revisited
- Bergadano F, Gunetti D (1996) Testing by means of inductive program learning
- Blair A, Pollack J (1997) Analysis of dynamical recognizers
- Boden M, Wiles J (2000) Context-free and context-sensitive dynamics in recurrent neural networks
- Bryant CH, Muggleton SH, Page CD, Sternberg MJE (1999) Combining active learning with inductive logic programming to close the loop in machine learning
- Casey M (1996) The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction
- Chaitin GJ (1987)
- Christiansen MH, Chater N (1999) Toward a connectionist model of recursion in human linguistic performance
- Cleeremans A, Servan-Schreiber D, McClelland JL (1989) Finite state automata and simple recurrent networks
- Cohn DA, Atlas L, Ladner RE (1994) Improving generalization with active learning
- Colton S, Bundy A, Walsh T (2000) On the notion of interestingness in automated mathematical discovery
- Cover TM, Thomas JA (1991)
- Craven MW, Shavlik JW (1994) Using sampling and queries to extract rules from trained neural networks
- Craven MW, Shavlik JW (1996) Extracting tree-structured representations of trained networks
- Craven MW, Shavlik JW (1999) Rule extraction: Where do we go from here?
- Crutchfield JP (1994) The calculi of emergence: Computation, dynamics, and induction
- Crutchfield JP, Young K (1990) Computation at the onset of chaos
- de la Higuera C (2005) A bibliographical study of grammatical inference
- Devaney RL (1992)
- Elman JL (1990) Finding structure in time
- Everitt BS, Landau S, Leese M (2001)
- Gers FA, Schmidhuber J (2001) LSTM recurrent networks learn simple context-free and context-sensitive languages
- Giles CL, Miller CB, Chen D, Chen HH, Sun GZ, Lee YC (1992) Learning and extracting finite state automata with second-order recurrent neural networks
- Giles CL, Miller CB, Chen D, Sun GZ, Chen HH, Lee YC (1992) Extracting and learning an unknown grammar with recurrent neural networks
- Gold EM (1967) Language identification in the limit
- Hammer B, Tino P (2003) Recurrent neural networks with small weights implement definite memory machines
- Hopcroft J, Ullman J (1979)
- Jacobsson H (2005) Rule extraction from recurrent neural networks: A taxonomy and review
- Jacobsson H, Ziemke T (2003) Improving procedures for evaluation of connectionist context-free language predictors
- Jacobsson H, Ziemke T (2003) Reducing complexity of rule extraction from prediction RNNs through domain interaction
- Jacobsson H, Ziemke T (2005) Rethinking rule extraction from recurrent neural networks
- Kolen JF, Kremer SC (2001)
- Kremer SC (2001) Spatiotemporal connectionist networks: A taxonomy and review
- Kumar R, Garg VK (2001) Control of stochastic discrete event systems modeled by probabilistic languages
- Lang KJ (1992) Random DFAs can be approximately learned from sparse uniform examples
- Langley P, Shrager J, Saito K (2002) Computational discovery of communicable scientific knowledge
- Ljung L (1999)
- Manolios P, Fanelli R (1994) First-order recurrent neural networks and deterministic finite state automata
- Marculescu D, Marculescu R, Pedram M (1996) Stochastic sequential machine synthesis targeting constrained sequence generation
- McCulloch WS, Pitts W (1943) A logical calculus of the ideas immanent in nervous activity
- Moore EF (1956) Gedanken-experiments on sequential machines
- Muggleton S, De Raedt L (1994) Inductive logic programming: Theory and methods
- Paz A (1971)
- Popper KR (1990)
- Rabin MO (1963) Probabilistic automata
- Sharkey NE, Jackson SA (1995) An internal report for connectionists
- Simon HA (1973) Does scientific discovery have a logic?
- Simon HA (1996) Machine discovery
- Tickle AB, Andrews R, Golea M, Diederich J (1998) The truth will come to light: Directions and challenges in extracting the knowledge embedded within trained artificial neural networks
- Tino P, Cernanský M, Benusková L (2004) Markovian architectural bias of recurrent neural networks
- Tino P, Köteles M (1999) Extracting finite-state representations from recurrent neural networks trained on chaotic symbolic sequences
- Tino P, Vojtek V (1998) Extracting stochastic machines from recurrent neural networks trained on complex symbolic sequences
- Tonkes B, Blair A, Wiles J (1998) Inductive bias in context-free language learning
- Vahed A, Omlin CW (2004) A machine learning method for extracting symbolic knowledge from recurrent neural networks
- Valiant LG (1984) A theory of the learnable
- Watrous RL, Kuhn GM (1992) Induction of finite-state automata using second-order recurrent networks
- Wiles J, Elman JL (1995) Learning to count without a counter: A case study of dynamics and activation landscapes in recurrent neural networks
- Williamson J (2004) A dynamic interaction between machine learning and the philosophy of science
- Young S, Garg VK (1995) Model uncertainty in discrete event systems
- Zeng Z, Goodman RM, Smyth P (1993) Learning finite state machines with self-clustering recurrent networks

## References and models that cite this paper

- Grüning A (2007) Elman backpropagation as reinforcement for simple recurrent networks