Andrews R, Diederich J, Tickle AB. (1995). Survey and critique of techniques for extracting rules from trained artificial neural networks Knowledge Based Systems. 8
Angluin D. (1987). Learning regular sets from queries and counterexamples Information And Computation. 75
Angluin D. (2004). Queries revisited Theoretical Computer Science. 313
Atlas L, Cohn DA, Ladner RE. (1994). Improving generalization with active learning Mach Learn. 15
Bergadano F, Gunetti D. (1996). Testing by means of inductive program learning ACM Transactions On Software Engineering And Methodology. 5
Blair A, Pollack J. (1997). Analysis of dynamical recognizers Neural Comput. 9
Blair A, Wiles J, Tonkes B. (1998). Inductive bias in context-free language learning Proceedings of the Ninth Australian Conference on Neural Networks.
Boden M, Wiles J. (2000). Context-free and context-sensitive dynamics in recurrent neural networks Connection Science. 12
Bryant CH, Muggleton SH, Page CD, Sternberg MJE. (1999). Combining active learning with inductive logic programming to close the loop in machine learning Proceedings of the AISB99 symposium on AI and scientific creativity (unpublished manuscript).
Casey M. (1996). The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction Neural Comput. 8
Chaitin GJ. (1987). Algorithmic information theory.
Chen D et al. (1992). Learning and extracting finite state automata with second-order recurrent neural networks Neural Comput. 4
Chen D et al. (1992). Extracting and learning an unknown grammar with recurrent neural networks Advances in neural information processing systems. 4
Christiansen MH, Chater N. (1999). Toward a connectionist model of recursion in human linguistic performance Cognitive Science. 23
Colton S, Bundy A, Walsh T. (2000). On the notion of interestingness in automated mathematical discovery Int J Human Computer Stud. 53
Cover TM, Thomas JA. (1991). Elements of Information Theory.
Crutchfield JP. (1994). The calculi of emergence: Computation, dynamics, and induction Physica D. 75
Crutchfield JP, Young K. (1990). Computation at the onset of chaos Complexity, entropy and the physics of information.
Devaney RL. (1992). A First Course in Chaotic Dynamical Systems: Theory and Experiment.
Elman JL. (1990). Finding structure in time Cognitive Science. 14
Elman JL, Wiles J. (1995). Learning to count without a counter: A case study of dynamics and activation landscapes in recurrent neural networks Proceedings of the Seventeenth Annual Conference of the Cognitive Science Society.
Everitt BS, Landau S, Leese M. (2001). Cluster analysis.
Garg VK, Young S. (1995). Model uncertainty in discrete event systems SIAM Journal On Control And Optimization. 33
Gers FA, Schmidhuber J. (2001). LSTM recurrent networks learn simple context-free and context-sensitive languages IEEE Transactions On Neural Networks. 12
Gold EM. (1967). Language identification in the limit Information And Control. 10
Golea M, Andrews R, Diederich J, Tickle AB. (1998). The truth will come to light: Directions and challenges in extracting the knowledge embedded within mined artificial neural networks IEEE Transactions On Neural Networks. 9
Hammer B, Tino P. (2003). Recurrent neural networks with small weights implement definite memory machines Neural Comput. 15
Hopcroft J, Ullman J. (1979). Introduction to automata theory, languages, and computation.
Jacobsson H. (2005). Rule extraction from recurrent neural networks: A taxonomy and review Neural Comput. 17
Jacobsson H, Ziemke T. (2003). Improving procedures for evaluation of connectionist context-free language predictors IEEE Transactions On Neural Networks. 14
Jacobsson H, Ziemke T. (2003). Reducing complexity of rule extraction from prediction RNNs through domain interaction Tech. Rep. No. HS-IDA-TR-03-007.
Jacobsson H, Ziemke T. (2005). Rethinking rule extraction from recurrent neural networks Paper presented at the IJCAI-05 Workshop on Neural-Symbolic Learning and Reasoning.
Kolen JF, Kremer SC. (2001). A field guide to dynamical recurrent networks.
Kremer SC. (2001). Spatiotemporal connectionist networks: A taxonomy and review Neural Comput. 13
Kumar R, Garg VK. (2001). Control of stochastic discrete event systems modeled by probabilistic languages IEEE Transactions On Automatic Control. 46
Lang KJ. (1992). Random DFAs can be approximately learned from sparse uniform examples Proceedings of the Fifth ACM Workshop on Computational Learning Theory.
Ljung L. (1999). System identification: Theory for the user (2nd ed).
Manolios P, Fanelli R. (1994). First order recurrent neural networks and deterministic finite state automata Neural Comput. 6
Marculescu D, Marculescu R, Pedram M. (1996). Stochastic sequential machine synthesis targeting constrained sequence generation DAC96: Proceedings of the 33rd Annual Conference on Design Automation.
McClelland JL, Servan-Schreiber D, Cleeremans A. (1989). Finite state automata and simple recurrent networks Neural Comput. 1
Moore EF. (1956). Gedanken-experiments on sequential machines Annals Of Mathematics Studies. 34
Muggleton S, De Raedt L. (1994). Inductive logic programming: Theory and methods Journal of Logic Programming. 19
Paz A. (1971). Introduction to probabilistic automata.
Pitts W, McCulloch WS. (1943). A logical calculus of the ideas immanent in nervous activity Bull Math Biophysics. 5
Popper KR. (1990). The logic of scientific discovery (14th ed).
Rabin MO. (1963). Probabilistic automata Information And Control. 6
Saito K, Langley P, Shrager J. (2002). Computational discovery of communicable scientific knowledge Logical and computational aspects of model-based reasoning.
Sharkey NE, Jackson SA. (1995). An internal report for connectionists Computational architectures integrating neural and symbolic processes.
Shavlik JW, Craven MW. (1994). Using sampling and queries to extract rules from trained neural networks Machine learning: Proceedings of the Eleventh International Conference.
Shavlik JW, Craven MW. (1996). Extracting tree-structured representations of trained networks Advances in neural information processing systems. 8
Shavlik JW, Craven MW. (1999). Rule extraction: Where do we go from here? Tech. Rep. No. Machine Learning Research Group Working Paper 99-1.
Simon HA. (1973). Does scientific discovery have a logic? Philosophy Of Science. 40
Simon HA. (1996). Machine discovery Foundations Of Science. 1
Tino P, Cernanský M, Benusková L. (2004). Markovian architectural bias of recurrent neural networks IEEE Transactions On Neural Networks. 15
Tino P, Köteles M. (1999). Extracting finite-state representations from recurrent neural networks trained on chaotic symbolic sequences IEEE Transactions On Neural Networks. 10
Tino P, Vojtek V. (1998). Extracting stochastic machines from recurrent neural networks trained on complex symbolic sequences Neural Network World. 8
Vahed A, Omlin CW. (2004). A machine learning method for extracting symbolic knowledge from recurrent neural networks Neural Comput. 16
Valiant LG. (1984). A theory of the learnable Communications Of The ACM. 27
Watrous RL, Kuhn GM. (1992). Induction of finite-state automata using second-order recurrent networks Advances in neural information processing systems. 4
Williamson J. (2004). A dynamic interaction between machine learning and the philosophy of science Minds And Machines. 14
Zeng Z, Goodman RM, Smyth P. (1993). Learning finite state machines with self-clustering recurrent networks Neural Comput. 5
de la Higuera C. (2005). A bibliographical study of grammatical inference Pattern Recognition. 38
Grüning A. (2007). Elman backpropagation as reinforcement for simple recurrent networks Neural Comput. 19