1st Edition

Proceedings of the 1993 Connectionist Models Summer School

    424 Pages
    by Psychology Press

    The result of the 1993 Connectionist Models Summer School, the papers in this volume exemplify the tremendous breadth and depth of research underway in the field of neural networks. Although the summer school has always leaned toward cognitive science and artificial intelligence, the diverse scientific backgrounds and research interests of the accepted students and invited faculty reflect the broad spectrum of areas contributing to neural networks, including artificial intelligence, cognitive science, computer science, engineering, mathematics, neuroscience, and physics. Providing an accurate picture of the state of the art in this fast-moving field, the proceedings of this intense two-week program of lectures, workshops, and informal discussions contain timely, high-quality work by the best and the brightest in the neural networks field.

    Contents:

    Part I: Neuroscience. T. Rebotier, J. Droulez, Sigma-Pi Properties of Spiking Neurons. H.S. Wan, D.S. Touretzky, A.D. Redish, Towards a Computational Theory of Rat Navigation. H.T. Blair, Evaluating Connectionist Models in Psychology and Neuroscience.

    Part II: Vision. J. Sirosh, R. Miikkulainen, Self-Organizing Feature Maps with Lateral Connections: Modeling Ocular Dominance. A.K. Bhattacharjya, B. Roysam, Joint Solution of Low, Intermediate, and High Level Vision Tasks by Global Optimization: Application to Computer Vision at Low SNR. T.B. Ghiselli-Crippa, P.W. Munro, Learning Global Spatial Structures from Local Associations.

    Part III: Cognitive Modeling. D. Ascher, A Connectionist Model of Auditory Morse Code Perception. V. Dragoi, J.E.R. Staddon, A Competitive Neural Network Model for the Process of Recurrent Choice. A.M. Lindemann, A Neural Network Simulation of Numerical Verbal-to-Arabic Transcoding. T. Lund, Combining Models of Single-Digit Arithmetic and Magnitude Comparison. I.E. Dror, Neural Network Models as Tools for Understanding High-Level Cognition: Developing Paradigms for Cognitive Interpretation of Neural Network Models.

    Part IV: Language. F.J. Eisenhart, Modeling Language as Sensorimotor Coordination. A. Govindjee, G. Dell, Structure and Content in Word Production: Why It's Hard to Say Dlorm. P. Gupta, Investigating Phonological Representations: A Modeling Agenda. H. Schütze, Y. Singer, Part-of-Speech Tagging Using a Variable Context Markov Model. M. Spivey-Knowlton, Quantitative Predictions from a Constraint-Based Theory of Syntactic Ambiguity Resolution. B.B. Tesar, Optimality Semantics.

    Part V: Symbolic Computation and Rules. K.G. Daugherty, M. Hare, What's in a Rule? The Past Tense by Some Other Name Might Be Called a Connectionist Net. A. Almor, M. Rindner, On the Proper Treatment of Symbolism -- A Lesson from Linguistics. L.F. Niklasson, Structure Sensitivity in Connectionist Models. M. Crucianu, Looking for Structured Representations in Recurrent Networks. I. Tchoumatchenko, Back Propagation with Understandable Results. M.W. Craven, J.W. Shavlik, Understanding Neural Networks via Rule Extraction and Pruning. A-H. Tan, Rule Learning and Extraction with Self-Organizing Neural Networks.

    Part VI: Recurrent Networks and Temporal Pattern Processing. J.F. Kolen, Recurrent Networks: State Machines or Iterated Function Systems? F. Cummins, R.F. Port, On the Treatment of Time in Recurrent Neural Networks. J.D. McAuley, Finding Metrical Structure in Time. C. Stevens, J. Wiles, Representations of Tonal Music: A Case Study in the Development of Temporal Relationships. M.A.S. Potts, D.S. Broomhead, J.P. Huke, Applications of Radial Basis Function Fitting to the Analysis of Dynamical Systems. M.E. Young, T.M. Bailey, Event Prediction: Faster Learning in a Layered Hebbian Network with Memory.

    Part VII: Control. S. Thrun, A. Schwartz, Issues in Using Function Approximation for Reinforcement Learning. P. Sabes, Approximating Q-Values with Basis Function Representations. K.L. Markey, Efficient Learning of Multiple Degree-of-Freedom Control Problems with Quasi-Independent Q-Agents. A.L. Tascillo, V.A. Skormin, Neural Adaptive Control of Systems with Drifting Parameters.

    Part VIII: Learning Algorithms and Architectures. R.C. O'Reilly, Temporally Local Unsupervised Learning: The MaxIn Algorithm for Maximizing Input Information. V.R. de Sa, Minimizing Disagreement for Self-Supervised Classification. S.N. Lindstaedt, Comparison of Two Unsupervised Neural Network Models for Redundancy Reduction. Z. Ghahramani, Solving Inverse Problems Using an EM Approach to Density Estimation. M. Finke, K-R. Müller, Estimating A-Posteriori Probabilities Using Stochastic Network Models.

    Part IX: Learning Theory. A.S. Weigend, On Overfitting and the Effective Number of Hidden Units. R. Dodier, Increase of Apparent Complexity Is Due to Decrease of Training Set Error. G.B. Orr, T.K. Leen, Momentum and Optimal Stochastic Search. R. Garcés, Scheme to Improve the Generalization Error. M.P. Perrone, General Averaging Results for Convex Optimization. R.A. Caruana, Multitask Connectionist Learning. Z. Cataltepe, Y.S. Abu-Mostafa, Estimating Learning Performance Using Hints.

    Part X: Simulation Tools. A. Jagota, A Simulator for Asynchronous Hopfield Models. A. Linden, An Object-Oriented Dataflow Approach for Better Designs of Neural Net Architectures.

    Editors

    Michael C. Mozer, Paul Smolensky, David S. Touretzky, Jeffrey L. Elman, and Andreas S. Weigend