
GT Neuro Seminar Series


“Integrating New Knowledge into a Neural Network without Catastrophic Interference: Computational and Theoretical Investigations in a Hierarchically Structured Environment”

James L. McClelland, Ph.D.
Lucie Stern Professor in the Social Sciences
Director, Center for Mind, Brain and Computation
Department of Psychology
Stanford University, Stanford, CA

According to complementary learning systems theory, integrating new memories into a multi-layer neural network without interfering with what is already known depends on interleaving presentation of the new memories with ongoing presentations of items previously learned. This putative dependence is both costly for machine learning and biologically implausible for real brains, which are unlikely to have sufficient time for such massive interleaving, even during sleep. We use deep linear neural networks in hierarchically structured environments previously analyzed by Saxe, McClelland, and Ganguli to gain new insights into how integration of new knowledge might be made more efficient. The content of this type of environment can be described by the singular value decomposition (SVD) of the environment's input-output covariance matrix, in which each successive dimension corresponds to a categorical split in the hierarchy. Prior work by Saxe et al. showed that deep linear networks are sufficient to learn the content of the environment, and that they do so in a stage-like way, with the strength of each dimension rising from near zero to its maximum after a delay inversely proportional to that strength, capturing patterns previously observed in deeper non-linear neural networks by Rogers and McClelland (2004).

Several observations become accessible when we consider learning a new item not previously encountered in the micro-environment. (1) The item can be examined in terms of its projection onto the existing structure and the degree to which it adds a new categorical split. (2) To the extent the item projects onto existing structure, including it in the training corpus leads to rapid adjustment of the representations of the categories involved, while effectively no adjustment occurs to categories onto which the new item does not project at all. (3) Learning a new split, however, is slow, and its learning dynamics show the same delayed rise to maximum, with the delay depending on the dimension's strength. These observations motivate the development of ideas about how new information might be acquired efficiently, combining interleaved learning with other strategies.
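
As a rough illustration of the setup described in the abstract (not the speaker's code), the Python/NumPy sketch below trains a two-layer linear network on a small, hierarchically structured item-to-feature mapping and tracks learning along the singular dimensions of the input-output correlation matrix. The toy dataset, layer sizes, learning rate, and step count are illustrative assumptions chosen only to make the delayed, stage-like rise of each dimension visible.

import numpy as np

# Four items (one-hot inputs) and four features encoding a two-level hierarchy:
# feature 0 is shared by all items, feature 1 splits {0,1} from {2,3},
# features 2 and 3 split within each branch.
X = np.eye(4)                                  # items x input units
Y = np.array([[1,  1,  1,  0],
              [1,  1, -1,  0],
              [1, -1,  0,  1],
              [1, -1,  0, -1]], dtype=float)   # items x features

# SVD of the input-output correlation matrix; each singular dimension
# corresponds to one categorical split (coarse splits have larger strength).
Sigma = Y.T @ X
U, S, Vt = np.linalg.svd(Sigma)

rng = np.random.default_rng(0)
hidden = 16
W1 = rng.normal(scale=1e-3, size=(hidden, 4))  # small random initialization is what
W2 = rng.normal(scale=1e-3, size=(4, hidden))  # produces the delayed, sigmoidal rise
lr = 0.05

history = []
for step in range(4000):
    err = W2 @ W1 @ X.T - Y.T                  # current map minus target map
    gW2 = err @ (W1 @ X.T).T                   # gradients of summed squared error
    gW1 = W2.T @ err @ X
    W2 -= lr * gW2
    W1 -= lr * gW1
    eff = W2 @ W1                              # network's effective input-output map
    # Learned strength along each singular dimension of Sigma.
    history.append([U[:, k] @ eff @ Vt[k] for k in range(len(S))])

history = np.array(history)
for k, s in enumerate(S):
    crossed = np.nonzero(history[:, k] > s / 2)[0]
    half = crossed[0] if crossed.size else None
    print(f"dimension {k}: strength {s:.2f}, half-learned at step {half}")

In this toy run, stronger (coarser) dimensions cross half of their final strength after a shorter delay, while weaker dimensions, like the new categorical split in observation (3), linger near zero before a rapid rise. The same scaffolding could be extended by appending a hypothetical fifth item to X and Y to compare how quickly its projection onto existing dimensions is absorbed versus how slowly a genuinely new split is learned.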

This presentation can be viewed via BlueJeans: https://bluejeans.com/824485104/
