We suggest that any brain-like (artificial neural network based) learning system
will need a sleep-like mechanism for consolidating newly learned information if it
is to cope with the sequential, ongoing learning of significantly new information.
We summarise and explore two candidate computational accounts of this
consolidation process in Hopfield-type networks. The
"pseudorehearsal" method is based on the relearning of randomly selected
attractors in the network as the new information is added from some second system.
This process is supposed to reinforce old information within the network and
protect it from the disruption caused by learning the new inputs. The
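
To make the scheme concrete, here is a minimal Python sketch, assuming a standard
binary Hopfield network with Hebbian (outer-product) learning; the network size,
learning rate, and pattern counts are arbitrary illustrative choices, not values
from the original simulations:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 64  # number of units (arbitrary illustrative size)

    def hebbian_increment(W, pattern, lr=1.0 / N):
        # Standard outer-product (Hebbian) learning of one +/-1 pattern.
        W += lr * np.outer(pattern, pattern)
        np.fill_diagonal(W, 0.0)
        return W

    def settle(W, state, max_steps=50):
        # Synchronous sign updates until a fixed point (an attractor);
        # capped because synchronous dynamics can fall into two-cycles.
        for _ in range(max_steps):
            new = np.sign(W @ state)
            new[new == 0] = 1
            if np.array_equal(new, state):
                break
            state = new
        return state

    def sample_pseudoitems(W, n_items):
        # Probe the net with random states; the attractors they settle
        # into approximate what the network currently stores, with no
        # access to the original training patterns.
        probes = rng.choice([-1, 1], size=(n_items, N))
        return [settle(W, p) for p in probes]

    # Store a base population of random patterns.
    W = np.zeros((N, N))
    base = rng.choice([-1, 1], size=(6, N))
    for p in base:
        hebbian_increment(W, p)

    # Pseudorehearsal: interleave each new item (standing in for input
    # arriving from the second system) with relearning of sampled
    # attractors, reinforcing the old information.
    for item in rng.choice([-1, 1], size=(4, N)):
        for pseudo in sample_pseudoitems(W, n_items=8):
            hebbian_increment(W, pseudo)
        hebbian_increment(W, item)

    retained = sum(np.array_equal(settle(W, p), p) for p in base)
    print(f"{retained}/{len(base)} base patterns still stable")

The important property is that the pseudoitems are never stored separately: they
are simply whatever attractors random probes settle into, so relearning them
rehearses an approximation of the old population without requiring access to it.
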
"unlearning" method is based on the unlearning of randomly selected attractors in
the network after new information has already been learned. This process is
supposed to locate and remove the unwanted associations between information that
obscure the learned inputs. We suggest that as a computational model of sleep
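
Again as a rough sketch only, reusing np, rng, N, and settle from the code above,
one standard form of unlearning applies anti-Hebbian weakening to the attractors
reached from random probes; the unlearning rate and number of probes here are
arbitrary assumptions:

    def unlearn(W, n_probes=40, eps=0.01 / N):
        # Settle random probes to attractors, then subtract a small
        # Hebbian term for each: whatever states the free-running net
        # falls into most readily are preferentially weakened.
        for probe in rng.choice([-1, 1], size=(n_probes, N)):
            attractor = settle(W, probe)
            W -= eps * np.outer(attractor, attractor)
            np.fill_diagonal(W, 0.0)
        return W

    # Unlike pseudorehearsal, this runs only after the new items have
    # already been learned, as a separate post-hoc pass.
    unlearn(W)

Nothing in this pass requires a second system; the contrast with pseudorehearsal
is that the corrective process is subtractive and applied after the new learning,
rather than protective and applied during it.
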
We suggest that, as a computational model of sleep consolidation, the
pseudorehearsal approach is better supported by the psychological, evolutionary,
and neurophysiological data (in particular, it accounts for the role of the
hippocampus in consolidation).