In this review we explore sequential learning in artificial neural networks: the setting in which the information to be learned and retained arrives in separate episodes over time. Most neural networks handle this kind of task very badly, as new learning completely disrupts information that the network has previously learned. This problem, known as "catastrophic forgetting", has received considerable attention in the literature. We illustrate the catastrophic forgetting effect and summarise possible solutions. In particular, we review the literature on the pseudorehearsal mechanism, an effective solution to the catastrophic forgetting problem in backpropagation-type networks. We then review the related issues of capacity, forgetting, and the use of pseudorehearsal in Hopfield-type networks. Finally, we briefly discuss these issues in the context of cognition, and summarise interesting topics for further research.
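The pseudorehearsal mechanism mentioned above can be sketched in a few lines: a network trained on one task labels random inputs with its own current outputs, and these "pseudoitems" are then rehearsed alongside the new task so that the previously learned function is protected. The following is a minimal illustrative sketch only; the network sizes, tasks, and training settings are our own assumptions, not taken from the review.

```python
import numpy as np

# A minimal sketch of pseudorehearsal in a backpropagation network, assuming
# a tiny two-layer net trained by batch gradient descent on MSE loss; all
# names, sizes, and learning rates are illustrative choices.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init(n_in=4, n_hid=8, n_out=1):
    return {"W1": rng.normal(0, 0.5, (n_in, n_hid)),
            "W2": rng.normal(0, 0.5, (n_hid, n_out))}

def forward(net, X):
    h = np.tanh(X @ net["W1"])
    return sigmoid(h @ net["W2"]), h

def mse(net, X, Y):
    out, _ = forward(net, X)
    return float(np.mean((out - Y) ** 2))

def train(net, X, Y, epochs=3000, lr=0.5):
    for _ in range(epochs):
        out, h = forward(net, X)
        d_out = (out - Y) * out * (1 - out)         # MSE through sigmoid output
        d_h = (d_out @ net["W2"].T) * (1 - h ** 2)  # through tanh hidden layer
        net["W2"] -= lr * h.T @ d_out / len(X)
        net["W1"] -= lr * X.T @ d_h / len(X)
    return net

# Two disjoint sets of input->target associations, learned in sequence.
XA = np.array([[0,0,0,1],[0,0,1,0],[0,1,0,0],[1,0,0,0]], float)
YA = np.array([[1],[0],[1],[0]], float)
XB = np.array([[1,1,0,0],[0,0,1,1],[1,0,1,0],[0,1,0,1]], float)
YB = np.array([[0],[1],[0],[1]], float)

net = init()
err_A_init = mse(net, XA, YA)
train(net, XA, YA)                      # phase 1: learn task A
err_A_learned = mse(net, XA, YA)

# Pseudorehearsal: before learning task B, sample random "pseudoitems" and
# label them with the network's OWN current outputs, approximating the
# function learned so far; these are then rehearsed alongside task B.
Xp = rng.integers(0, 2, (32, 4)).astype(float)
Yp, _ = forward(net, Xp)

control = {k: v.copy() for k, v in net.items()}
train(control, XB, YB)                  # task B alone: catastrophic forgetting
train(net, np.vstack([XB, Xp]), np.vstack([YB, Yp]))  # task B + pseudoitems

err_A_plain = mse(control, XA, YA)
err_A_rehearsed = mse(net, XA, YA)
print(f"error on A: learned={err_A_learned:.3f}, "
      f"after B alone={err_A_plain:.3f}, "
      f"with pseudorehearsal={err_A_rehearsed:.3f}")
```

Comparing the error on task A for the control network (trained on B alone) against the pseudorehearsal network gives a rough measure of how much of task A each run retains.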