Craig Atkinson, Department of Computer Science
Achieving Continual Learning in Deep Neural Networks
Neural networks can achieve extraordinary results on a wide variety of tasks. However, when they attempt to learn a sequence of tasks, they tend to learn the new task while catastrophically forgetting previous ones. One solution to this problem is pseudo-rehearsal, which involves learning the new task while rehearsing generated items representative of previous tasks. Our model combines pseudo-rehearsal with a deep generative model and a dual memory system, resulting in a method that prevents forgetting without needing to revisit or store raw data from past tasks. Our model iteratively learns three Atari 2600 games while retaining above-human-level performance on all three games and performing as well as a network that rehearses from real data. Furthermore, previous state-of-the-art solutions demonstrate substantial forgetting compared to our model on these complex deep reinforcement learning tasks.
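The core pseudo-rehearsal idea in the abstract can be sketched in a few lines: a generative model produces pseudo-items, the frozen previous network labels them, and the mixed batch trains the new network. The sketch below is a minimal illustration under stated assumptions; `generator`, `old_model`, and the linear weights are hypothetical stand-ins, not the seminar's actual architecture (which uses a deep generative model and a dual memory system on Atari games).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the deep generative model: in the real method
# this would synthesize inputs resembling past-task states; here it just
# samples random vectors of the right shape.
def generator(n):
    return rng.normal(size=(n, 4))

# Frozen copy of the network from before the new task; its outputs on
# pseudo-items become the rehearsal targets (a toy linear model here).
W_old = rng.normal(size=(4, 2))
def old_model(x):
    return x @ W_old

def pseudo_rehearsal_batch(new_x, new_y, n_pseudo):
    """Mix real new-task data with generated pseudo-items labeled by the
    old network, so no raw data from past tasks needs to be stored."""
    pseudo_x = generator(n_pseudo)
    pseudo_y = old_model(pseudo_x)  # old responses serve as soft targets
    x = np.concatenate([new_x, pseudo_x])
    y = np.concatenate([new_y, pseudo_y])
    return x, y

new_x = rng.normal(size=(8, 4))
new_y = rng.normal(size=(8, 2))
x, y = pseudo_rehearsal_batch(new_x, new_y, n_pseudo=8)
print(x.shape, y.shape)  # (16, 4) (16, 2)
```

Training the new network on `(x, y)` then balances fitting the new task against reproducing the old network's behaviour on the generated items.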