📚 Course Materials

All the course material on one page!

Slides PDF

Classic readings on catastrophic forgetting

Catastrophic Forgetting; Catastrophic Interference; Stability; Plasticity; Rehearsal, by Anthony Robins. Connection Science, 123--146, 1995.

Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks, by Robert French. In Proceedings of the 13th Annual Cognitive Science Society Conference, 173--178, 1991. [sparsity]

Check out additional material for popular reviews and surveys on continual learning.

Lifelong Machine Learning, Second Edition, by Zhiyuan Chen and Bing Liu. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2018.

Slides PDF

Classic references on Catastrophic Forgetting provided above.

Does Continual Learning = Catastrophic Forgetting?, by A. Thai, S. Stojanov, I. Rehg, and J. M. Rehg, arXiv, 2021.

An Empirical Study of Example Forgetting during Deep Neural Network Learning, by M. Toneva, A. Sordoni, R. T. des Combes, A. Trischler, Y. Bengio, and G. J. Gordon, ICLR, 2019.

Compete to Compute, by R. K. Srivastava, J. Masci, S. Kazerounian, F. Gomez, and J. Schmidhuber, NIPS, 2013 (Permuted MNIST task).
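
The Permuted MNIST protocol referenced above builds a sequence of tasks by applying a different fixed pixel permutation to each task. A minimal NumPy sketch of how such a stream can be generated (the function name, task count and data loading are illustrative, not from the cited paper):

```python
import numpy as np

def make_permuted_mnist_tasks(x, y, num_tasks=5, seed=0):
    """Build a list of (x_perm, y) tasks, each defined by a fixed random pixel permutation.

    x: array of shape (num_samples, 784) with flattened MNIST images.
    y: array of shape (num_samples,) with labels (unchanged across tasks).
    """
    rng = np.random.default_rng(seed)
    tasks = []
    for _ in range(num_tasks):
        perm = rng.permutation(x.shape[1])  # one fixed permutation per task
        tasks.append((x[:, perm], y.copy()))
    return tasks

# Usage sketch: train sequentially on each task and track how accuracy on
# earlier permutations degrades (the forgetting being studied above).
```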

Slides PDF

CL scenarios

Three scenarios for continual learning, by G. M. van de Ven and A. S. Tolias, Continual Learning workshop at NeurIPS, 2018. Task-, domain- and class-incremental learning.

Continuous Learning in Single-Incremental-Task Scenarios, by D. Maltoni and V. Lomonaco, Neural Networks, vol. 116, pp. 56–73, 2019. New Instances (NI), New Classes (NC), and New Instances and Classes (NIC) content updates, combined with Single-Incremental-Task (SIT), Multi-Task (MT), and Multiple-Incremental-Task (MIT) settings.

Task-Free Continual Learning, by R. Aljundi, K. Kelchtermans, and T. Tuytelaars, CVPR, 2019.

Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams, by M. De Lange and T. Tuytelaars, ICCV, 2021. Data-incremental learning and comparisons with other CL scenarios.
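
The scenarios above differ mainly in what the model must predict at test time. A toy sketch of that difference, using an illustrative two-experience split of four classes (the function and names are ours, not from the cited papers):

```python
def allowed_outputs(scenario, experiences_seen, task_id=None):
    """Which output classes a model may choose from at test time, per scenario.

    scenario: "task", "domain", or "class" (van de Ven & Tolias, 2018).
    experiences_seen: list of class lists, e.g. [[0, 1], [2, 3]].
    task_id: provided only in the task-incremental scenario.
    """
    if scenario == "task":    # task label given: discriminate within that task only
        return experiences_seen[task_id]
    if scenario == "domain":  # no task label, but a shared output space across tasks
        return list(range(len(experiences_seen[0])))
    if scenario == "class":   # no task label: choose among all classes seen so far
        return sorted(c for exp in experiences_seen for c in exp)
    raise ValueError(scenario)

# After experiences [[0, 1], [2, 3]]:
# task-IL with task_id=1 -> [2, 3]; domain-IL -> [0, 1]; class-IL -> [0, 1, 2, 3]
```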

Survey presenting CL scenarios

Continual Learning for Robotics: Definition, Framework, Learning Strategies, Opportunities and Challenges, by Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat and Natalia Díaz-Rodríguez. Information Fusion, 52--68, 2020. Section 3, in particular.

CL benchmarks

CORe50: a New Dataset and Benchmark for Continuous Object Recognition, by V. Lomonaco and D. Maltoni, Proceedings of the 1st Annual Conference on Robot Learning, vol. 78, pp. 17–26, 2017.

OpenLORIS-Object: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning, by Q. She et al., ICRA, 2020.

Incremental Object Learning From Contiguous Views, by S. Stojanov et al., CVPR, 2019. CRIB benchmark.

Stream-51: Streaming Classification and Novelty Detection From Videos, by R. Roady, T. L. Hayes, H. Vaidya, and C. Kanan, CVPR, 2019.

Slides PDF

Efficient Lifelong Learning with A-GEM, by A. Chaudhry, M. Ranzato, M. Rohrbach, and M. Elhoseiny, ICLR, 2019. Evaluation protocol with "split by experiences".

Gradient Episodic Memory for Continual Learning, by D. Lopez-Paz and M. Ranzato, NIPS, 2017. Popular formalization of the ACC, BWT and FWT metrics.

CLEVA-Compass: A Continual Learning EValuation Assessment Compass to Promote Research Transparency and Comparability, by M. Mundt, S. Lang, Q. Delfosse, and K. Kersting, arXiv, 2021.

Don't forget, there is more than forgetting: new metrics for Continual Learning, by N. Díaz-Rodríguez, V. Lomonaco, D. Filliat, and D. Maltoni, arXiv, 2018. Definition of additional metrics.
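
The GEM paper above popularized metrics computed from a matrix R whose entry R[i, j] is the test accuracy on task j after training up to task i. A small NumPy sketch of that formalization (the random-initialization baseline b needed by FWT is taken as an input rather than estimated here):

```python
import numpy as np

def gem_metrics(R, b):
    """ACC, BWT and FWT as formalized by Lopez-Paz & Ranzato (2017).

    R: (T, T) matrix, R[i, j] = test accuracy on task j after training on task i.
    b: (T,) vector, test accuracy of a randomly initialized model on each task.
    """
    T = R.shape[0]
    acc = R[-1].mean()                                         # average accuracy after the last task
    bwt = np.mean([R[-1, j] - R[j, j] for j in range(T - 1)])  # backward transfer (forgetting if negative)
    fwt = np.mean([R[j - 1, j] - b[j] for j in range(1, T)])   # forward transfer
    return acc, bwt, fwt
```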

Slides PDF

Replay

GDumb: A Simple Approach that Questions Our Progress in Continual Learning, by A. Prabhu, P. H. S. Torr, and P. K. Dokania, ECCV, 2020.

Online Continual Learning with Maximal Interfered Retrieval, by R. Aljundi et al., NeurIPS, 2019.
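
Replay strategies like those above keep a small memory of past examples and mix it into each new mini-batch. A minimal sketch using reservoir sampling, a common buffer-update rule (a generic illustration, not the specific mechanism of GDumb or MIR):

```python
import random

class ReservoirBuffer:
    """Fixed-size memory holding an approximately uniform sample of the stream seen so far."""

    def __init__(self, capacity=200):
        self.capacity = capacity
        self.items = []   # stored (x, y) pairs
        self.seen = 0     # number of examples offered to the buffer

    def add(self, x, y):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append((x, y))
        else:
            j = random.randrange(self.seen)   # classic reservoir-sampling replacement rule
            if j < self.capacity:
                self.items[j] = (x, y)

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

# Usage sketch: at each training step, concatenate buffer.sample(batch_size)
# with the incoming mini-batch before computing the loss, then add the new
# examples to the buffer.
```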

Latent replay

Latent Replay for Real-Time Continual Learning, by Lorenzo Pellegrini, Gabriele Graffieti, Vincenzo Lomonaco, and Davide Maltoni, IROS, 2020.

Generative replay

Continual Learning with Deep Generative Replay, by H. Shin, J. K. Lee, J. Kim, and J. Kim, NeurIPS, 2017.

Brain-inspired replay for continual learning with artificial neural networks, by G. M. van de Ven, H. T. Siegelmann, and A. S. Tolias, Nature Communications, 2020.

Slides PDF

L1, L2, Dropout

An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks, by Goodfellow et al., 2015.

Understanding the Role of Training Regimes in Continual Learning, by Mirzadeh et al., NeurIPS, 2020.

Regularization strategies

Learning without Forgetting, by Li et al., TPAMI, 2017.

Overcoming catastrophic forgetting in neural networks, by Kirkpatrick et al., PNAS, 2017.

Continual Learning Through Synaptic Intelligence, by Zenke et al., 2017.

Continual learning with hypernetworks, by von Oswald et al., ICLR, 2020.
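
Regularization strategies such as EWC above add a penalty anchoring parameters that were important for earlier tasks. A hedged PyTorch sketch of the quadratic EWC term (the diagonal Fisher estimates and anchor parameters are assumed to have been saved after the previous task; lam is a tunable strength):

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """EWC penalty from Kirkpatrick et al. (2017): (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    fisher, old_params: dicts from parameter name to tensors stored after the previous task.
    """
    loss = torch.zeros(())
    for name, param in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# Usage sketch: total_loss = task_loss + ewc_penalty(model, fisher, old_params)
```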

Architectural strategies

Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches, by Lomonaco et al., CLVision Workshop at CVPR, 2020. CWR*.

Progressive Neural Networks, by Rusu et al., arXiv, 2016.

PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning, by Mallya et al., CVPR, 2018.

Overcoming catastrophic forgetting with hard attention to the task, by Serra et al., ICML, 2018.

Supermasks in Superposition, by Wortsman et al., NeurIPS, 2020.

Slides PDF

Hybrid strategies

Gradient Episodic Memory for Continual Learning, by Lopez-Paz et al., NeurIPS, 2017. GEM.

iCaRL: Incremental Classifier and Representation Learning, by Rebuffi et al., CVPR, 2017.

Progress & Compress: A scalable framework for continual learning, by Schwarz et al, ICML, 2018.

Latent Replay for Real-Time Continual Learning, by L. Pellegrini et al., IROS, 2020. AR1*.
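
GEM, listed above, constrains each update so that the loss on an episodic memory of earlier tasks does not increase; its A-GEM simplification (cited in the evaluation section) checks a single reference gradient computed on the memory. A small sketch of that projection on flattened gradients (collecting and re-assigning the gradients is left to the surrounding training loop):

```python
import torch

def agem_project(grad, ref_grad):
    """Project the current gradient so it does not conflict with the memory gradient.

    grad:     flattened gradient of the loss on the current mini-batch.
    ref_grad: flattened gradient of the loss on a batch drawn from episodic memory.
    """
    dot = torch.dot(grad, ref_grad)
    if dot < 0:  # the proposed update would increase the loss on the memory
        grad = grad - (dot / torch.dot(ref_grad, ref_grad)) * ref_grad
    return grad
```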

Applications

Continual Learning at the Edge: Real-Time Training on Smartphone Devices, by L. Pellegrini et al., ESANN, 2021.

Continual Learning in Practice, by T. Diethe et al., Continual Learning Workshop at NeurIPS, 2018.

Startups / Companies: CogitAI, Neurala, Gantry

Tools / Libraries: Avalanche, Continuum, Sequoia, CL-Gym

Slides PDF

Embracing Change: Continual Learning in Deep Neural Networks, by Hadsell et al., Trends in Cognitive Sciences, 2020. Continual meta-learning and meta continual learning.

Towards Continual Reinforcement Learning: A Review and Perspectives, by Khetarpal et al., arXiv, 2020.

Continual Unsupervised Representation Learning, by D. Rao et al., NeurIPS, 2019.

Distributed Continual Learning

Ex-Model: Continual Learning from a Stream of Trained Models, by Carta et al., arXiv, 2021.

Continual Sequence Learning

Continual learning for recurrent neural networks: An empirical evaluation, by Cossu et al., Neural Networks, vol. 143, pp. 607–627, 2021.

Continual Learning with Echo State Networks, by Cossu et al., ESANN, 2021.

Software

Avalanche: an End-to-End Library for Continual Learning, the PyTorch-based software library used for the coding sessions of this course.
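
A minimal sketch of a continual training loop with Avalanche, in the spirit of its getting-started examples (module paths follow recent avalanche-lib releases and may differ in older versions):

```python
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.supervised import Naive

# Class-incremental MNIST split into 5 experiences of 2 classes each.
benchmark = SplitMNIST(n_experiences=5)

model = SimpleMLP(num_classes=10)
strategy = Naive(
    model,
    torch.optim.SGD(model.parameters(), lr=0.01),
    torch.nn.CrossEntropyLoss(),
    train_mb_size=32,
    train_epochs=1,
    eval_mb_size=32,
)

# Train on each experience in sequence, evaluating on the full test stream.
for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```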

ContinualAI Colab notebooks, a collection of notebooks coding continual learning from scratch.
