📚 Course Materials
All the course material in one page!
Slides: 01_intro_and_motivation.pdf (4MB)
Classic readings on catastrophic forgetting
Catastrophic Forgetting, Rehearsal and Pseudorehearsal, by Anthony Robins. Connection Science, vol. 7, no. 2, pp. 123–146, 1995.
Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks, by Robert French. In Proceedings of the 13th Annual Cognitive Science Society Conference, pp. 173–178, 1991. [sparsity]
Check out additional material for popular reviews and surveys on continual learning.
Lifelong Machine Learning, Second Edition, by Zhiyuan Chen and Bing Liu. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2018.
Slides: 02_forgetting.pdf (2MB)
Classic references on catastrophic forgetting are provided above.
Does Continual Learning = Catastrophic Forgetting?, by A. Thai, S. Stojanov, I. Rehg, and J. M. Rehg, arXiv, 2021.
An Empirical Study of Example Forgetting during Deep Neural Network Learning, by M. Toneva, A. Sordoni, R. T. des Combes, A. Trischler, Y. Bengio, and G. J. Gordon, ICLR, 2019.
Compete to Compute, by R. K. Srivastava, J. Masci, S. Kazerounian, F. Gomez, and J. Schmidhuber, NIPS, 2013. Introduces the Permuted MNIST task (see the sketch below).
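The Permuted MNIST benchmark used in these papers is easy to reproduce: each task applies one fixed random permutation to the pixels of every MNIST image, so the tasks share statistics but not input structure. A minimal sketch (function and variable names are ours, not from any of the papers above):

```python
import numpy as np

def make_permuted_mnist_tasks(images, n_tasks, seed=0):
    """Build Permuted MNIST tasks: each task applies one fixed random
    pixel permutation to every flattened image; labels stay the same.

    images: array of shape (n_samples, 784), e.g. flattened MNIST digits.
    Returns a list of n_tasks arrays with the same shape as `images`.
    """
    rng = np.random.default_rng(seed)
    tasks = []
    for _ in range(n_tasks):
        perm = rng.permutation(images.shape[1])  # fixed permutation for this task
        tasks.append(images[:, perm])
    return tasks
```

Training a fixed-capacity network on these tasks in sequence, then re-testing on the first permutation, is the standard way to expose catastrophic forgetting.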
Slides: 03_benchmarks.pdf (2MB)
CL scenarios
Three scenarios for continual learning, by G. M. van de Ven and A. S. Tolias, Continual Learning workshop at NeurIPS, 2018. Defines task-, domain-, and class-incremental learning (see the sketch after this list).
Continuous Learning in Single-Incremental-Task Scenarios, by D. Maltoni and V. Lomonaco, Neural Networks, vol. 116, pp. 56–73, 2019. Introduces the New Classes (NC), New Instances (NI), and New Instances and Classes (NIC) update types, together with the Single-Incremental-Task (SIT), Multi-Task (MT), and Multi-Incremental-Task (MIT) scenarios.
Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams, by M. De Lange and T. Tuytelaars, ICCV, 2021. Introduces the data-incremental scenario and compares it with other CL scenarios.
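To make the scenario distinctions above concrete, here is how a class-incremental stream can be carved out of any labeled dataset: each experience introduces a disjoint group of classes. A minimal sketch (names are ours; real benchmarks also shuffle class order and fix train/test splits):

```python
import numpy as np

def class_incremental_split(X, y, classes_per_exp):
    """Split (X, y) into a class-incremental stream: each experience
    contains only the samples of a disjoint group of classes."""
    classes = np.unique(y)
    experiences = []
    for start in range(0, len(classes), classes_per_exp):
        group = classes[start:start + classes_per_exp]
        mask = np.isin(y, group)
        experiences.append((X[mask], y[mask]))
    return experiences
```

In the task-incremental variant, the group index would also be given to the model as a task label at train and test time; in domain-incremental streams the label space stays fixed while the input distribution changes.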
Survey presenting CL scenarios
Continual Learning for Robotics: Definition, Framework, Learning Strategies, Opportunities and Challenges, by Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat and Natalia Díaz-Rodríguez. Information Fusion, pp. 52–68, 2020. See Section 3 in particular.
CL benchmarks
CORe50: a New Dataset and Benchmark for Continuous Object Recognition, by V. Lomonaco and D. Maltoni, Proceedings of the 1st Annual Conference on Robot Learning, vol. 78, pp. 17–26, 2017.
OpenLORIS-Object: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning, by Q. She et al. ICRA, 2020.
Incremental Object Learning From Contiguous Views, by S. Stojanov et al., CVPR, 2019. Introduces the CRIB benchmark.
Stream-51: Streaming Classification and Novelty Detection From Videos, by R. Roady, T. L. Hayes, H. Vaidya, and C. Kanan, CVPR Workshops, 2020.
Slides: 04_evaluation.pdf (2MB)
Efficient Lifelong Learning with A-GEM, by A. Chaudhry, M. Ranzato, M. Rohrbach, and M. Elhoseiny, ICLR, 2019. Evaluation protocol with "split by experiences".
Gradient Episodic Memory for Continual Learning, by D. Lopez-Paz and M. Ranzato, NIPS, 2017. Popular formalization of the ACC, BWT, and FWT metrics (computed in the sketch after this list).
CLEVA-Compass: A Continual Learning EValuation Assessment Compass to Promote Research Transparency and Comparability, by M. Mundt, S. Lang, Q. Delfosse, and K. Kersting, arXiv, 2021.
Don’t forget, there is more than forgetting: new metrics for Continual Learning, by N. Díaz-Rodríguez, V. Lomonaco, D. Filliat, and D. Maltoni, arXiv, 2018. Defines additional CL metrics.
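The GEM metrics referenced above are straightforward to compute once the full accuracy matrix is logged. A minimal sketch following Lopez-Paz and Ranzato (2017), where R[i, j] is the test accuracy on task j after training on task i and b[j] is the accuracy of a randomly initialized model on task j:

```python
import numpy as np

def gem_metrics(R, b):
    """ACC, BWT, FWT as formalized by Lopez-Paz & Ranzato (2017).

    R: (T, T) matrix, R[i, j] = test accuracy on task j after training on task i.
    b: length-T vector of test accuracies of a randomly initialized model.
    """
    T = R.shape[0]
    acc = R[-1, :].mean()                                        # average final accuracy
    bwt = np.mean([R[-1, i] - R[i, i] for i in range(T - 1)])    # backward transfer
    fwt = np.mean([R[i - 1, i] - b[i] for i in range(1, T)])     # forward transfer
    return acc, bwt, fwt
```

Negative BWT quantifies forgetting; positive FWT means that training on earlier tasks improved performance on later, not-yet-seen tasks relative to a random model.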
Slides: 05_methodologies_part1.pdf (2MB)
Replay
GDumb: A Simple Approach that Questions Our Progress in Continual Learning, by A. Prabhu, P. H. S. Torr, and P. K. Dokania, ECCV, 2020.
Latent replay
Latent Replay for Real-Time Continual Learning, by Lorenzo Pellegrini, Gabriele Graffieti, Vincenzo Lomonaco, and Davide Maltoni, IROS, 2020.
Generative replay
Continual Learning with Deep Generative Replay, by H. Shin, J. K. Lee, J. Kim, and J. Kim, NeurIPS, 2017.
Brain-inspired replay for continual learning with artificial neural networks, by G. M. van de Ven, H. T. Siegelmann, and A. S. Tolias, Nature Communications, 2020.
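All of the replay strategies above maintain some buffer of past data, whether raw samples, latent activations, or generated ones. A minimal reservoir-sampling buffer, our own simplification rather than the implementation from any of these papers:

```python
import random

class ReservoirBuffer:
    """Fixed-size replay buffer filled by reservoir sampling, so every
    sample seen so far is stored with equal probability."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, sample):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = random.randrange(self.n_seen)  # keep with prob capacity/n_seen
            if j < self.capacity:
                self.data[j] = sample

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))
```

During training, each mini-batch from the current experience is typically concatenated with a batch drawn from the buffer; latent replay stores intermediate activations instead of raw inputs.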
Slides: 06_methodologies_part2.pdf (2MB)
L1, L2, Dropout
An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks, by Goodfellow et al., arXiv, 2015 (see the sketch below).
Understanding the Role of Training Regimes in Continual Learning, by Mirzadeh et al., NeurIPS, 2020.
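A concrete form of the L2 regularization studied above is a quadratic pull toward the weights learned on previous tasks, the plain-L2 precursor of importance-weighted methods such as EWC. A minimal PyTorch sketch (our own illustration; the strength `lam` is a hypothetical hyperparameter):

```python
import torch

def l2_to_old_weights(model, old_params, lam=1e-2):
    """Quadratic penalty pulling the current weights toward a snapshot
    taken after training on the previous task finished."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        penalty = penalty + ((p - old_params[name]) ** 2).sum()
    return lam * penalty

# After task t:    old_params = {n: p.detach().clone()
#                                for n, p in model.named_parameters()}
# During task t+1: loss = task_loss + l2_to_old_weights(model, old_params)
```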
Regularization strategies
Architectural strategies
Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches, by Lomonaco et al., CLVision Workshop at CVPR, 2020. Introduces CWR*.
PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning, by A. Mallya and S. Lazebnik, CVPR, 2018 (see the pruning sketch below).
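PackNet's core mechanism is magnitude pruning followed by freezing: after each task, only the largest still-free weights are kept for that task and locked, while the pruned ones remain available for future tasks. A minimal single-tensor sketch (our own simplification; the real method also retrains after pruning and applies per-task masks at inference):

```python
import torch

def packnet_prune(weight, keep_ratio, frozen_mask):
    """Claim the largest-magnitude fraction of the still-free weights for
    the current task; zero out the rest so later tasks can reuse them."""
    free = ~frozen_mask                          # weights not owned by past tasks
    scores = weight.abs() * free                 # rank only the free weights
    k = max(1, int(keep_ratio * free.sum().item()))
    threshold = scores.flatten().topk(k).values.min()
    task_mask = (scores >= threshold) & free     # weights claimed by this task
    weight.data[free & ~task_mask] = 0.0         # prune the unclaimed free weights
    return frozen_mask | task_mask               # updated frozen set
```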
Slides: 07_methodologies_part3.pdf (3MB)
Hybrid strategies
Applications
Continual Learning at the Edge: Real-Time Training on Smartphone Devices, by L. Pellegrini et al., ESANN, 2021.
Slides: 08_frontiers.pdf (4MB)
Embracing Change: Continual Learning in Deep Neural Networks, by Hadsell et al., Trends in Cognitive Sciences, 2020. Covers continual meta-learning and meta-continual learning.
Towards Continual Reinforcement Learning: A Review and Perspectives, by Khetarpal et al., arXiv, 2020.
Distributed Continual Learning
Ex-Model: Continual Learning from a Stream of Trained Models, by Carta et al., arXiv, 2021.
Continual Sequence Learning
Continual learning for recurrent neural networks: An empirical evaluation, by Cossu et al., Neural Networks, vol. 143, pp. 607–627, 2021.
Continual Learning with Echo State Networks, by Cossu et al., ESANN, 2021.
Avalanche: an End-to-End Library for Continual Learning, the PyTorch-based software library used for the coding sessions of this course (see the sketch below).
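For orientation before the coding sessions, a minimal Avalanche training loop might look as follows. This is a sketch against the Avalanche API at the time of writing; module paths such as `avalanche.training.Naive` have moved between releases, so check them against the installed version:

```python
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training import Naive

benchmark = SplitMNIST(n_experiences=5)      # class-incremental MNIST stream
model = SimpleMLP(num_classes=10)
strategy = Naive(
    model,
    torch.optim.SGD(model.parameters(), lr=1e-3),
    torch.nn.CrossEntropyLoss(),
    train_mb_size=32,
    train_epochs=1,
)

for experience in benchmark.train_stream:    # train on one experience at a time
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)     # evaluate on the whole test stream
```

Swapping `Naive` for one of Avalanche's other strategies (e.g. replay- or regularization-based ones) keeps the same loop.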