Continual Learning Course
Course Materials

All the course material in one page!


Introduction & Motivation

Keywords: Catastrophic Forgetting; Catastrophic Interference; Stability; Plasticity; Rehearsal.

Classic readings on catastrophic forgetting:

  • Catastrophic Forgetting, Rehearsal and Pseudorehearsal, by Anthony Robins. Connection Science, 123--146, 1995.
  • Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks, by Robert French. In Proceedings of the 13th Annual Cognitive Science Society Conference, 173--178, 1991. [sparsity]

Check out the additional material for popular reviews and surveys on continual learning.

  • Lifelong Machine Learning, Second Edition, by Zhiyuan Chen and Bing Liu. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2018.

Understanding Catastrophic Forgetting

The classic references on catastrophic forgetting are provided above.

  • Does Continual Learning = Catastrophic Forgetting?, by A. Thai, S. Stojanov, I. Rehg, and J. M. Rehg, arXiv, 2021.
  • An Empirical Study of Example Forgetting during Deep Neural Network Learning, by M. Toneva, A. Sordoni, R. T. des Combes, A. Trischler, Y. Bengio, and G. J. Gordon, ICLR, 2019.
  • Compete to Compute, by R. K. Srivastava, J. Masci, S. Kazerounian, F. Gomez, and J. Schmidhuber, NIPS, 2013. [Permuted MNIST task]

Scenarios & Benchmarks

CL scenarios:

  • Three scenarios for continual learning, by G. M. van de Ven and A. S. Tolias, Continual Learning workshop at NeurIPS, 2018. [task-, domain-, and class-incremental learning]
  • Continuous Learning in Single-Incremental-Task Scenarios, by D. Maltoni and V. Lomonaco, Neural Networks, vol. 116, pp. 56–73, 2019. [New Classes (NC), New Instances (NI), New Instances and Classes (NIC); Single-Incremental-Task (SIT), Multi-Task (MT), Multi-Incremental-Task (MIT)]
  • Task-Free Continual Learning, by R. Aljundi, K. Kelchtermans, and T. Tuytelaars, CVPR, 2019.
  • Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams, by M. De Lange and T. Tuytelaars, ICCV, 2021. [data-incremental setting and comparisons with other CL scenarios]

Survey presenting CL scenarios:

  • Continual Learning for Robotics: Definition, Framework, Learning Strategies, Opportunities and Challenges, by Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat and Natalia Díaz-Rodríguez. Information Fusion, 52--68, 2020. [Section 3, in particular]

CL benchmarks:

  • CORe50: a New Dataset and Benchmark for Continuous Object Recognition, by V. Lomonaco and D. Maltoni, Proceedings of the 1st Annual Conference on Robot Learning, vol. 78, pp. 17–26, 2017.
  • OpenLORIS-Object: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning, by Q. She et al., ICRA, 2020.
  • Incremental Object Learning From Contiguous Views, by S. Stojanov et al., CVPR, 2019. [CRIB benchmark]
  • Stream-51: Streaming Classification and Novelty Detection From Videos, by R. Roady, T. L. Hayes, H. Vaidya, and C. Kanan, CVPR, 2019.

Evaluation & Metrics

  • Efficient Lifelong Learning with A-GEM, by A. Chaudhry, M. Ranzato, M. Rohrbach, and M. Elhoseiny, ICLR, 2019. [evaluation protocol with "split by experiences"]
  • Gradient Episodic Memory for Continual Learning, by D. Lopez-Paz and M. Ranzato, NIPS, 2017. [popular formalization of ACC, BWT, FWT]
  • CLEVA-Compass: A Continual Learning EValuation Assessment Compass to Promote Research Transparency and Comparability, by M. Mundt, S. Lang, Q. Delfosse, and K. Kersting, arXiv, 2021.
  • Don't forget, there is more than forgetting: new metrics for Continual Learning, by N. Díaz-Rodríguez, V. Lomonaco, D. Filliat, and D. Maltoni, arXiv, 2018. [definition of additional metrics]

Methodologies [part 1]

  • GDumb: A Simple Approach that Questions Our Progress in Continual Learning, by A. Prabhu, P. H. S. Torr, and P. K. Dokania, ECCV, 2020.

Replay:

  • Online Continual Learning with Maximal Interfered Retrieval, by R. Aljundi et al., NeurIPS, 2019.

Latent replay:

  • Latent Replay for Real-Time Continual Learning, by Lorenzo Pellegrini, Gabriele Graffieti, Vincenzo Lomonaco, and Davide Maltoni, IROS, 2020.

Generative replay:

  • Continual Learning with Deep Generative Replay, by H. Shin, J. K. Lee, J. Kim, and J. Kim, NeurIPS, 2017.
  • Brain-inspired replay for continual learning with artificial neural networks, by G. M. van de Ven, H. T. Siegelmann, and A. S. Tolias, Nature Communications, 2020.

Methodologies [part 2]

L1, L2, Dropout:

  • An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks, by Goodfellow et al., 2015.
  • Understanding the Role of Training Regimes in Continual Learning, by Mirzadeh et al., NeurIPS, 2020.

Regularization strategies:

  • Learning without Forgetting, by Li et al., TPAMI, 2017.
  • Overcoming catastrophic forgetting in neural networks, by Kirkpatrick et al., PNAS, 2017.
  • Continual Learning Through Synaptic Intelligence, by Zenke et al., 2017.
  • Continual learning with hypernetworks, by von Oswald et al., ICLR, 2020.

Architectural strategies:

  • Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches, by Lomonaco et al., CLVision Workshop at CVPR, 2020. [CWR*]
  • Progressive Neural Networks, by Rusu et al., arXiv, 2016.
  • PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning, by Mallya et al., CVPR, 2018.
  • Overcoming catastrophic forgetting with hard attention to the task, by Serra et al., ICML, 2018.
  • Supermasks in Superposition, by Wortsman et al., NeurIPS, 2020.

Methodologies [part 3] & Applications

Hybrid strategies:

  • Gradient Episodic Memory for Continual Learning, by D. Lopez-Paz and M. Ranzato, NIPS, 2017. [GEM]
  • iCaRL: Incremental Classifier and Representation Learning, by Rebuffi et al., CVPR, 2017.
  • Progress & Compress: A scalable framework for continual learning, by Schwarz et al., ICML, 2018.
  • Latent Replay for Real-Time Continual Learning, by L. Pellegrini et al., IROS, 2020. [AR1*]

Applications:

  • Continual Learning at the Edge: Real-Time Training on Smartphone Devices, by L. Pellegrini et al., ESANN, 2021.
  • Continual Learning in Practice, by T. Diethe et al., Continual Learning Workshop at NeurIPS, 2018.
  • Startups / Companies: CogitAI, Neurala, Gantry.

Software:

  • Tools / Libraries: Avalanche, Continuum, Sequoia, CL-Gym.

Frontiers in Continual Learning

  • Embracing Change: Continual Learning in Deep Neural Networks, by Hadsell et al., Trends in Cognitive Sciences, 2020. [continual meta-learning vs. meta continual learning]
  • Towards Continual Reinforcement Learning: A Review and Perspectives, by Khetarpal et al., arXiv, 2020.
  • Continual Unsupervised Representation Learning, by D. Rao et al., NeurIPS, 2019.

Distributed continual learning:

  • Ex-Model: Continual Learning from a Stream of Trained Models, by Carta et al., arXiv, 2021.

Continual sequence learning:

  • Continual learning for recurrent neural networks: An empirical evaluation, by Cossu et al., Neural Networks, vol. 143, pp. 607–627, 2021.
  • Continual Learning with Echo State Networks, by Cossu et al., ESANN, 2021.

Invited Lectures

  • Avalanche: an End-to-End Library for Continual Learning, the software library based on PyTorch used for the coding sessions of this course.
  • ContinualAI Colab notebooks, coding continual learning from scratch in notebooks.
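For quick reference alongside the Evaluation & Metrics readings: Gradient Episodic Memory (Lopez-Paz and Ranzato, 2017) formalizes average accuracy (ACC), backward transfer (BWT), and forward transfer (FWT) from a matrix R, where R_{i,j} is the test accuracy on task t_j after training on the first i of T tasks, and \bar{b}_j is the accuracy of a randomly initialized model on task t_j:

```latex
\mathrm{ACC} = \frac{1}{T}\sum_{j=1}^{T} R_{T,j}, \qquad
\mathrm{BWT} = \frac{1}{T-1}\sum_{j=1}^{T-1} \bigl(R_{T,j} - R_{j,j}\bigr), \qquad
\mathrm{FWT} = \frac{1}{T-1}\sum_{j=2}^{T} \bigl(R_{j-1,j} - \bar{b}_j\bigr)
```

Negative BWT quantifies forgetting; positive BWT means later tasks improved performance on earlier ones.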
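To make the replay readings concrete, here is a minimal, framework-agnostic sketch of a rehearsal memory with reservoir sampling, a common choice in online replay methods. This is plain Python rather than Avalanche's or PyTorch's API, and all names are illustrative:

```python
import random

class ReservoirBuffer:
    """Fixed-size replay memory filled with reservoir sampling, so every
    example seen so far has the same probability of being stored."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []        # stored (x, y) pairs
        self.seen = 0           # total examples observed in the stream
        self.rng = random.Random(seed)

    def update(self, x, y):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((x, y))
        else:
            # Replace a stored item with probability capacity / seen.
            idx = self.rng.randrange(self.seen)
            if idx < self.capacity:
                self.buffer[idx] = (x, y)

    def sample(self, batch_size):
        # Draw up to batch_size stored examples uniformly without replacement.
        k = min(batch_size, len(self.buffer))
        return self.rng.sample(self.buffer, k)
```

During training, each mini-batch from the current experience is concatenated with a mini-batch drawn from the buffer before the gradient step, which approximates joint training on all data seen so far.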
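The regularization strategies listed above (EWC and its relatives) share one idea: add a quadratic penalty that anchors weights deemed important for previous tasks. A minimal sketch of that penalty, in plain Python with lists standing in for parameter tensors (names are illustrative):

```python
def ewc_penalty(params, old_params, fisher, lam):
    """EWC-style quadratic regularizer: lam/2 * sum_i F_i * (p_i - p*_i)^2.

    params:     current parameter values (list of floats)
    old_params: parameters saved after the previous task (the anchors p*)
    fisher:     diagonal Fisher information estimates, one per parameter,
                measuring how important each weight was for the old task
    lam:        penalty strength (trades off plasticity vs. stability)
    """
    return 0.5 * lam * sum(
        f * (p - p_old) ** 2
        for p, p_old, f in zip(params, old_params, fisher)
    )

# The total loss on the new task becomes:
#   loss = task_loss + ewc_penalty(params, old_params, fisher, lam)
# so weights that were important for the old task (large F_i) are held
# near their old values, while unimportant ones stay free to move.
```

Synaptic Intelligence follows the same template but replaces the Fisher estimates with an importance measure accumulated online along the training trajectory.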
Slides

  • 01_intro_and_motivation.pdf (PDF, 4MB)
  • 02_forgetting.pdf (PDF, 2MB)
  • 03_benchmarks.pdf (PDF, 2MB)
  • 04_evaluation.pdf (PDF, 2MB)
  • 05_methodologies_part1.pdf (PDF, 2MB)
  • 06_methodologies_part2.pdf (PDF, 2MB)
  • 07_methodologies_part3.pdf (PDF, 3MB)
  • 08_frontiers.pdf (PDF, 4MB)