# Course Materials

### [Introduction & Motivation](/lectures/introduction.md)

{% file src="/files/NEzsFHLeT0eG7VKTKq7D" %}
Slides PDF
{% endfile %}

**Classic readings on catastrophic forgetting**

[Catastrophic Forgetting; Catastrophic Interference; Stability; Plasticity; Rehearsal.](http://www.tandfonline.com/doi/abs/10.1080/09540099550039318) by Anthony Robins. *Connection Science*, 123--146, 1995.

[Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks](https://www.aaai.org/Papers/Symposia/Spring/1993/SS-93-06/SS93-06-007.pdf) by Robert French. *In Proceedings of the 13th Annual Cognitive Science Society Conference*, 173--178, 1991. \[sparsity]

**Check out additional material for popular reviews and surveys on continual learning.**

[Lifelong Machine Learning](https://www.cs.uic.edu/~liub/lifelong-machine-learning.html), Second Edition, by Zhiyuan Chen and Bing Liu. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2018.

### [*Understanding Catastrophic Forgetting*](/lectures/understanding-catastrophic-forgetting.md)

{% file src="/files/33N6wKWTKh0HZUvPrgLw" %}
Slides PDF
{% endfile %}

See the classic readings on catastrophic forgetting provided above.

[Does Continual Learning = Catastrophic Forgetting?](http://arxiv.org/abs/2101.07295), by A. Thai, S. Stojanov, I. Rehg, and J. M. Rehg, arXiv, 2021.

[An Empirical Study of Example Forgetting during Deep Neural Network Learning](https://openreview.net/forum?id=BJlxm30cKm), by M. Toneva, A. Sordoni, R. T. des Combes, A. Trischler, Y. Bengio, and G. J. Gordon, ICLR, 2019.

[Compete to Compute](http://papers.nips.cc/paper/5059-compete-to-compute.pdf), by R. K. Srivastava, J. Masci, S. Kazerounian, F. Gomez, and J. Schmidhuber, NIPS, 2013 (**Permuted MNIST task**).
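
The Permuted MNIST task referenced above turns a single dataset into a sequence of tasks by applying one fixed random pixel permutation per task. A minimal sketch, assuming flattened grayscale images (random arrays stand in for real MNIST below; `make_permuted_tasks` is an illustrative helper, not code from the paper):

```python
import numpy as np

def make_permuted_tasks(images, labels, n_tasks, seed=0):
    """Permuted-MNIST-style stream: task t applies one fixed random
    pixel permutation to every (flattened) image."""
    rng = np.random.default_rng(seed)
    flat = images.reshape(len(images), -1)  # (N, 784) for 28x28 MNIST
    tasks = []
    for t in range(n_tasks):
        perm = (np.arange(flat.shape[1]) if t == 0   # task 0: identity
                else rng.permutation(flat.shape[1]))
        tasks.append((flat[:, perm], labels))
    return tasks

# Placeholder data standing in for MNIST (28x28 grayscale, 10 classes).
x = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
tasks = make_permuted_tasks(x, y, n_tasks=5)
```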

### [*Scenarios & Benchmarks*](/lectures/scenarios-and-benchamarks.md)

{% file src="/files/UWLfC2IK5b5xk5YVpjo8" %}
Slides PDF
{% endfile %}

**CL scenarios**

[Three scenarios for continual learning](http://arxiv.org/abs/1904.07734), by G. M. van de Ven and A. S. Tolias, Continual Learning workshop at NeurIPS, 2018. **task/domain/class incremental learning**

[Continuous Learning in Single-Incremental-Task Scenarios](https://www.sciencedirect.com/science/article/pii/S0893608019300838), by D. Maltoni and V. Lomonaco, Neural Networks, vol. 116, pp. 56–73, 2019. **New Classes (NC), New Instances (NI), New Instances and Classes (NIC) + Single Incremental (SIT) /Multi (MT) /Multi Incremental (MIT) Task**

[Task-Free Continual Learning](https://openaccess.thecvf.com/content_CVPR_2019/papers/Aljundi_Task-Free_Continual_Learning_CVPR_2019_paper.pdf), by R. Aljundi, K. Kelchtermans, and T. Tuytelaars, CVPR, 2019.

[Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams](https://arxiv.org/abs/2009.00919), by M. De Lange and T. Tuytelaars, ICCV, 2021. **Data-incremental and comparisons with other CL scenarios**
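
To make the taxonomies above concrete, here is a toy sketch of the three van de Ven & Tolias scenarios on a Split MNIST stream (the helper and the class split are illustrative, not taken from any of the papers):

```python
# Split MNIST: 5 experiences of 2 digit classes each.
TASK_CLASSES = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

def test_time_targets(scenario, task_id):
    """What the model must predict after learning experience `task_id`,
    following van de Ven & Tolias (2018)."""
    if scenario == "task":    # task identity is given at test time
        return {"task_id": task_id, "outputs": TASK_CLASSES[task_id]}
    if scenario == "domain":  # identity hidden; shared within-pair output
        return {"task_id": None, "outputs": (0, 1)}
    if scenario == "class":   # identity hidden; all classes seen so far
        seen = [c for pair in TASK_CLASSES[:task_id + 1] for c in pair]
        return {"task_id": None, "outputs": tuple(seen)}
    raise ValueError(scenario)

print(test_time_targets("class", task_id=2))  # digits 0..5 after 3 experiences
```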

**Survey presenting CL scenarios**

[Continual Learning for Robotics: Definition, Framework, Learning Strategies, Opportunities and Challenges](http://www.sciencedirect.com/science/article/pii/S1566253519307377) by Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat and Natalia Díaz-Rodríguez. *Information Fusion*, 52--68, 2020. ***Section 3, in particular.***

**CL benchmarks**

[CORe50: a New Dataset and Benchmark for Continuous Object Recognition](http://proceedings.mlr.press/v78/lomonaco17a.html), by V. Lomonaco and D. Maltoni, Proceedings of the 1st Annual Conference on Robot Learning, vol. 78, pp. 17–26, 2017.

[OpenLORIS-Object: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning](http://arxiv.org/abs/1911.06487), by Q. She et al., ICRA, 2020.

[Incremental Object Learning From Contiguous Views](https://openaccess.thecvf.com/content_CVPR_2019/html/Stojanov_Incremental_Object_Learning_From_Contiguous_Views_CVPR_2019_paper.htm), by S. Stojanov et al., CVPR, 2019. **CRIB benchmark**

[Stream-51: Streaming Classification and Novelty Detection From Videos](https://openaccess.thecvf.com/content_CVPRW_2020/html/w15/Roady_Stream-51_Streaming_Classification_and_Novelty_Detection_From_Videos_CVPRW_2020_paper.html), by R. Roady, T. L. Hayes, H. Vaidya, and C. Kanan, CVPR Workshops, 2020.

### [*Evaluation & Metrics*](/lectures/evaluation-protocols-and-metrics.md)

{% file src="/files/8bg4dZoxB0ny17ac5trk" %}
Slides PDF
{% endfile %}

[Efficient Lifelong Learning with A-GEM](http://arxiv.org/abs/1812.00420), by A. Chaudhry, M. Ranzato, M. Rohrbach, and M. Elhoseiny, ICLR, 2019. **Evaluation protocol with "split by experiences"**.

[Gradient Episodic Memory for Continual Learning](https://arxiv.org/abs/1706.08840), by D. Lopez-Paz and M. Ranzato, NIPS, 2017. **popular formalization of ACC, BWT, FWT; see the sketch at the end of this section.**

[CLEVA-Compass: A Continual Learning EValuation Assessment Compass to Promote Research Transparency and Comparability](http://arxiv.org/abs/2110.03331), by M. Mundt, S. Lang, Q. Delfosse, and K. Kersting, arXiv, 2021.

[Don’t forget, there is more than forgetting: new metrics for Continual Learning](http://arxiv.org/abs/1810.13166), by N. Díaz-Rodríguez, V. Lomonaco, D. Filliat, and D. Maltoni, arXiv, 2018. **definition of additional metrics**
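
As a worked example of the GEM formalization cited above: `R[i, j]` is the test accuracy on task `j` after training through task `i`, and `b[j]` is the accuracy of a randomly initialized model on task `j`. A minimal sketch with made-up numbers:

```python
import numpy as np

def cl_metrics(R, b):
    """ACC, BWT, FWT as formalized by Lopez-Paz & Ranzato (2017)."""
    T = R.shape[0]
    acc = R[-1].mean()                                         # average final accuracy
    bwt = np.mean([R[-1, j] - R[j, j] for j in range(T - 1)])  # backward transfer
    fwt = np.mean([R[j - 1, j] - b[j] for j in range(1, T)])   # forward transfer
    return acc, bwt, fwt

# Toy 3-task accuracy matrix; negative BWT indicates forgetting.
R = np.array([[0.95, 0.40, 0.35],
              [0.80, 0.93, 0.38],
              [0.75, 0.85, 0.94]])
b = np.array([0.10, 0.10, 0.10])
print(cl_metrics(R, b))
```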

### [*Methodologies \[part 1\]*](/lectures/strategies.md)

{% file src="/files/gE1OdqDVQJlmboAlcF0n" %}
Slides PDF
{% endfile %}

**Replay**

[GDumb: A Simple Approach that Questions Our Progress in Continual Learning](https://www.robots.ox.ac.uk/~tvg/publications/2020/gdumb.pdf), by A. Prabhu, P. H. S. Torr, and P. K. Dokania, ECCV, 2020.

[Online Continual Learning with Maximal Interfered Retrieval](https://proceedings.neurips.cc/paper/2019/hash/15825aee15eb335cc13f9b559f166ee8-Abstract.html), by R. Aljundi et al., NeurIPS, 2019.
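
The replay methods above share one mechanic: keep a small memory of past examples and mix them into each new mini-batch. A minimal sketch using reservoir sampling (a generic buffer, not the exact algorithm of either paper):

```python
import random

class ReplayBuffer:
    """Fixed-size memory filled by reservoir sampling, so every example
    seen so far is stored with equal probability."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # stored (x, y) pairs
        self.n_seen = 0

    def add(self, example):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            idx = random.randrange(self.n_seen)
            if idx < self.capacity:
                self.data[idx] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

# Usage: store everything seen, then mix a replayed batch into training.
buf = ReplayBuffer(capacity=200)
for step in range(1000):
    buf.add((step, step % 10))   # placeholder (x, y) pair
replayed = buf.sample(32)
```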

**Latent replay**

[Latent Replay for Real-Time Continual Learning](https://ras.papercept.net/images/temp/IROS/files/0596.pdf), by Lorenzo Pellegrini, Gabriele Graffieti, Vincenzo Lomonaco and Davide Maltoni, IROS, 2020.

**Generative replay**

[Continual Learning with Deep Generative Replay](http://papers.nips.cc/paper/6892-continual-learning-with-deep-generative-replay.pdf), by H. Shin, J. K. Lee, J. Kim, and J. Kim, NeurIPS, 2017.

[Brain-inspired replay for continual learning with artificial neural networks](https://www.nature.com/articles/s41467-020-17866-2), by G. M. van de Ven, H. T. Siegelmann, and A. S. Tolias, Nature Communications, 2020.
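
Schematically, generative replay trains a generator alongside the classifier and, on each new task, mixes real data with generated samples labeled by the previous classifier. The sketch below only illustrates that loop from Shin et al. with toy stand-ins (a memorizing "generator" and a nearest-example "classifier" over scalars); it is not their implementation:

```python
import copy
import random

class MemorizingGenerator:
    """Toy stand-in for a generative model: stores inputs, resamples them."""
    def fit(self, xs):
        self.xs = list(xs)
    def sample(self, n):
        return [random.choice(self.xs) for _ in range(n)]

class NearestExampleSolver:
    """Toy stand-in classifier over scalar inputs."""
    def fit(self, data):
        self.data = list(data)
    def predict(self, x):
        return min(self.data, key=lambda ex: abs(ex[0] - x))[1]

def generative_replay(tasks, solver, generator, ratio=1.0):
    """Before each new task, snapshot solver and generator; train both
    on real data mixed with generated, self-labeled samples."""
    old_solver = old_generator = None
    for task_data in tasks:
        mixed = list(task_data)
        if old_generator is not None:
            fake_x = old_generator.sample(int(len(task_data) * ratio))
            mixed += [(x, old_solver.predict(x)) for x in fake_x]
        solver.fit(mixed)
        generator.fit([x for x, _ in mixed])
        old_solver = copy.deepcopy(solver)
        old_generator = copy.deepcopy(generator)
    return solver

tasks = [[(0.1, 0), (0.2, 0)], [(0.9, 1), (1.0, 1)]]
solver = generative_replay(tasks, NearestExampleSolver(), MemorizingGenerator())
print(solver.predict(0.15))  # still 0: replayed samples preserved task 1
```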

### [*Methodologies \[part 2\]*](/lectures/methodologies-part-2.md)

{% file src="/files/6NGVTUgESwOSN0NQIA7n" %}
Slides PDF
{% endfile %}

**L1, L2, Dropout**

[An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks](https://arxiv.org/abs/1312.6211), by Goodfellow et al., arXiv, 2015.

[Understanding the Role of Training Regimes in Continual Learning](https://arxiv.org/pdf/2006.06958.pdf), by Mirzadeh et al., NeurIPS, 2020.

**Regularization strategies**

[Learning without Forgetting](https://arxiv.org/pdf/1606.09282.pdf), by Z. Li and D. Hoiem, TPAMI, 2017.

[Overcoming catastrophic forgetting in neural networks](https://arxiv.org/pdf/1612.00796.pdf), by Kirkpatrick et al., PNAS, 2017. **EWC**

[Continual Learning Through Synaptic Intelligence](https://arxiv.org/pdf/1703.04200.pdf), by Zenke et al., ICML, 2017. **SI**

[Continual learning with hypernetworks](https://arxiv.org/abs/1906.00695), by von Oswald et al., ICLR, 2020.
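
EWC and SI above share one mechanic: a quadratic penalty that discourages moving parameters deemed important for earlier tasks. A minimal PyTorch sketch, assuming made-up importance weights (EWC estimates them from the Fisher diagonal, SI from a path integral):

```python
import torch

def quadratic_penalty(model, anchors, importance, lam=1.0):
    """EWC/SI-style regularizer: (lam / 2) * sum_i Omega_i * (theta_i - theta*_i)^2,
    where theta* are the parameters saved after the previous task."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - anchors[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Toy usage on a linear model; uniform importance is a placeholder.
model = torch.nn.Linear(4, 2)
anchors = {n: p.detach().clone() for n, p in model.named_parameters()}
omega = {n: torch.ones_like(p) for n, p in model.named_parameters()}

x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
loss = torch.nn.functional.cross_entropy(model(x), y)
loss = loss + quadratic_penalty(model, anchors, omega, lam=0.4)
loss.backward()
```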

**Architectural strategies**

[Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches](https://arxiv.org/pdf/1907.03799.pdf), by Lomonaco et al., CLVision Workshop at CVPR, 2020. **CWR\***

[Progressive Neural Networks](https://arxiv.org/abs/1606.04671), by Rusu et al., arXiv, 2016.

[PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning](https://arxiv.org/abs/1711.05769), by Mallya et al., CVPR, 2018.

[Overcoming catastrophic forgetting with hard attention to the task](https://arxiv.org/abs/1801.01423), by Serra et al., ICML, 2018.

[Supermasks in Superposition](https://arxiv.org/pdf/2006.14769.pdf), by Wortsman et al., NeurIPS, 2020.
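
A recurring idea in the architectural strategies above (PackNet, HAT, Supermasks) is to share one set of weights and select a per-task binary mask at inference. A toy sketch with random frozen masks, assuming task identity is known at test time (the real methods learn or prune their masks):

```python
import torch

class MaskedLinear(torch.nn.Module):
    """One shared weight matrix plus a fixed binary mask per task."""
    def __init__(self, in_features, out_features, n_tasks, keep=0.5):
        super().__init__()
        self.weight = torch.nn.Parameter(
            torch.randn(out_features, in_features) * 0.1)
        # Random frozen masks stand in for learned/pruned ones.
        self.register_buffer(
            "masks",
            (torch.rand(n_tasks, out_features, in_features) < keep).float())

    def forward(self, x, task_id):
        return x @ (self.weight * self.masks[task_id]).t()

layer = MaskedLinear(8, 4, n_tasks=3)
out = layer(torch.randn(2, 8), task_id=1)  # the task id selects the subnetwork
```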

### [*Methodologies \[part 3\] & Applications*](/lectures/methodologies-part-3-applications-and-tools.md)

{% file src="/files/mFw5Y8VwWrj8BCz6uneH" %}
Slides PDF
{% endfile %}

**Hybrid strategies**

[Gradient Episodic Memory for Continual Learning](https://arxiv.org/abs/1706.08840), by D. Lopez-Paz and M. Ranzato, NIPS, 2017. **GEM**

[iCaRL: Incremental Classifier and Representation Learning](https://arxiv.org/abs/1611.07725), by Rebuffi et al., CVPR, 2017.

[Progress & Compress: A scalable framework for continual learning](https://arxiv.org/abs/1805.06370), by Schwarz et al., ICML, 2018.

[Latent Replay for Real-Time Continual Learning](http://ras.papercept.net/images/temp/IROS/files/0596.pdf), by L. Pellegrini et al., IROS, 2020. **AR1\***

**Applications**

[Continual Learning at the Edge: Real-Time Training on Smartphone Devices](https://arxiv.org/abs/2105.13127), by L. Pellegrini et al., ESANN, 2021.

[Continual Learning in Practice](https://arxiv.org/abs/1903.05202v2) by T. Diethe et al., Continual Learning Workshop at NeurIPS, 2018.

**Startups / Companies:** [CogitAI](https://www.cogitai.com/), [Neurala](https://www.neurala.com/), [Gantry](https://gantry.io)

**Tools / Libraries:** [Avalanche](https://avalanche.continualai.org/), [Continuum](https://github.com/Continvvm/continuum), [Sequoia](https://github.com/lebrice/Sequoia), [CL-Gym](https://github.com/imirzadeh/CL-Gym)

### [*Frontiers in Continual Learning*](/lectures/frontiers-in-continual-learning.md)

{% file src="/files/ekh3AQQf7IL3GJ9MGHAo" %}
Slides PDF
{% endfile %}

[Embracing Change: Continual Learning in Deep Neural Networks](https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613\(20\)30219-9), by Hadsell et al., Trends in Cognitive Sciences, 2020. **continual meta-learning / meta-continual learning**

[Towards Continual Reinforcement Learning: A Review and Perspectives](https://arxiv.org/abs/2012.13490), by Khetarpal et al., arXiv, 2020.

[Continual Unsupervised Representation Learning](https://proceedings.neurips.cc/paper/2019/file/861578d797aeb0634f77aff3f488cca2-Paper.pdf), by D. Rao et al., NeurIPS, 2019.

**Distributed Continual Learning**\
[Ex-Model: Continual Learning from a Stream of Trained Models](http://arxiv.org/abs/2112.06511), by Carta et al., arXiv, 2021.

**Continual Sequence Learning**\
[Continual learning for recurrent neural networks: An empirical evaluation](https://www.sciencedirect.com/science/article/abs/pii/S0893608021002847), by Cossu et al., Neural Networks, vol. 143, pp. 607–627, 2021.\
[Continual Learning with Echo State Networks](http://arxiv.org/abs/2105.07674), by Cossu et al., ESANN, 2021.

### [*Invited Lectures*](/invited-lectures/invited-talks.md)

#### Software

[Avalanche: an End-to-End Library for Continual Learning](https://github.com/ContinualAI/avalanche), the PyTorch-based software library used for the coding sessions of this course.

[ContinualAI Colab notebooks](https://github.com/ContinualAI/colab), coding continual learning from scratch in notebooks.
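
For a first taste of Avalanche, a minimal train/eval loop over a class-incremental stream might look like the sketch below. It assumes the 0.1.x-era module layout used around the time of this course (import paths have moved in later releases), so treat it as orientation rather than reference code:

```python
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.strategies import Naive

benchmark = SplitMNIST(n_experiences=5)   # 5 experiences of 2 digits each
model = SimpleMLP(num_classes=10)
strategy = Naive(model,
                 torch.optim.SGD(model.parameters(), lr=0.001),
                 torch.nn.CrossEntropyLoss(),
                 train_mb_size=32, train_epochs=1)

for experience in benchmark.train_stream:  # one experience at a time
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)   # accuracy on every task's test set
```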

