Show me your Data!
In this lecture we will address the following points:
Possible continual learning scenarios
Existing and commonly used benchmarks
Avalanche Benchmarks
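As a taste of the coding session, here is a minimal sketch of how a class-incremental benchmark can be built with Avalanche (a sketch, assuming the avalanche-lib package is installed; SplitMNIST is one of the library's classic benchmarks):

```python
# Minimal sketch: a class-incremental stream with Avalanche's classic benchmarks.
from avalanche.benchmarks.classic import SplitMNIST

# Split MNIST into 5 experiences, each introducing 2 new classes.
benchmark = SplitMNIST(n_experiences=5)

for experience in benchmark.train_stream:
    print(f"Experience {experience.current_experience}: "
          f"classes {experience.classes_in_this_experience}")
```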
A University of Pisa, ContinualAI and AIDA Doctoral Academy Course
Learning continually from non-stationary data streams is a fascinating research topic and a fundamental aspect of Intelligence. At the University of Pisa, in conjunction with ContinualAI and the AIDA Doctoral Academy, we are proud to offer the first open-access course on Continual Learning. Anyone from around the world can join the class and learn about this fascinating topic, completely for free!
The course will follow a mixed in-person / virtual modality: recorded lectures will be uploaded to the ContinualAI YouTube channel (and remain available there for asynchronous viewing). However, if you want to be officially enrolled in the course, interact in class and get a certificate of attendance, you need to follow the procedure described below ⬇️.
Please note that the certificate of attendance is only released after a project-based exam to be agreed with the course instructor.
You can check out the official course structure (8 lectures + 2 invited talks), the class timetable and other relevant details about the course in the section below:
In order to officially enroll in this course, and hence be able to participate and interact in class, you need to register through the form below (if you have already completed the enrollment, you will be contacted soon with more instructions).
Enrollments for the 2021 course are now closed. You can still follow the recorded lectures on YouTube in the official ContinualAI channel!
Why Continual Learning?
In this lecture we will address the following points:
Course structure and modality
Intro to continual learning
Relationship with other learning paradigms
Brief history of Continual Learning and its milestones
Things you Should Know Before Enrolling
This course has been designed for Graduate and PhD Students who have never been exposed to Continual Learning. However, it assumes basic knowledge of Computer Science (Bachelor level) and Machine Learning. In particular, we assume basic knowledge of Deep Learning.
For students who do not have this background, we suggest following at least an introductory Machine Learning course, such as the one offered by Andrew Ng on Coursera.
We also assume basic hands-on knowledge about:
Anaconda, Python and PyCharm
Python Notebooks
Google Colaboratory
Git and GitHub
PyTorch
Make sure you learn the basics of these tools and languages as they will be used extensively across the course.
Main Continual Learning Strategies
This lecture will address the following points:
Strategies Categorization and History
Replay Strategies: Intro & Main Approaches
Avalanche Strategies & Plugins
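As a preview of the coding session, the sketch below shows how a replay strategy can be assembled in Avalanche by attaching a rehearsal plugin to a naive fine-tuning strategy (a minimal sketch; the module path avalanche.training.supervised is used by recent versions of the library, older releases expose the same strategies under avalanche.training.strategies):

```python
# Minimal sketch: naive fine-tuning + a replay plugin in Avalanche.
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.plugins import ReplayPlugin
from avalanche.training.supervised import Naive

benchmark = SplitMNIST(n_experiences=5)
model = SimpleMLP(num_classes=10)

strategy = Naive(
    model,
    torch.optim.SGD(model.parameters(), lr=0.01),
    torch.nn.CrossEntropyLoss(),
    train_mb_size=64,
    train_epochs=1,
    plugins=[ReplayPlugin(mem_size=500)],  # rehearsal buffer of 500 examples
)

for experience in benchmark.train_stream:
    strategy.train(experience)            # train on the new experience (+ replayed data)
    strategy.eval(benchmark.test_stream)  # evaluate on the whole test stream
```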
How to Evaluate your Continual Learning Agent
In this lecture we will address the following points:
Evaluation Protocols
Continual learning metrics
Avalanche Metrics & Loggers
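For a first idea of what Avalanche offers here, the sketch below wires standard continual learning metrics into an evaluation plugin (a sketch, assuming a recent avalanche-lib version; the plugin is then passed to a strategy through its evaluator argument):

```python
# Minimal sketch: per-experience accuracy and forgetting, logged to stdout.
from avalanche.evaluation.metrics import accuracy_metrics, forgetting_metrics
from avalanche.logging import InteractiveLogger
from avalanche.training.plugins import EvaluationPlugin

eval_plugin = EvaluationPlugin(
    accuracy_metrics(experience=True, stream=True),
    forgetting_metrics(experience=True, stream=True),
    loggers=[InteractiveLogger()],
)
# Pass `evaluator=eval_plugin` when building a strategy (e.g. Naive) so that
# metrics are computed automatically during `train()` and `eval()` calls.
```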
How to Setup your System Before Starting the Course
Before starting the course, please make sure you are comfortable with the following tools:
Microsoft Teams
Anaconda, Python and PyCharm (or any other IDE of your choice for Python)
Google Colaboratory
PyTorch
Please make sure to set up your personal computer to run such tools before enrolling.
The Main Details of the Course and Class Timetable
On this page you'll find all the relevant details of the "Continual Learning" course. Please check this page from time to time, as the class timetable may be subject to change.
In this course you'll learn the fundamentals of Continual Learning with Deep Architectures. At the end of the course you can expect to possess the basic theoretical and practical knowledge that will enable you to autonomously explore more advanced topics and frontiers in this exciting research area. You will also be able to apply such skills to your own research topics and real-world applications.
Modality: Mixed In-person / Remote
Where: University of Pisa, "Sala Polifunzionale" and "Sala Seminari Est" (check the timetable below), Largo B. Pontecorvo 3, 56127, Pisa, Italy. The link to the Microsoft Teams meeting will be sent via email to the registered participants.
Lectures plan: every Monday and Wednesday, 16:00-18:00 CET (there may be exceptions)
Period: 22/11/2021 - 20/12/2021
Language: English
The course will be based on 8 main lectures (2 hours each) and a final session composed of 2 invited talks. You can click on each lecture to check its outline and, once available, watch the recording.
Why Continual Learning? (22/11)
Location: "Sala Polifunzionale" & Remote
Show me your Data! (24/11)
Location: "Sala Seminari Est" & Remote
Main Continual Learning Strategies
This lecture will address the following points:
Regularization Strategies: Intro & Main Approaches
Architectural Strategies: Intro & Main Approaches
Avalanche Implementation
The Biggest Obstacle for Continual Learning Machines (29/11)
Location: Remote only
Evaluation & Metrics (01/12)
Location: Remote only
Methodologies [part 1] (06/12)
Location: "Sala Seminari Est" & Remote
Methodologies [part 2] (09/12)
Location: "Sala Seminari Est" & Remote
Methodologies [part 3] & Applications (13/12)
Location: "Sala Seminari Est" & Remote
Frontiers in Continual Learning (15/12)
Location: "Sala Seminari Est" & Remote
Avalanche Dev Day (16/12)
Location: "Sala Seminari Est" & Remote
Invited Lectures (20/12)
Location: "Sala Seminari Est" & Remote

Main Continual Learning Strategies, Applications and Tools
In this lecture we will address the following points:
Hybrid Strategies: Intro & Main Approaches
Avalanche Implementation
Applications of Continual Learning: Past, Present and Future
The Continual Learner Toolbox
Cutting-edge Research and Promising Directions
In this lecture we will address the following points:
Advanced Topics & Promising Future Directions
Distributed Continual Learning
Continual Sequence Learning
Conclusion
Two Guest Lectures on Advanced CL Topics
Title: "Addressing the Stability-Plasticity Dilemma in Rehearsal-Free Continual Learning"
Abstract: Catastrophic forgetting is one of the main challenges to enable deep neural networks to learn a set of tasks sequentially. However, deploying continual learning models in real-world applications requires considering the model efficiency. Continual learning agents should learn new tasks and preserve old knowledge with minimal computational and memory costs. This requires the agent to adapt to new tasks quickly and preserve old knowledge without revisiting its data. These two requirements are competing with each other. In this talk, we will discuss the challenges of solving the stability-plasticity dilemma in the rehearsal-free setting and how to address the requirements needed for building efficient agents. I will present how sparse neural networks are promising for this setting and show the results of the current sparse continual learning approaches. Finally, I will discuss the potential of detecting the relation between previous and current tasks in solving the stability-plasticity dilemma.
The speaker is a Ph.D. student at the Department of Mathematics and Computer Science, Eindhoven University of Technology, the Netherlands, working mainly on continual learning. Her current research interests are continual lifelong learning, sparse neural networks, few-shot learning, and reinforcement learning. She is a teaching assistant at Eindhoven University of Technology, contributing to several machine learning courses, and a member of the Inclusion & Diversity committee at ContinualAI. Previously, she was a research scientist at Siemens Digital Industries Software.
Title: "Using Generative Models for Continual Learning"
Abstract: Incrementally learning from non-stationary data, referred to as ‘continual learning’, is a key feature of natural intelligence, but an unsolved problem in deep learning. Particularly challenging for deep neural networks is ‘class-incremental learning’, whereby a network must learn to distinguish between classes that are not observed together. In this guest lecture, I will discuss two ways in which generative models can be used to address the class-incremental learning problem. First, I will cover ‘generative replay’. With this popular approach, two models are learned: a classifier network and an additional generative model. Then, when learning new classes, samples from the generative model are interleaved – or replayed – along with the training data of the new classes. Second, I will discuss a more recently proposed approach for class-incremental learning: ‘generative classification’. With this approach, rather than using a generative model indirectly for generating samples to train a discriminative classifier on (as is done with generative replay), the generative model is used directly to perform classification using Bayes’ rule.
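To make the generative replay recipe above concrete, here is a compact, illustrative training-loop sketch. It is not the speaker's exact setup: the tiny VAE, the optimizers, and the flattened inputs with pixel values in [0, 1] are all assumptions made for illustration.

```python
# Generative replay, schematically: when learning task t, interleave real data
# with samples from a generator trained on tasks 1..t-1, labeled by the old classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """A deliberately tiny VAE for flattened 28x28 inputs (illustrative only)."""
    def __init__(self, d=784, z=32):
        super().__init__()
        self.enc = nn.Linear(d, 2 * z)  # outputs mean and log-variance
        self.dec = nn.Linear(z, d)
        self.z = z

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        zs = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return torch.sigmoid(self.dec(zs)), mu, logvar

    def sample(self, n):
        return torch.sigmoid(self.dec(torch.randn(n, self.z)))

def train_on_task(classifier, generator, loader, prev_clf, prev_gen, opt_c, opt_g):
    # classifier: any nn.Module mapping 784 -> n_classes. After finishing a task,
    # snapshot frozen copies of classifier/generator to use as prev_clf / prev_gen.
    for x, y in loader:
        x = x.view(x.size(0), -1)
        if prev_gen is not None:
            with torch.no_grad():  # replay: pseudo-inputs + pseudo-labels
                x_re = prev_gen.sample(x.size(0))
                y_re = prev_clf(x_re).argmax(dim=1)
            x, y = torch.cat([x, x_re]), torch.cat([y, y_re])
        # Classifier step on real + replayed data.
        opt_c.zero_grad()
        F.cross_entropy(classifier(x), y).backward()
        opt_c.step()
        # Generator step (standard VAE loss), also on real + replayed inputs.
        opt_g.zero_grad()
        recon, mu, logvar = generator(x)
        kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        rec = F.binary_cross_entropy(recon, x, reduction="none").sum(dim=1).mean()
        (rec + kld).backward()
        opt_g.step()
```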
Gido van de Ven is a postdoctoral researcher in the Center for Neuroscience and Artificial Intelligence at the Baylor College of Medicine (Houston, USA), and a visiting researcher in the Computational and Biological Learning Lab at the University of Cambridge (UK). In his research, he aims to use insights and intuitions from neuroscience to make the behavior of deep neural networks more human-like. In particular, Gido is interested in the problem of continual learning, and generative models are his principal tool to address this problem. Previously, for his doctoral research, he used optogenetics and electrophysiological recordings in mice to study the role of replay in memory consolidation in the brain.
Your Teaching Assistants
This course would not have been possible without the help of two great teaching assistants! They helped us significantly improve the quality of the material and offered their technical and didactic support throughout the course.
The first teaching assistant is a PhD Student in Data Science.
Antonio Carta is a Post-Doc in the Department of Computer Science at the University of Pisa, under the supervision of Davide Bacciu. He is also a member of the Computational Intelligence and Machine Learning group (CIML) and the Pervasive AI Lab (PAI) at the University of Pisa, and a member of ContinualAI. His research focuses on continual learning methods applied to deep learning models and recurrent neural networks.
An event for discussing Avalanche developments
The "Avalanche Dev Day" is a annual event organized by ContinualAI to discuss relevant ideas related to the development of Avalanche: the end-to-end reference library for Continual Learning.
When & Where
The event will take place in a mixed in-person / virtual modality from 16:00 to 18:00 CET. The in-person event will be held in "Sala Seminari Est", Largo B. Pontecorvo 3, 56127, Pisa, Italy. You'll be able to follow the event online through the MS Teams link sent to registered participants.
Program
Opening
Avalanche Beta Presentation
Avalanche Panel & Q&As with the main maintainers
Next Steps in Avalanche
Additional material you can freely explore!
Continual Learning for Robotics: Definition, Framework, Learning Strategies, Opportunities and Challenges by Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat and Natalia Díaz-Rodríguez. Information Fusion, 52--68, 2020.
Continual Lifelong Learning with Neural Networks: A Review by German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan and Stefan Wermter. Neural Networks, 54--71, 2019.
A Continual Learning Survey: Defying Forgetting in Classification Tasks by Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Gregory Slabaugh and Tinne Tuytelaars. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
Never-Ending Learning by Tom Mitchell, William W. Cohen, E. Hruschka, Partha P. Talukdar, B. Yang, Justin Betteridge, Andrew Carlson, B. Dalvi, Matt Gardner, Bryan Kisiel, J. Krishnamurthy, Ni Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves and J. Welling. Communications of the ACM, 2302--2310, 2015.
The ART of Adaptive Pattern Recognition by a Self-Organizing Neural Network by Gail A. Carpenter and Stephen Grossberg. Computer, 77--88, 1988.
CHILD: A First Step Towards Continual Learning by Mark B. Ring. Machine Learning, 77--104, 1997.
The ContinualAI website, where you can explore the ContinualAI projects and people
The ContinualAI YouTube channel, with many videos and seminars on continual learning
An advanced, seminar-style course on continual learning held at MILA, 2021.
The Biggest Obstacle for Continual Learning Machines
In this lecture we will address the following points:
What is catastrophic forgetting?
Understanding forgetting with one neuron
A deep learning example: Permuted and Split MNIST
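A minimal sketch of such an experiment (illustrative hyperparameters and a single training pass per task; the network and learning rate are assumptions) could look as follows:

```python
# Catastrophic forgetting on Permuted MNIST: train on MNIST, then on a
# pixel-permuted version, and watch task-1 accuracy collapse.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_set = datasets.MNIST(".", train=True, download=True, transform=transforms.ToTensor())
test_set = datasets.MNIST(".", train=False, download=True, transform=transforms.ToTensor())
perm = torch.randperm(784)  # a fixed random pixel permutation defines "task 2"

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def epoch(loader, permute, update):
    """One pass over the data; trains if update=True, otherwise just measures accuracy."""
    correct = 0
    for x, y in loader:
        x = x.view(x.size(0), -1)
        if permute:
            x = x[:, perm]
        out = model(x)
        if update:
            opt.zero_grad()
            F.cross_entropy(out, y).backward()
            opt.step()
        correct += (out.argmax(1) == y).sum().item()
    return correct / len(loader.dataset)

tr = DataLoader(train_set, batch_size=128, shuffle=True)
te = DataLoader(test_set, batch_size=256)

epoch(tr, permute=False, update=True)            # task 1: plain MNIST
before = epoch(te, permute=False, update=False)  # task-1 test accuracy
epoch(tr, permute=True, update=True)             # task 2: permuted MNIST
after = epoch(te, permute=False, update=False)   # task-1 accuracy again
print(f"task-1 accuracy: {before:.3f} -> {after:.3f}")  # expect a large drop
```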
The ContinualAI Wiki, covering research venues, industry players, software, tutorials and more
Continual Learning Papers on GitHub, where you can find an organized list of continual learning papers
Continual Learning Papers with a dynamic and easy-to-use interface to navigate papers, aligned with the GitHub resource above
Brainstorming session: how to solve forgetting?
Avalanche: an end-to-end library for continual learning research
Meet the main instructor of the course!
This short course on Continual Learning has been designed and implemented by Vincenzo Lomonaco with the help of the teaching assistants and the feedback from the ContinualAI community.
Vincenzo Lomonaco is a 29-year-old Assistant Professor at the University of Pisa, Italy, and Co-Founding President of ContinualAI, a non-profit research organization and the largest open community on Continual Learning for AI. Currently, he is also a Co-founder and Board Member of AI for People, Director of the ContinualAI Lab and a proud member of the European Lab for Learning and Intelligent Systems (ELLIS).
In Pisa, he works within the Pervasive AI Lab and the Computational Intelligence and Machine Learning Group, which is also part of the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE).
Previously, he was a Post-Doc at the University of Bologna, where he also obtained his PhD in early 2019 with a dissertation on continual learning (a topic he has been working on for more than 7 years now), which was recognized as one of the top-5 AI dissertations of 2019.
For more than 5 years he worked as a teaching assistant for courses in the Department of Computer Science and Engineering (DISI) at UniBo. He has also been a Visiting Research Scientist at several research labs and companies between 2017 and 2020. Even before, he was a Machine Learning Software Engineer and a Master's Student at UniBo.
His main research interest and passion is Continual Learning in all its facets. In particular, he loves to study Continual Learning under three main lights: Neuroscience, Deep Learning and Practical Applications, all within an AI Sustainability framework.

All the course material in one page!
Classic readings on catastrophic forgetting
Catastrophic Forgetting, Rehearsal and Pseudorehearsal by Anthony Robins. Connection Science, 123--146, 1995. [catastrophic forgetting; catastrophic interference; stability; plasticity; rehearsal]
Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks by Robert French. In Proceedings of the 13th Annual Cognitive Science Society Conference, 173--178, 1991. [sparsity]
Check out additional material for popular reviews and surveys on continual learning.
Lifelong Machine Learning, Second Edition, by Zhiyuan Chen and Bing Liu. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2018.
Classic references on Catastrophic Forgetting provided above.
Does Continual Learning = Catastrophic Forgetting?, by A. Thai, S. Stojanov, I. Rehg, and J. M. Rehg, arXiv, 2021.
An Empirical Study of Example Forgetting during Deep Neural Network Learning, by M. Toneva, A. Sordoni, R. T. des Combes, A. Trischler, Y. Bengio, and G. J. Gordon, ICLR, 2019.
Compete to Compute, by R. K. Srivastava, J. Masci, S. Kazerounian, F. Gomez, and J. Schmidhuber, NIPS, 2013 (Permuted MNIST task).
CL scenarios
Three Scenarios for Continual Learning, by G. M. van de Ven and A. S. Tolias, Continual Learning workshop at NeurIPS, 2018. Task/domain/class-incremental learning.
Continuous Learning in Single-Incremental-Task Scenarios, by D. Maltoni and V. Lomonaco, Neural Networks, vol. 116, pp. 56–73, 2019. New Classes (NC), New Instances (NI), New Instances and Classes (NIC) + Single Incremental Task (SIT) / Multi Task (MT) / Multi Incremental Task (MIT).
Task-Free Continual Learning, by R. Aljundi, K. Kelchtermans, and T. Tuytelaars, CVPR, 2019.
Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams, by M. De Lange and T. Tuytelaars, ICCV, 2021. Data-incremental learning and comparisons with other CL scenarios.
Survey presenting CL scenarios
Continual Learning for Robotics: Definition, Framework, Learning Strategies, Opportunities and Challenges, by Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat and Natalia Díaz-Rodríguez. Information Fusion, 52--68, 2020. Section 3, in particular.
CL benchmarks
CORe50: A New Dataset and Benchmark for Continuous Object Recognition, by V. Lomonaco and D. Maltoni, Proceedings of the 1st Annual Conference on Robot Learning, vol. 78, pp. 17–26, 2017.
OpenLORIS-Object: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning, by Q. She et al., ICRA, 2020.
Incremental Object Learning from Contiguous Views, by S. Stojanov et al., CVPR, 2019. CRIB benchmark.
Stream-51: Streaming Classification and Novelty Detection from Videos, by R. Roady, T. L. Hayes, H. Vaidya, and C. Kanan, CVPR Workshops, 2020.
Efficient Lifelong Learning with A-GEM, by A. Chaudhry, M. Ranzato, M. Rohrbach, and M. Elhoseiny, ICLR, 2019. Evaluation protocol with "split by experiences".
Gradient Episodic Memory for Continual Learning, by D. Lopez-Paz and M. Ranzato, NIPS, 2017. Popular formalization of ACC, BWT and FWT (see the formulas below).
CLEVA-Compass: A Continual Learning Evaluation Assessment Compass to Promote Research Transparency and Comparability, by M. Mundt, S. Lang, Q. Delfosse, and K. Kersting, arXiv, 2021.
Don't Forget, There Is More Than Forgetting: New Metrics for Continual Learning, by N. Díaz-Rodríguez, V. Lomonaco, D. Filliat, and D. Maltoni, arXiv, 2018. Definition of additional metrics.
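For quick reference, the three metrics cited above can be written as in the GEM paper, where $R_{i,j}$ is the test accuracy on task $j$ after training on the first $i$ tasks and $\bar{b}_j$ is the accuracy on task $j$ of a randomly initialized model:

```latex
\mathrm{ACC} = \frac{1}{T}\sum_{j=1}^{T} R_{T,j}, \qquad
\mathrm{BWT} = \frac{1}{T-1}\sum_{j=1}^{T-1}\left(R_{T,j} - R_{j,j}\right), \qquad
\mathrm{FWT} = \frac{1}{T-1}\sum_{j=2}^{T}\left(R_{j-1,j} - \bar{b}_j\right)
```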
Replay
GDumb: A Simple Approach that Questions Our Progress in Continual Learning, by A. Prabhu, P. H. S. Torr, and P. K. Dokania, ECCV, 2020.
, by R. Aljundi et al., NeurIPS, 2019.
Latent replay
Latent Replay for Real-Time Continual Learning, by Lorenzo Pellegrini, Gabriele Graffieti, Vincenzo Lomonaco and Davide Maltoni, IROS, 2020.
Generative replay
Continual Learning with Deep Generative Replay, by H. Shin, J. K. Lee, J. Kim, and J. Kim, NeurIPS, 2017.
Brain-Inspired Replay for Continual Learning with Artificial Neural Networks, by G. M. van de Ven, H. T. Siegelmann, and A. S. Tolias, Nature Communications, 2020.
L1, L2, Dropout
An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks, by Goodfellow et al., 2015.
Understanding the Role of Training Regimes in Continual Learning, by Mirzadeh et al., NeurIPS, 2020.
Regularization strategies
Learning without Forgetting, by Li et al., TPAMI, 2017.
Overcoming Catastrophic Forgetting in Neural Networks, by Kirkpatrick et al., PNAS, 2017.
Continual Learning Through Synaptic Intelligence, by Zenke et al., ICML, 2017.
Continual Learning with Hypernetworks, by von Oswald et al., ICLR, 2020.
Architectural strategies
Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches, by Lomonaco et al., CLVision Workshop at CVPR, 2020. CWR*.
Progressive Neural Networks, by Rusu et al., arXiv, 2016.
PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning, by Mallya et al., CVPR, 2018.
Overcoming Catastrophic Forgetting with Hard Attention to the Task, by Serra et al., ICML, 2018.
Supermasks in Superposition, by Wortsman et al., NeurIPS, 2020.
Hybrid strategies
Gradient Episodic Memory for Continual Learning, by Lopez-Paz et al., NeurIPS, 2017. GEM.
iCaRL: Incremental Classifier and Representation Learning, by Rebuffi et al., CVPR, 2017.
Progress & Compress: A Scalable Framework for Continual Learning, by Schwarz et al., ICML, 2018.
Latent Replay for Real-Time Continual Learning, by L. Pellegrini et al., IROS, 2020. AR1*.
Applications
Continual Learning at the Edge: Real-Time Training on Smartphone Devices, by L. Pellegrini et al., ESANN, 2021.
Continual Learning in Practice, by T. Diethe et al., Continual Learning Workshop at NeurIPS, 2018.
Startups / Companies
Tools / Libraries
Embracing Change: Continual Learning in Deep Neural Networks, by Hadsell et al., Trends in Cognitive Sciences, 2020. Continual meta-learning / meta-continual learning.
Towards Continual Reinforcement Learning: A Review and Perspectives, by Khetarpal et al., arXiv, 2020.
Continual Unsupervised Representation Learning, by D. Rao et al., NeurIPS, 2019.
Distributed Continual Learning
Ex-Model: Continual Learning from a Stream of Trained Models, by Carta et al., arXiv, 2021.
Continual Sequence Learning
Continual Learning for Recurrent Neural Networks: An Empirical Evaluation, by Cossu et al., Neural Networks, vol. 143, pp. 607–627, 2021.
Continual Learning with Echo State Networks, by Cossu et al., ESANN, 2021.
Avalanche, the software library based on PyTorch used for the coding sessions of this course.
Notebooks for coding continual learning from scratch.