
Continual Learning Course


Scenarios & Benchmarks

Show me your Data!

Slides: 03_benchmarks.pdf (PDF, 2MB)

In this lecture we will address the following points:

  • Possible continual learning scenarios

  • Existing and commonly used benchmarks

  • Avalanche Benchmarks (see the sketch below)
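To make the notion of a benchmark concrete, here is a minimal sketch of how a classic benchmark can be instantiated in Avalanche, the library used in the coding sessions. Module paths may differ slightly across Avalanche versions; treat this as an illustration rather than the official course code.

```python
# Build the classic Split MNIST benchmark: 5 experiences, each introducing
# 2 new MNIST classes (a class-incremental scenario).
from avalanche.benchmarks.classic import SplitMNIST

benchmark = SplitMNIST(n_experiences=5, seed=1)

# A benchmark exposes streams of "experiences"; each experience wraps a
# regular PyTorch dataset plus metadata about the classes it introduces.
for experience in benchmark.train_stream:
    print(f"Experience {experience.current_experience}: "
          f"classes {experience.classes_in_this_experience}")
    train_dataset = experience.dataset
```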

Continual Learning: On Machines that can Learn Continually

A University of Pisa, ContinualAI and AIDA Doctoral Academy Course

Learning continually from non-stationary data streams is a fascinating research topic and a fundamental aspect of Intelligence. At the University of Pisa, in conjunction with ContinualAI and the AIDA Doctoral Academy, we are proud to offer the first open-access course on Continual Learning. Anyone from around the world can join the class and learn about this fascinating topic, completely for free!

The course is tailored for Graduate / PhD students as an introduction to Continual Learning, especially focusing on the recent Deep Learning advances.

The course will follow a mixed in-person / virtual modality, and recorded lectures will be uploaded to the ContinualAI YouTube account (where they remain available for async viewing). However, if you want to actually be enrolled in the course, interact in class and get a certificate of attendance, you need to follow the procedure described below ⬇️.

Please note that the certificate of attendance is only released after a project-based exam to be agreed with the course instructor.

You can check out the official course structure (8 lectures + 2 invited talks), the class timetable and other relevant details about the course in the 📑Course Details section below.

Officially Enroll in the Course

In order to officially enroll in this course, and hence be able to participate and interact in class, you need to register through the form below (you'll be contacted soon with more instructions if you have already completed the enrollment).

Enrollments for the 2021 course are now closed. You can still follow the recorded lectures on YouTube in the official ContinualAI channel!


Introduction & Motivation

Why Continual Learning?

Slides: 01_intro_and_motivation.pdf (PDF, 4MB)

In this lecture we will address the following points:

  • Course structure and modality

  • Intro to continual learning

  • Relationship with other learning paradigms

  • Brief history of continual learning and its milestones

Prerequisites

Things you Should Know Before Enrolling

This course has been designed for Graduate and PhD students who have never been exposed to Continual Learning. However, it assumes basic knowledge of Computer Science (Bachelor level) and Machine Learning. In particular, we assume basic knowledge of Deep Learning.

For students who do not have this background, we suggest following at least an introductory Machine Learning course such as the one offered by Andrew Ng on Coursera.

We also assume basic hands-on knowledge about:

  • Anaconda, Python and PyCharm

  • Python Notebooks

  • Google Colaboratory

  • Git and GitHub

  • PyTorch

Make sure you learn the basics of these tools and languages as they will be used extensively across the course.

Methodologies [Part 1]

Main Continual Learning Strategies

Slides: 05_methodologies_part1.pdf (PDF, 2MB)

This lecture will address the following points:

  • Strategies Categorization and History

  • Replay Strategies: Intro & Main Approaches

  • Avalanche Strategies & Plugins (see the sketch below)
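As a taste of the replay approach covered in this lecture, here is a minimal sketch of how a replay buffer can be plugged into a naive fine-tuning strategy in Avalanche. Class names follow the Avalanche releases available at the time of the course (newer versions expose strategies under `avalanche.training.supervised`); treat it as an illustration under those assumptions, not the lecture's exact code.

```python
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.plugins import ReplayPlugin
from avalanche.training.strategies import Naive

benchmark = SplitMNIST(n_experiences=5, seed=1)
model = SimpleMLP(num_classes=10)

# Naive fine-tuning would forget past experiences; the ReplayPlugin keeps a
# buffer of 500 past examples and mixes them into each training mini-batch.
strategy = Naive(
    model,
    torch.optim.SGD(model.parameters(), lr=0.01),
    torch.nn.CrossEntropyLoss(),
    train_mb_size=32,
    train_epochs=1,
    plugins=[ReplayPlugin(mem_size=500)],
)

for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```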

Evaluation & Metrics

How to Evaluate your Continual Learning Agent

Slides: 04_evaluation.pdf (PDF, 2MB)

In this lecture we will address the following points:

  • Evaluation Protocols

  • Continual learning metrics (formalized below)

  • Avalanche Metrics & Loggers
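As a quick reference for the metrics discussed in class, the formalization popularized by Lopez-Paz & Ranzato (Gradient Episodic Memory, NIPS 2017, listed in the course materials) uses a matrix R of size T×T, where R_{i,j} is the test accuracy on task j after training on the first i tasks, and b̄_j is the accuracy of a randomly initialized model on task j:

```latex
\mathrm{ACC} = \frac{1}{T} \sum_{j=1}^{T} R_{T,j} \qquad
\mathrm{BWT} = \frac{1}{T-1} \sum_{j=1}^{T-1} \left( R_{T,j} - R_{j,j} \right) \qquad
\mathrm{FWT} = \frac{1}{T-1} \sum_{j=2}^{T} \left( R_{j-1,j} - \bar{b}_j \right)
```

A negative BWT signals catastrophic forgetting: performance on old tasks degrades after learning new ones.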

Tools & Setup

How to Setup your System Before Starting the Course

Before starting the course, please make sure you gain some confidence with the following tools:

  • Microsoft Teams

  • Anaconda, Python and PyCharm (or any other IDE of your choice for Python)

  • Google Colaboratory

  • PyTorch

Please make sure to set up your personal computer to run such tools before enrolling.


Course Details

The Main Details of the Course and Class Timetable

On this page you'll find all the relevant details of the "Continual Learning" course. Please check this page from time to time, as the class timetable may be subject to change.

Course Objective

In this course you'll learn the fundamentals of Continual Learning with Deep Architectures. At the end of this course you can expect to possess the basic theoretical and practical knowledge that will enable you to autonomously explore more advanced topics and frontiers in this exciting research area. You will also be able to apply such skills to your own research topics and real-world applications.

Course Details

Modality: Mixed In-person / Remote

Where: University of Pisa, Department of Computer Science, "Sala Polifunzionale" and "Sala Seminari Est" (check the lecture details below), Largo B. Pontecorvo 3, 56127, Pisa, Italy. The link to the Microsoft Teams class will be sent via email to the registered participants.

Lectures plan: every Monday and Wednesday, 16:00-18:00 CET (there may be exceptions)

Period: 22/11/2021 - 20/12/2021

Language: English

Lecture Details

The course will be based on 8 main lectures (2 hours each) and a final session composed of 2 invited talks. You can click on each lecture to check the outline and watch the recorded lecture once available.

  1. Introduction & Motivation (22/11)

    Location: "Sala Polifunzionale" & Remote

  2. Understanding Catastrophic Forgetting (24/11)

    Location: "Sala Seminari Est" & Remote

  3. Scenarios & Benchmarks (29/11)

    Location: Remote only

  4. Evaluation & Metrics (1/12)

    Location: Remote only

  5. Methodologies [part 1] (6/12)

    Location: "Sala Seminari Est" & Remote

  6. Methodologies [part 2] (9/12)

    Location: "Sala Seminari Est" & Remote

  7. Methodologies [part 3] & Applications (13/12)

    Location: "Sala Seminari Est" & Remote

  8. Frontiers in Continual Learning (15/12)

    Location: "Sala Seminari Est" & Remote

  • Avalanche Dev Day (16/12)

    Location: "Sala Seminari Est" & Remote

  • Invited Lectures (20/12)

    Location: "Sala Seminari Est" & Remote

Methodologies [Part 2]

Main Continual Learning Strategies

Slides: 06_methodologies_part2.pdf (PDF, 2MB)

This lecture will address the following points:

  • Regularization Strategies: Intro & Main Approaches

  • Architectural Strategies: Intro & Main Approaches

  • Avalanche Implementation (see the sketch below)
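As a companion to the regularization part of this lecture, here is a minimal sketch of Elastic Weight Consolidation (EWC, Kirkpatrick et al., PNAS 2017) run through Avalanche. Class names follow the Avalanche releases of the course period, and the hyperparameters are illustrative only.

```python
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.strategies import EWC

benchmark = SplitMNIST(n_experiences=5, seed=1)
model = SimpleMLP(num_classes=10)

# ewc_lambda trades off plasticity (fitting the new task) against stability
# (a quadratic penalty on moving parameters important for old tasks).
strategy = EWC(
    model,
    torch.optim.SGD(model.parameters(), lr=0.01),
    torch.nn.CrossEntropyLoss(),
    ewc_lambda=0.4,
    train_mb_size=32,
    train_epochs=1,
)

for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```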

Methodologies [Part 3], Applications & Tools

Main Continual Learning Strategies, Applications and Tools

Slides: 07_methodologies_part3.pdf (PDF, 3MB)

In this lecture we will address the following points:

  • Hybrid Strategies: Intro & Main Approaches (see the sketch after this list)

  • Avalanche Implementation

  • Applications of Continual Learning: Past, Present and Future

  • The Continual Learner Toolbox
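To illustrate the spirit of hybrid strategies, here is a minimal sketch that composes a replay buffer with an EWC penalty through Avalanche's plugin system. This combination is an illustrative example of plugin composition, not a specific published hybrid method; names follow the Avalanche releases of the course period.

```python
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.plugins import EWCPlugin, ReplayPlugin
from avalanche.training.strategies import Naive

benchmark = SplitMNIST(n_experiences=5, seed=1)
model = SimpleMLP(num_classes=10)

# Each plugin hooks into the training loop independently, so a rehearsal
# component and a regularization component can be combined freely.
strategy = Naive(
    model,
    torch.optim.SGD(model.parameters(), lr=0.01),
    torch.nn.CrossEntropyLoss(),
    train_mb_size=32,
    train_epochs=1,
    plugins=[ReplayPlugin(mem_size=500), EWCPlugin(ewc_lambda=0.4)],
)

for experience in benchmark.train_stream:
    strategy.train(experience)
```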

Frontiers in Continual Learning

Cutting-edge Research and Promising Directions

Slides: 08_frontiers.pdf (PDF, 4MB)

In this lecture we will address the following points:

  • Advanced Topics & Promising Future Directions

  • Distributed Continual Learning

  • Continual Sequence Learning

  • Conclusion

Invited Talks

Two Guest Lectures on Advanced CL Topics

Ghada Sokar

Title: "Addressing the Stability-Plasticity Dilemma in Rehearsal-Free Continual Learning"

Slides: ghada_guest_lecture_slides.pdf (PDF, 2MB)

Abstract: Catastrophic forgetting is one of the main challenges to enable deep neural networks to learn a set of tasks sequentially. However, deploying continual learning models in real-world applications requires considering the model efficiency. Continual learning agents should learn new tasks and preserve old knowledge with minimal computational and memory costs. This requires the agent to adapt to new tasks quickly and preserve old knowledge without revisiting its data. These two requirements are competing with each other. In this talk, we will discuss the challenges of solving the stability-plasticity dilemma in the rehearsal-free setting and how to address the requirements needed for building efficient agents. I will present how sparse neural networks are promising for this setting and show the results of the current sparse continual learning approaches. Finally, I will discuss the potential of detecting the relation between previous and current tasks in solving the stability-plasticity dilemma.

Ghada Sokar is a Ph.D. student at the Department of Mathematics and Computer Science, Eindhoven University of Technology, the Netherlands. She is mainly working on continual learning. Her current research interests are continual lifelong learning, sparse neural networks, few-shot learning, and reinforcement learning. She is a teaching assistant at Eindhoven University of Technology. She contributes to different machine learning courses. She is also a member of the Inclusion & Diversity committee at ContinualAI. Previously, she was a research scientist at Siemens Digital Industries Software.

Gido van de Ven

Title: "Using Generative Models for Continual Learning"

Slides: gido_guest_lecture_slides.pdf (PDF, 5MB)

Abstract: Incrementally learning from non-stationary data, referred to as ‘continual learning’, is a key feature of natural intelligence, but an unsolved problem in deep learning. Particularly challenging for deep neural networks is ‘class-incremental learning’, whereby a network must learn to distinguish between classes that are not observed together. In this guest lecture, I will discuss two ways in which generative models can be used to address the class-incremental learning problem. First, I will cover ‘generative replay’. With this popular approach, two models are learned: a classifier network and an additional generative model. Then, when learning new classes, samples from the generative model are interleaved – or replayed – along with the training data of the new classes. Second, I will discuss a more recently proposed approach for class-incremental learning: ‘generative classification’. With this approach, rather than using a generative model indirectly for generating samples to train a discriminative classifier on (as is done with generative replay), the generative model is used directly to perform classification using Bayes’ rule.

Gido van de Ven is a postdoctoral researcher in the Center for Neuroscience and Artificial Intelligence at the Baylor College of Medicine (Houston, USA), and a visiting researcher in the Computational and Biological Learning Lab at the University of Cambridge (UK). In his research, he aims to use insights and intuitions from neuroscience to make the behavior of deep neural networks more human-like. In particular, Gido is interested in the problem of continual learning, and generative models are his principal tool to address this problem. Previously, for his doctoral research, he used optogenetics and electrophysiological recordings in mice to study the role of replay in memory consolidation in the brain.

Teaching Assistants

Your Teaching Assistants

This course would not have been possible without the help of two great teaching assistants: Andrea Cossu and Antonio Carta! They helped us significantly improve the quality of the material and offered their technical/didactic support along the entirety of the course!

Please refer to them and the course instructor for any issue you may have.

Andrea Cossu

Andrea Cossu is a PhD Student in Data Science, under the supervision of Davide Bacciu, Vincenzo Lomonaco and Anna Monreale. His research focuses on Continual Learning, with applications to Recurrent Neural Network models and sequential data processing. He is a member of the Pervasive AI Lab and of the Computational Intelligence and Machine Learning (CIML) group at the University of Pisa. He is a Board Member and Treasurer of ContinualAI. He is also the Principal Maintainer of the ContinualAI wiki and one of the main maintainers of Avalanche, an End-to-End library for Continual Learning based on PyTorch.

Antonio Carta

Antonio Carta is a Post-Doc in the Department of Computer Science at the University of Pisa, under the supervision of Davide Bacciu. He is also a member of the Computational Intelligence and Machine Learning group (CIML) and the Pervasive AI Lab (PAI) at the University of Pisa, and a member of ContinualAI. His research is focused on continual learning methods applied to deep learning models and recurrent neural networks.


Avalanche Dev Day

An event for discussing Avalanche developments

The "Avalanche Dev Day" is an annual event organized by ContinualAI to discuss relevant ideas related to the development of Avalanche: the end-to-end reference library for Continual Learning.

When & Where

The event will take place on 16/12 in a mixed in-person / virtual modality, from 16:00 to 18:00 CET. The in-person event will take place in "Sala Seminari Est", Largo B. Pontecorvo 3, 56127, Pisa, Italy. You'll be able to follow the event online using the MS Teams link.

Program

  • Opening

  • Avalanche Beta Presentation

  • Avalanche Panel & Q&As with the main maintainers

  • Next Steps in Avalanche

Additional Material

Additional material you can freely explore!

Popular Continual Learning Reviews and Surveys

  • Continual Learning for Robotics: Definition, Framework, Learning Strategies, Opportunities and Challenges, by Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat and Natalia Díaz-Rodríguez. Information Fusion, 52--68, 2020.

  • Continual Lifelong Learning with Neural Networks: A Review, by German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan and Stefan Wermter. Neural Networks, 54--71, 2019.

  • A Continual Learning Survey: Defying Forgetting in Classification Tasks, by Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Gregory Slabaugh and Tinne Tuytelaars. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.

Additional papers

  • Never-Ending Learning, by Tom Mitchell, William W. Cohen, E. Hruschka, Partha P. Talukdar, B. Yang, Justin Betteridge, Andrew Carlson, B. Dalvi, Matt Gardner, Bryan Kisiel, J. Krishnamurthy, Ni Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves and J. Welling. Communications of the ACM, 2302--2310, 2015.

  • The ART of Adaptive Pattern Recognition by a Self-Organizing Neural Network, by Gail A. Carpenter and Stephen Grossberg. Computer, 77--88, 1988.

  • CHILD: A First Step Towards Continual Learning, by Mark B. Ring. Machine Learning, 77--104, 1997.

Useful Links about Resources on Continual Learning

  • ContinualAI association main website: explore the ContinualAI projects and people

  • YouTube account of ContinualAI: with many videos and seminars on continual learning

  • ContinualAI wiki: general resources on continual learning, including research venues, industry players, software, tutorials and more

  • Continual Learning Papers on GitHub: an organized list of continual learning papers

  • Continual Learning Papers: a dynamic and easy-to-use interface to navigate the papers, aligned with the GitHub resource above

  • Avalanche: an end-to-end library for continual learning research

Related Courses

  • "Continual Learning: Towards Broad AI": advanced, seminar-style course at MILA, 2021.

Understanding Catastrophic Forgetting

The Biggest Obstacle for Continual Learning Machines

Slides: 02_forgetting.pdf (PDF, 2MB)

In this lecture we will address the following points:

  • What is catastrophic forgetting?

  • Understanding forgetting with one neuron

  • A deep learning example: Permuted and Split MNIST (see the sketch below)

  • Brainstorming session: how to solve forgetting?
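To see the Permuted MNIST example from the outline in action, here is a self-contained sketch in plain PyTorch (the two-task setup and hyperparameters are illustrative, not the lecture's exact code): train a small MLP on MNIST, then on a pixel-permuted version of it, and measure how accuracy on the first task collapses.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

torch.manual_seed(0)
to_tensor = transforms.ToTensor()
train_set = datasets.MNIST(".", train=True, download=True, transform=to_tensor)
test_set = datasets.MNIST(".", train=False, download=True, transform=to_tensor)

perm_a = torch.arange(28 * 28)    # task A: original pixel order
perm_b = torch.randperm(28 * 28)  # task B: fixed random pixel permutation

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def run_epoch(dataset, perm, train=True):
    """One pass over `dataset` with the task's pixel permutation applied."""
    loader = DataLoader(dataset, batch_size=128, shuffle=train)
    correct = 0
    for x, y in loader:
        x = x.view(x.size(0), -1)[:, perm]  # flatten, then permute pixels
        out = model(x)
        if train:
            opt.zero_grad()
            loss_fn(out, y).backward()
            opt.step()
        correct += (out.argmax(1) == y).sum().item()
    return correct / len(dataset)

run_epoch(train_set, perm_a)  # train on task A
print("task A accuracy:", run_epoch(test_set, perm_a, train=False))
run_epoch(train_set, perm_b)  # then train on task B only
# Accuracy on task A typically drops sharply: catastrophic forgetting.
print("task A accuracy after B:", run_epoch(test_set, perm_a, train=False))
```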

Your Instructor

Meet the main instructor of the course!

This short course on Continual Learning has been designed and implemented by Vincenzo Lomonaco with the help of the teaching assistants and the feedback from the ContinualAI community.

Vincenzo Lomonaco

Vincenzo Lomonaco is a 29-year-old Assistant Professor at the University of Pisa, Italy, and Co-Founding President of ContinualAI, a non-profit research organization and the largest open community on Continual Learning for AI. Currently, he is also a Co-founder and Board Member of AI for People, Director of the ContinualAI Lab and a proud member of the European Lab for Learning and Intelligent Systems (ELLIS).

In Pisa, he works within the Pervasive AI Lab and the Computational Intelligence and Machine Learning group, which is also part of the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE).

Previously, he was a Post-Doc at the University of Bologna (with Davide Maltoni), where he also obtained his PhD in early 2019 with a dissertation titled "Continual Learning with Deep Architectures" (on a topic he's been working on for more than 7 years now), which was recognized as one of the top-5 AI dissertations of 2019 by the Italian Association for Artificial Intelligence.

For more than 5 years he worked as a teaching assistant for the Machine Learning and Computer Architectures courses in the Department of Computer Science and Engineering (DISI) at UniBo. In the past, Vincenzo has been a Visiting Research Scientist at AI Labs in 2020, at Numenta (with Jeff Hawkins and Subutai Ahmad) in 2019, at ENSTA ParisTech (with David Filliat) in 2018 and at Purdue University (with Eugenio Culurciello) in 2017. Even before, he was a Machine Learning Software Engineer at iDL in-line Devices and a Master's student at UniBo.

His main research interest and passion is Continual Learning in all its facets. In particular, he loves to study Continual Learning under three main lights: Neuroscience, Deep Learning and Practical Applications, all within an AI Sustainability developmental framework.

Course Materials

All the course material in one page!

Introduction & Motivation

Slides: 01_intro_and_motivation.pdf (PDF, 4MB)

  • Lecture #1: Introduction & Motivation - Slides: https://docs.google.com/presentation/d/1majqWeuWRwgT_R1PwrT1HjTCFIvYFmpiVQWpHYn_Jdc/edit?usp=sharing

  • Lecture #1: Introduction & Motivation - Recording [22-11-2021]

Understanding Catastrophic Forgetting

Slides: 02_forgetting.pdf (PDF, 2MB)

  • Lecture #2: Understanding Catastrophic Forgetting - Slides: https://docs.google.com/presentation/d/1VLjx99qDhLpiNxdokz1exgDizM6PG_EYPz9PigduDRI/edit?usp=sharing

  • Lecture #2: Understanding Catastrophic Forgetting - Video Recording

Classic readings on catastrophic forgetting

  • Catastrophic Forgetting; Catastrophic Interference; Stability; Plasticity; Rehearsal, by Anthony Robins. Connection Science, 123--146, 1995.

  • Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks, by Robert French. In Proceedings of the 13th Annual Cognitive Science Society Conference, 173--178, 1991. [sparsity]

  • Lifelong Machine Learning, Second Edition, by Zhiyuan Chen and Bing Liu. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2018.

Check out the additional material page for popular reviews and surveys on continual learning, plus the classic references on catastrophic forgetting provided above.

  • Does Continual Learning = Catastrophic Forgetting?, by A. Thai, S. Stojanov, I. Rehg, and J. M. Rehg, arXiv, 2021.

  • An Empirical Study of Example Forgetting during Deep Neural Network Learning, by M. Toneva, A. Sordoni, R. T. des Combes, A. Trischler, Y. Bengio, and G. J. Gordon, ICLR, 2019.

  • Compete to Compute, by R. K. Srivastava, J. Masci, S. Kazerounian, F. Gomez, and J. Schmidhuber, NIPS, 2013 (Permuted MNIST task).

Scenarios & Benchmarks

Slides: 03_benchmarks.pdf (PDF, 2MB)

  • Lecture #3: Scenarios & Benchmarks - Slides: https://docs.google.com/presentation/d/1eoGzgsx7-5EGiOqAvD9eDSte1rR5GiUZQIBkNylAilM/edit?usp=sharing

  • Lecture #3: Scenarios & Benchmarks - Video Recording

CL scenarios

  • Three scenarios for continual learning, by G. M. van de Ven and A. S. Tolias, Continual Learning workshop at NeurIPS, 2018. Task/domain/class-incremental learning.

  • Continuous Learning in Single-Incremental-Task Scenarios, by D. Maltoni and V. Lomonaco, Neural Networks, vol. 116, pp. 56–73, 2019. New Classes (NC), New Instances (NI), New Instances and Classes (NIC) + Single-Incremental-Task (SIT) / Multi-Task (MT) / Multi-Incremental-Task (MIT).

  • Task-Free Continual Learning, by R. Aljundi, K. Kelchtermans, and T. Tuytelaars, CVPR, 2019.

  • Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams, by M. De Lange and T. Tuytelaars, ICCV, 2021. Data-incremental setting and comparisons with other CL scenarios.

Survey presenting CL scenarios

  • Continual Learning for Robotics: Definition, Framework, Learning Strategies, Opportunities and Challenges, by Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat and Natalia Díaz-Rodríguez. Information Fusion, 52--68, 2020. Section 3, in particular.

CL benchmarks

  • CORe50: a New Dataset and Benchmark for Continuous Object Recognition, by V. Lomonaco and D. Maltoni, Proceedings of the 1st Annual Conference on Robot Learning, vol. 78, pp. 17–26, 2017.

  • OpenLORIS-Object: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning, by Q. She et al., ICRA, 2020.

  • Incremental Object Learning From Contiguous Views, by S. Stojanov et al., CVPR, 2019. CRIB benchmark.

  • Stream-51: Streaming Classification and Novelty Detection From Videos, by R. Roady, T. L. Hayes, H. Vaidya, and C. Kanan, CVPR, 2019.

Evaluation & Metrics

Slides: 04_evaluation.pdf (PDF, 2MB)

  • Lecture #4: Evaluation & Metrics - Slides: https://docs.google.com/presentation/d/1xqJTAMFcB-XHQ-Jouya5uiGR94jRUnkVjDYAXdGDiUA/edit?usp=sharing

  • Lecture #4: Evaluation & Metrics - Video Recording

  • Efficient Lifelong Learning with A-GEM, by A. Chaudhry, M. Ranzato, M. Rohrbach, and M. Elhoseiny, ICLR, 2019. Evaluation protocol with "split by experiences".

  • Gradient Episodic Memory for Continual Learning, by D. Lopez-Paz and M. Ranzato, NIPS, 2017. Popular formalization of ACC, BWT, FWT.

  • CLEVA-Compass: A Continual Learning EValuation Assessment Compass to Promote Research Transparency and Comparability, by M. Mundt, S. Lang, Q. Delfosse, and K. Kersting, arXiv, 2021.

  • Don't forget, there is more than forgetting: new metrics for Continual Learning, by N. Díaz-Rodríguez, V. Lomonaco, D. Filliat, and D. Maltoni, arXiv, 2018. Definition of additional metrics.

Methodologies [part 1]

Slides: 05_methodologies_part1.pdf (PDF, 2MB)

  • Lecture #5: Methodologies [Part 1] - Slides: https://docs.google.com/presentation/d/1bxRDyMIZbJ08ZZnhMtzvxQSSYTAlM3XZZA2I2yYzT34/edit?usp=sharing

  • Lecture #5: Methodologies [Part 1] - Video Recording

Replay

  • GDumb: A Simple Approach that Questions Our Progress in Continual Learning, by A. Prabhu, P. H. S. Torr, and P. K. Dokania, ECCV, 2020.

  • Online Continual Learning with Maximal Interfered Retrieval, by R. Aljundi et al., NeurIPS, 2019.

Latent replay

  • Latent Replay for Real-Time Continual Learning, by Lorenzo Pellegrini, Gabriele Graffieti, Vincenzo Lomonaco and Davide Maltoni, IROS, 2020.

Generative replay

  • Continual Learning with Deep Generative Replay, by H. Shin, J. K. Lee, J. Kim, and J. Kim, NeurIPS, 2017.

  • Brain-inspired replay for continual learning with artificial neural networks, by G. M. van de Ven, H. T. Siegelmann, and A. S. Tolias, Nature Communications, 2020.

Methodologies [part 2]

Slides: 06_methodologies_part2.pdf (PDF, 2MB)

  • Lecture #6: Methodologies [Part 2] - Slides: https://docs.google.com/presentation/d/1SBin-JySTzRuVX2X3BdjLT2-D93xevel29RlMpEPTsA/edit?usp=sharing

  • Lecture #6: Methodologies [Part 2] - Video Recording

L1, L2, Dropout

  • An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks, by Goodfellow et al., 2015.

  • Understanding the Role of Training Regimes in Continual Learning, by Mirzadeh et al., NeurIPS, 2020.

Regularization strategies

  • Learning without Forgetting, by Li et al., TPAMI, 2017.

  • Overcoming catastrophic forgetting in neural networks, by Kirkpatrick et al., PNAS, 2017.

  • Continual Learning Through Synaptic Intelligence, by Zenke et al., 2017.

  • Continual learning with hypernetworks, by von Oswald et al., ICLR, 2020.

Architectural strategies

  • Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches, by Lomonaco et al., CLVision Workshop at CVPR, 2020. CWR*.

  • Progressive Neural Networks, by Rusu et al., arXiv, 2016.

  • PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning, by Mallya et al., CVPR, 2018.

  • Overcoming catastrophic forgetting with hard attention to the task, by Serra et al., ICML, 2018.

  • Supermasks in Superposition, by Wortsman et al., NeurIPS, 2020.

Methodologies [part 3] & Applications

Slides: 07_methodologies_part3.pdf (PDF, 3MB)

  • Lecture #7: Methodologies [Part 3], Applications & Tools - Slides: https://docs.google.com/presentation/d/1uEso1T1_ONqOIwQvCQvOOpgxwWQ3wWrWEVu5cxsbq_Y/edit?usp=sharing

  • Lecture #7: Methodologies [Part 3], Applications & Tools - Video Recording

Hybrid strategies

  • Gradient Episodic Memory for Continual Learning, by Lopez-Paz et al., NeurIPS, 2017. GEM.

  • iCaRL: Incremental Classifier and Representation Learning, by Rebuffi et al., CVPR, 2017.

  • Progress & Compress: A scalable framework for continual learning, by Schwarz et al., ICML, 2018.

  • Latent Replay for Real-Time Continual Learning, by L. Pellegrini et al., IROS, 2020. AR1*.

Applications

  • Continual Learning at the Edge: Real-Time Training on Smartphone Devices, by L. Pellegrini et al., ESANN, 2021.

  • Continual Learning in Practice, by T. Diethe et al., Continual Learning Workshop at NeurIPS, 2018.

  • Startups / Companies: CogitAI, Neurala, Gantry

  • Tools / Libraries: Avalanche, Continuum, Sequoia, CL-Gym

Frontiers in Continual Learning

Slides: 08_frontiers.pdf (PDF, 4MB)

  • Lecture #8: Frontiers in Continual Learning - Slides: https://docs.google.com/presentation/d/1IW5EcWx-ZUDEk4RfeL6t3fbAhXlLfkjX_IaxJDeg770/edit?usp=sharing

  • Lecture #8: Frontiers in Continual Learning - Video Recording

  • Embracing Change: Continual Learning in Deep Neural Networks, by Hadsell et al., Trends in Cognitive Sciences, 2020. Continual meta-learning / meta-continual learning.

  • Towards Continual Reinforcement Learning: A Review and Perspectives, by Khetarpal et al., arXiv, 2020.

  • Continual Unsupervised Representation Learning, by D. Rao et al., NeurIPS, 2019.

Distributed Continual Learning

  • Ex-Model: Continual Learning from a Stream of Trained Models, by Carta et al., arXiv, 2021.

Continual Sequence Learning

  • Continual learning for recurrent neural networks: An empirical evaluation, by Cossu et al., Neural Networks, vol. 143, pp. 607–627, 2021.

  • Continual Learning with Echo State Networks, by Cossu et al., ESANN, 2021.

Invited Lectures

  • Guest Lectures - Slides: https://docs.google.com/presentation/d/1ftEHCllRS9cQ8NBU2zT3BjhjQnAqQ8GxfD9zruRG9kA/edit?usp=sharing

  • Guest Lectures - Video Recording

Avalanche Dev Day

  • Avalanche Dev Day 2021 - Slides: https://docs.google.com/presentation/d/1wXPHwii8HZHgdwm4ADVfK_tutTdCFxygnjSUQJGKsvA/edit?usp=sharing

  • Avalanche Dev Day 2021 - Video Recording

Software

  • Avalanche: an End-to-End Library for Continual Learning, the software library based on PyTorch used for the coding sessions of this course.

  • ContinualAI Colab notebooks: coding continual learning from scratch in notebooks.