🔮 Invited Talks

Two Guest Lectures on Advanced CL Topics

Ghada Sokar

Title: "Addressing the Stability-Plasticity Dilemma in Rehearsal-Free Continual Learning"

Abstract: Catastrophic forgetting is one of the main challenges in enabling deep neural networks to learn a set of tasks sequentially. Moreover, deploying continual learning models in real-world applications requires considering model efficiency: continual learning agents should learn new tasks and preserve old knowledge with minimal computational and memory costs. This means the agent must adapt to new tasks quickly while preserving old knowledge without revisiting its data, and these two requirements compete with each other. In this talk, we will discuss the challenges of solving the stability-plasticity dilemma in the rehearsal-free setting and how to address the requirements for building efficient agents. I will show why sparse neural networks are promising for this setting and present results from current sparse continual learning approaches. Finally, I will discuss the potential of detecting the relation between previous and current tasks for solving the stability-plasticity dilemma.
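To make the dilemma concrete, below is a minimal PyTorch sketch of one rehearsal-free strategy in the spirit of the sparse approaches the talk covers: each task claims a sparse subnetwork whose weights are then frozen (stability), while gradient updates are restricted to the remaining free weights (plasticity). The model sizes, the toy data, and the magnitude-based weight-selection heuristic are illustrative assumptions, not the speaker's specific algorithm.

```python
import torch
import torch.nn as nn

# Illustrative sketch only (assumed sizes and heuristics, not the speaker's
# exact method): each task claims a sparse subnetwork that is then frozen,
# giving stability; gradients flow only through the remaining free weights,
# preserving plasticity, all without storing (rehearsing) old data.

class MaskedMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )
        # One boolean mask per parameter tensor; True = frozen (protected).
        self.frozen = [torch.zeros_like(p, dtype=torch.bool)
                       for p in self.parameters()]

    def forward(self, x):
        return self.net(x)

    def mask_gradients(self):
        # Plasticity: updates touch only weights no previous task has claimed.
        for p, frz in zip(self.parameters(), self.frozen):
            if p.grad is not None:
                p.grad[frz] = 0.0

    def freeze_task_weights(self, density=0.1):
        # Stability: after a task, freeze its top-magnitude free weights.
        for p, frz in zip(self.parameters(), self.frozen):
            k = max(1, int(density * p.numel()))
            scores = p.detach().abs().masked_fill(frz, float("-inf"))
            frz.view(-1)[scores.flatten().topk(k).indices] = True

# Toy task sequence (random stand-in data, purely for illustration).
tasks = [[(torch.rand(32, 784), torch.randint(0, 10, (32,)))] for _ in range(3)]

model = MaskedMLP()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
for task_loader in tasks:
    for x, y in task_loader:
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        model.mask_gradients()   # do not disturb weights of earlier tasks
        opt.step()
    model.freeze_task_weights()  # lock in this task's sparse subnetwork
```

Note that nothing about old tasks is stored except the boolean masks, which is what makes the setting rehearsal-free; the open question is how to allocate capacity so that the free weights remain sufficient for future tasks.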

Ghada Sokar is a Ph.D. student at the Department of Mathematics and Computer Science, Eindhoven University of Technology, the Netherlands. Her research focuses on continual learning; her current interests include continual lifelong learning, sparse neural networks, few-shot learning, and reinforcement learning. She is a teaching assistant at Eindhoven University of Technology, contributing to several machine learning courses, and a member of the Inclusion & Diversity committee at ContinualAI. Previously, she was a research scientist at Siemens Digital Industries Software.

Gido van de Ven

Title: "Using Generative Models for Continual Learning"

Abstract: Incrementally learning from non-stationary data, referred to as ‘continual learning’, is a key feature of natural intelligence, but an unsolved problem in deep learning. Particularly challenging for deep neural networks is ‘class-incremental learning’, whereby a network must learn to distinguish between classes that are not observed together. In this guest lecture, I will discuss two ways in which generative models can be used to address the class-incremental learning problem. First, I will cover ‘generative replay’. With this popular approach, two models are learned: a classifier network and an additional generative model. Then, when learning new classes, samples from the generative model are interleaved – or replayed – with the training data of the new classes. Second, I will discuss a more recently proposed approach for class-incremental learning: ‘generative classification’. With this approach, rather than using a generative model indirectly to generate samples on which a discriminative classifier is trained (as is done with generative replay), the generative model is used directly to perform classification using Bayes’ rule.
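As a companion to the lecture, here is a minimal, self-contained sketch of the generative-replay loop described above. The toy VAE, the model sizes, and the random stand-in data are assumptions made for brevity, not the lecturer's implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT = 8

class ToyVAE(nn.Module):
    """A deliberately tiny generative model over flattened inputs in [0, 1]."""
    def __init__(self, dim=784):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * LATENT)
        self.dec = nn.Linear(LATENT, dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar  # reconstruction logits

    def sample(self, n):
        return torch.sigmoid(self.dec(torch.randn(n, LATENT)))

classifier = nn.Linear(784, 10)
generator = ToyVAE()
opt = torch.optim.Adam(
    list(classifier.parameters()) + list(generator.parameters()), lr=1e-3
)

# Toy task sequence (random stand-in data, purely for illustration).
tasks = [[(torch.rand(32, 784), torch.randint(0, 10, (32,)))] for _ in range(2)]

prev_gen = prev_clf = None
for task_loader in tasks:
    for x, y in task_loader:
        if prev_gen is not None:
            # Replay: pseudo-samples of the old classes, labelled by the
            # previous classifier, are interleaved with the new data.
            with torch.no_grad():
                x_re = prev_gen.sample(x.size(0))
                y_re = prev_clf(x_re).argmax(dim=1)
            x, y = torch.cat([x, x_re]), torch.cat([y, y_re])
        recon, mu, logvar = generator(x)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        vae_loss = F.binary_cross_entropy_with_logits(recon, x) + kl
        loss = F.cross_entropy(classifier(x), y) + vae_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Frozen copies of both models generate the replay for the next task.
    prev_gen, prev_clf = copy.deepcopy(generator), copy.deepcopy(classifier)
```

Generative classification, the second approach in the abstract, drops the discriminative classifier entirely: a likelihood model p(x|y) is learned per class, and predictions follow from Bayes' rule, p(y|x) ∝ p(x|y)p(y).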

Gido van de Ven is a postdoctoral researcher in the Center for Neuroscience and Artificial Intelligence at the Baylor College of Medicine (Houston, USA), and a visiting researcher in the Computational and Biological Learning Lab at the University of Cambridge (UK). In his research, he aims to use insights and intuitions from neuroscience to make the behavior of deep neural networks more human-like. In particular, Gido is interested in the problem of continual learning, and generative models are his principal tool to address this problem. Previously, for his doctoral research, he used optogenetics and electrophysiological recordings in mice to study the role of replay in memory consolidation in the brain.
