# Invited Talks

{% embed url="https://www.youtube.com/watch?ab_channel=ContinualAI&v=_MYppmNaS1k" %}
Guest Lectures - Video Recording
{% endembed %}

{% embed url="https://docs.google.com/presentation/d/1ftEHCllRS9cQ8NBU2zT3BjhjQnAqQ8GxfD9zruRG9kA/edit?usp=sharing" %}
Guest Lectures - Slides
{% endembed %}

### Ghada Sokar

**Title**: ***"**&#x41;ddressing the Stability-Plasticity Dilemma in Rehearsal-Free Continual Learning"*

**Abstract**: *Catastrophic forgetting is one of the main challenges in enabling deep neural networks to learn a set of tasks sequentially. Moreover, deploying continual learning models in real-world applications requires considering model efficiency. Continual learning agents should learn new tasks and preserve old knowledge with minimal computational and memory costs. This requires the agent to adapt to new tasks quickly and preserve old knowledge without revisiting its data. These two requirements compete with each other. In this talk, we will discuss the challenges of solving the stability-plasticity dilemma in the rehearsal-free setting and how to address the requirements needed for building efficient agents. I will present how sparse neural networks are promising for this setting and show the results of current sparse continual learning approaches. Finally, I will discuss the potential of detecting the relation between previous and current tasks in solving the stability-plasticity dilemma.*
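As a deliberately simplified illustration of why sparsity helps with the stability-plasticity dilemma, the sketch below allocates a disjoint sparse subnetwork per task within a shared weight matrix: weights claimed by earlier tasks stay frozen (stability), while each new task gets its own trainable subset (plasticity). This mirrors the spirit of sparse rehearsal-free methods in general; the function name, the density value, and the random allocation strategy are illustrative assumptions, not the specific method presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(42)
W = np.zeros((8, 8))           # shared weight matrix
free = np.ones_like(W, bool)   # positions not yet claimed by any task
masks = {}                     # task id -> binary mask over W

def allocate_task(task_id, density=0.25):
    """Claim a random sparse subset of the still-free weights for a new task.

    Earlier tasks' weights are never touched again, so old knowledge is
    preserved without storing or replaying old data.
    """
    idx = np.argwhere(free)                      # candidate positions
    k = min(int(density * W.size), len(idx))     # how many to claim
    chosen = idx[rng.choice(len(idx), size=k, replace=False)]
    mask = np.zeros_like(W, bool)
    mask[tuple(chosen.T)] = True
    free[mask] = False                           # remove from the free pool
    masks[task_id] = mask
    return mask

m1 = allocate_task("task1")
m2 = allocate_task("task2")
assert not (m1 & m2).any()  # disjoint subnetworks: no interference
```

During training on a given task, gradient updates would simply be multiplied by that task's mask, so only its own subnetwork changes.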

{% file src="https://3849168153-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MNmnUsYueCOx_WDehfy%2Fuploads%2F7106TJAP4Od64GAkpZCA%2FGhada_guest_lecture_CL_Course.pdf?alt=media&token=0e15d72e-3b19-4bac-bd88-8bb136108a1e" %}

![Ghada Sokar](https://3849168153-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MNmnUsYueCOx_WDehfy%2Fuploads%2FYRybqhyilU9wrThoc9rs%2Fdownload.jpg?alt=media\&token=008b620e-504c-466d-a272-06ea5b79591d)

[**Ghada Sokar**](https://research.tue.nl/en/persons/ghada-sokar) *is a Ph.D. student in the Department of Mathematics and Computer Science at Eindhoven University of Technology, the Netherlands. Her research focuses on continual lifelong learning, sparse neural networks, few-shot learning, and reinforcement learning. She is a teaching assistant at Eindhoven University of Technology, contributing to several machine learning courses, and a member of the Inclusion & Diversity committee at ContinualAI. Previously, she was a research scientist at Siemens Digital Industries Software.*

### Gido van de Ven

**Title**: *"Using Generative Models for Continual Learning"*

**Abstract**: *Incrementally learning from non-stationary data, referred to as ‘continual learning’, is a key feature of natural intelligence, but an unsolved problem in deep learning. Particularly challenging for deep neural networks is ‘class-incremental learning’, whereby a network must learn to distinguish between classes that are not observed together. In this guest lecture, I will discuss two ways in which generative models can be used to address the class-incremental learning problem. First, I will cover ‘generative replay’. With this popular approach, two models are learned: a classifier network and an additional generative model. Then, when learning new classes, samples from the generative model are interleaved – or replayed – along with the training data of the new classes. Second, I will discuss a more recently proposed approach for class-incremental learning: ‘generative classification’. With this approach, rather than using a generative model indirectly for generating samples to train a discriminative classifier on (as is done with generative replay), the generative model is used directly to perform classification using Bayes’ rule.*
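To make "generative classification" concrete, the sketch below replaces the deep generative models discussed in the lecture with per-class diagonal Gaussians (a simplifying assumption for brevity) and classifies via Bayes' rule, picking the class c that maximizes log p(x|c) + log p(c). The function names and toy data are illustrative, not taken from the talk.

```python
import numpy as np

def fit_class_gaussians(X, y):
    """Fit one diagonal Gaussian per class (a stand-in for a deeper
    generative model such as a VAE trained per class)."""
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        models[int(c)] = (Xc.mean(axis=0), Xc.std(axis=0) + 1e-6)
    return models

def generative_classify(x, models, priors=None):
    """Classify x via Bayes' rule: argmax_c log p(x|c) + log p(c)."""
    classes = sorted(models)
    if priors is None:  # uniform prior over the classes seen so far
        priors = {c: 1.0 / len(classes) for c in classes}
    scores = []
    for c in classes:
        mu, sigma = models[c]
        # Log-density of a diagonal Gaussian.
        log_lik = -0.5 * np.sum(((x - mu) / sigma) ** 2
                                + np.log(2 * np.pi * sigma ** 2))
        scores.append(log_lik + np.log(priors[c]))
    return classes[int(np.argmax(scores))]

# Toy data: class 0 centered at (-2, -2), class 1 at (+2, +2).
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 0.5, (50, 2)),
                    rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
models = fit_class_gaussians(X, y)
print(generative_classify(np.array([2.1, 1.9]), models))  # → 1
```

Because each class has its own generative model, classes never need to be observed together: a new class can be added by fitting one more model, which is what makes this approach attractive for the class-incremental setting.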

{% file src="https://3849168153-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MNmnUsYueCOx_WDehfy%2Fuploads%2FIaFIXILXyNX2gWucwN1x%2Fslides_CLcourse_20Dec.pdf?alt=media&token=a3e0e6e9-0a27-44f5-a0a4-23df35329106" %}

![Gido van de Ven](https://3849168153-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MNmnUsYueCOx_WDehfy%2Fuploads%2FdAVNjV55UVCAZiTsKWvB%2FBb4oVjkM.jpg?alt=media\&token=2c44c6a5-c700-4f93-97d8-188e940f9cec)

[Gido van de Ven](https://scholar.google.com/citations?user=3k0l15MAAAAJ\&hl=en) *is a postdoctoral researcher in the Center for Neuroscience and Artificial Intelligence at the Baylor College of Medicine (Houston, USA), and a visiting researcher in the Computational and Biological Learning Lab at the University of Cambridge (UK). In his research, he aims to use insights and intuitions from neuroscience to make the behavior of deep neural networks more human-like. In particular, Gido is interested in the problem of continual learning, and generative models are his principal tool to address this problem. Previously, for his doctoral research, he used optogenetics and electrophysiological recordings in mice to study the role of replay in memory consolidation in the brain.*
