Abstract: Incrementally learning from non-stationary data, referred to as ‘continual learning’, is a key feature of natural intelligence, but an unsolved problem in deep learning. Particularly challenging for deep neural networks is ‘class-incremental learning’, whereby a network must learn to distinguish between classes that are not observed together. In this guest lecture, I will discuss two ways in which generative models can be used to address the class-incremental learning problem. First, I will cover ‘generative replay’. With this popular approach, two models are learned: a classifier network and an additional generative model. Then, when learning new classes, samples from the generative model are interleaved – or replayed – with the training data of the new classes. Second, I will discuss a more recently proposed approach for class-incremental learning: ‘generative classification’. With this approach, rather than using the generative model indirectly to generate samples on which a discriminative classifier is trained (as with generative replay), the generative model is used directly to perform classification via Bayes’ rule.
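The idea behind generative classification can be sketched in a few lines. This is a minimal illustration only: it uses a simple class-conditional diagonal Gaussian as the generative model (a stand-in for the deep generative models discussed in the lecture), and all names in it are hypothetical. Classification is performed via Bayes’ rule, predicting the class c that maximises log p(x | c) + log p(c).

```python
import numpy as np

class GenerativeClassifier:
    """Toy generative classifier: one diagonal Gaussian per class,
    predictions via Bayes' rule (hypothetical sketch, not the
    lecture's actual model)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        # Per-class generative model: mean and variance of each feature.
        self.means = {c: X[y == c].mean(axis=0) for c in self.classes}
        self.vars = {c: X[y == c].var(axis=0) + 1e-6 for c in self.classes}
        # Class priors p(c), estimated from class frequencies.
        self.log_priors = {c: np.log((y == c).mean()) for c in self.classes}
        return self

    def log_likelihood(self, X, c):
        # log p(x | c) under a diagonal Gaussian.
        mu, var = self.means[c], self.vars[c]
        return -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(axis=1)

    def predict(self, X):
        # Bayes' rule: argmax_c  log p(x | c) + log p(c).
        scores = np.stack(
            [self.log_likelihood(X, c) + self.log_priors[c] for c in self.classes],
            axis=1,
        )
        return self.classes[scores.argmax(axis=1)]
```

Because each class keeps its own generative model, classes never need to be observed together during training, which is what makes this approach attractive for the class-incremental setting.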