Search Results for author: Dong-Wan Choi

Found 5 papers, 5 papers with code

Recall-Oriented Continual Learning with Generative Adversarial Meta-Model

1 code implementation • 5 Mar 2024 • Haneol Kang, Dong-Wan Choi

The stability-plasticity dilemma is a major challenge in continual learning, as it requires balancing the conflicting objectives of maintaining performance on previous tasks and learning new ones.

Continual Learning • Incremental Learning
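
As context for the entry above: the stability-plasticity trade-off is often formalized as a loss that combines a new-task term with a penalty for drifting from the previous model. The sketch below is a generic regularization-style illustration of that balance, not the paper's generative adversarial meta-model; the function and variable names are hypothetical.

```python
import torch
import torch.nn.functional as F

def continual_step(model, prev_model, x, y, lam=1.0):
    """One training step that balances plasticity (fitting the new
    task) against stability (staying close to the previous model)."""
    logits = model(x)
    new_task_loss = F.cross_entropy(logits, y)   # plasticity: learn the new task
    with torch.no_grad():
        prev_logits = prev_model(x)              # frozen snapshot of the old model
    # Stability: penalize drifting away from the old model's outputs
    # (a distillation-style surrogate for "not forgetting").
    stability_loss = F.mse_loss(logits, prev_logits)
    # lam trades the two objectives: larger lam favors stability.
    return new_task_loss + lam * stability_loss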

Teacher as a Lenient Expert: Teacher-Agnostic Data-Free Knowledge Distillation

1 code implementation • 18 Feb 2024 • Hyunjune Shin, Dong-Wan Choi

In this paper, we propose the teacher-agnostic data-free knowledge distillation (TA-DFKD) method, which aims for more robust and stable performance regardless of the teacher model.

Data-free Knowledge Distillation
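
For readers unfamiliar with the setting that TA-DFKD targets: a typical data-free KD loop trains the student on synthetic inputs produced by a generator, using only the teacher's outputs as supervision. The sketch below shows this generic pipeline under assumed network and hyperparameter choices; it is not the paper's specific method.

```python
import torch
import torch.nn.functional as F

def dfkd_step(generator, teacher, student, student_opt, batch_size=64, z_dim=100):
    """One distillation step with no real data: synthesize inputs,
    then train the student to match the teacher on them."""
    z = torch.randn(batch_size, z_dim)
    x_syn = generator(z)                      # synthetic training inputs
    with torch.no_grad():
        t_logits = teacher(x_syn)             # teacher supervises for free
    s_logits = student(x_syn)
    # Standard distillation objective: KL between output distributions.
    loss = F.kl_div(F.log_softmax(s_logits, dim=1),
                    F.softmax(t_logits, dim=1),
                    reduction="batchmean")
    student_opt.zero_grad()
    loss.backward()
    student_opt.step()
    return loss.item()
```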

Better Generalized Few-Shot Learning Even Without Base Data

1 code implementation • 29 Nov 2022 • Seong-Woong Kim, Dong-Wan Choi

In this paper, we overcome this limitation by proposing a simple yet effective normalization method that controls both the mean and variance of the weight distribution of novel classes without using any base samples, thereby achieving satisfactory performance on both novel and base classes.

Few-Shot Learning • Generalized Few-Shot Learning
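
The abstract above describes controlling the mean and variance of the novel-class classifier weights. A minimal sketch of that idea, assuming the target statistics are taken from the base-class weights already in the classifier (the paper's exact procedure may differ, and the names here are illustrative):

```python
import torch

def normalize_novel_weights(W_base, W_novel, eps=1e-8):
    """Shift and scale the novel-class weight rows so their overall
    mean and variance match those of the base-class weights."""
    mu_b, std_b = W_base.mean(), W_base.std()
    mu_n, std_n = W_novel.mean(), W_novel.std()
    # Standardize the novel weights, then re-express them in the
    # base-class statistics so base/novel logits are on a comparable scale.
    return (W_novel - mu_n) / (std_n + eps) * std_b + mu_b
```

Matching the scales this way keeps novel-class logits from being systematically larger or smaller than base-class ones, which is one way to avoid a bias toward either class group.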

Pool of Experts: Realtime Querying Specialized Knowledge in Massive Neural Networks

2 code implementations • 3 Jul 2021 • Hakbin Kim, Dong-Wan Choi

Despite the great success of deep learning technologies, training and delivering a practically serviceable model remains a highly time-consuming process.

Knowledge Distillation • Model Compression

Split-and-Bridge: Adaptable Class Incremental Learning within a Single Neural Network

1 code implementation • 3 Jul 2021 • Jong-Yeong Kim, Dong-Wan Choi

Continual learning has been a major problem in the deep learning community, where the main challenge is to effectively learn a series of newly arriving tasks without forgetting the knowledge of previous ones.

Class Incremental Learning • Incremental Learning • +1
