Incremental Open-set Domain Adaptation
Sayan Rakshit,
Hmrishav Bandyopadhyay, Nibaran Das, Biplab Banerjee
arXiv, 2024
paper
bibTeX
Catastrophic forgetting destabilizes neural networks that learn visual domains sequentially:
when a model is trained on a new domain, its performance on previously learned domains degrades.
We highlight this weakness and develop a forgetting-resistant incremental learning strategy.
Specifically, we propose the new problem of unsupervised incremental open-set domain adaptation
(IOSDA) for image classification. The open-set condition adds complexity to incremental domain
adaptation because each target domain contains more classes than the source domain. In IOSDA,
the model is trained on a stream of domains, one phase at a time; at inference, it must handle
test data from all target domains without their domain identities being revealed.
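To make the protocol concrete, here is a minimal runnable sketch on synthetic data; the two-layer network, the entropy-minimization adaptation loss, and all sizes are illustrative stand-ins, not the paper's method:

```python
# A toy IOSDA protocol: labeled source first, then unlabeled open-set
# targets one phase at a time, evaluated pooled without domain identities.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
NUM_KNOWN = 5                      # source classes; targets add unknowns

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                      nn.Linear(64, NUM_KNOWN + 1))  # +1 = "unknown" bucket
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def make_domain(n=256, open_set=False):
    """Synthetic domain; open-set targets also contain the extra class."""
    hi = NUM_KNOWN + 1 if open_set else NUM_KNOWN
    return torch.randn(n, 32), torch.randint(0, hi, (n,))

# Phase 0: labeled source domain.
x_src, y_src = make_domain()
for _ in range(100):
    opt.zero_grad()
    F.cross_entropy(model(x_src), y_src).backward()
    opt.step()

# Phases 1..T: unlabeled targets; only the current domain is in memory.
targets = [make_domain(open_set=True) for _ in range(2)]
for x_tgt, _ in targets:                  # labels unavailable during training
    for _ in range(50):
        opt.zero_grad()
        p = F.softmax(model(x_tgt), dim=1)
        (-(p * p.clamp_min(1e-8).log()).sum(1)).mean().backward()  # entropy min.
        opt.step()

# Inference: pooled test data from all targets, domain identities hidden.
x_test = torch.cat([x for x, _ in targets])
y_test = torch.cat([y for _, y in targets])
acc = (model(x_test).argmax(1) == y_test).float().mean()
print(f"pooled target accuracy: {acc:.3f}")
```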
Open-Set Domain Adaptation Under Few Source-Domain Labeled Samples
Sayan Rakshit,
Balasubramanian S,
Hmrishav Bandyopadhyay, Piyush Bharambe, Sai Nandan Desetti, Biplab Banerjee, Subhasis Chaudhuri
CVPRW, 2022
paper
bibTeX
The notion of closed-set few-shot domain adaptation (FSDA), where only limited supervision is
available in the source domain, has recently been introduced. However, FSDA overlooks the fact
that the unlabeled target domain may contain new classes unseen in the source domain. To this
end, we introduce the novel problem of few-shot open-set domain adaptation (FosDA), where the
source domain contains a few labeled samples together with a large pool of unlabeled data, and
the target domain consists of test samples from both the known and the new categories.
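To make the setting concrete, here is a toy sketch of the FosDA data regime; the sizes, feature dimension, and the extra "unknown" label index are illustrative assumptions, not drawn from the paper:

```python
# The FosDA data regime on synthetic tensors: a few labeled source samples,
# a large unlabeled source pool, and open-set target test data.
import torch

torch.manual_seed(0)
KNOWN, SHOTS = 5, 3                   # known classes; labeled samples per class

# Source: a few labeled samples per known class plus a large unlabeled pool.
x_lab = torch.randn(KNOWN * SHOTS, 32)
y_lab = torch.arange(KNOWN).repeat_interleave(SHOTS)
x_unlab_src = torch.randn(2000, 32)   # no labels available

# Target: test samples from the known classes and from new (unknown) ones;
# the model must classify knowns and flag everything else as "unknown".
x_tgt = torch.randn(500, 32)
y_tgt = torch.randint(0, KNOWN + 1, (500,))   # index KNOWN = "unknown" bucket

print(x_lab.shape, y_lab.shape, x_unlab_src.shape, x_tgt.shape)
```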
FRIDA — Generative feature replay for incremental domain adaptation
Sayan Rakshit,
Anwesh Mohanty, Ruchika Chavhan, Biplab Banerjee, Gemma Roig, Subhasis Chaudhuri
CVIU, 2022
paper
bibTeX
We tackle the novel problem of incremental unsupervised domain adaptation (IDA) in this paper.
We assume that a labeled source domain and a sequence of unlabeled target domains are observed
incrementally, with the constraint that only the data of the current domain is available at any
given time. The goal is to preserve accuracy on all past domains while generalizing well to the current one.
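The mechanism named in the title can be sketched as follows; per-class Gaussian feature statistics stand in for FRIDA's learned generator, and the toy domains are labeled for brevity (in the actual IDA setting the targets are unlabeled):

```python
# Generative feature replay, schematically: after each domain, memorize
# per-class feature statistics and replay sampled features while training
# on the next domain, so past-domain performance is not overwritten.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
NUM_CLASSES, FEAT = 5, 16
extractor = nn.Sequential(nn.Linear(32, FEAT), nn.ReLU())
classifier = nn.Linear(FEAT, NUM_CLASSES)
opt = torch.optim.Adam(list(extractor.parameters()) +
                       list(classifier.parameters()), lr=1e-3)

replay_stats = []   # (mean, std, class) per past domain: the generator stand-in

def make_domain(n=256):
    return torch.randn(n, 32), torch.randint(0, NUM_CLASSES, (n,))

for x, y in [make_domain() for _ in range(3)]:       # domains arrive one by one
    for _ in range(100):
        opt.zero_grad()
        loss = F.cross_entropy(classifier(extractor(x)), y)
        for mu, std, c in replay_stats:              # replay past-domain features
            f_rep = mu + std * torch.randn(32, FEAT)
            loss = loss + F.cross_entropy(
                classifier(f_rep), torch.full((32,), c, dtype=torch.long))
        loss.backward()
        opt.step()
    with torch.no_grad():                            # memorize feature statistics
        f = extractor(x)
        for c in range(NUM_CLASSES):
            fc = f[y == c]
            if len(fc) > 1:
                replay_stats.append((fc.mean(0), fc.std(0), c))
print("finished incremental training with generative feature replay")
```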
Multi-Source Open-Set Deep Adversarial Domain Adaptation
Sayan Rakshit,
Dipesh Tamboli, Pragati Shuddhodhan Meshram, Biplab Banerjee, Gemma Roig, Subhasis Chaudhuri
ECCV, 2020
paper
bibTeX
We introduce the novel learning paradigm of multi-source open-set unsupervised domain adaptation
(MS-OSDA). Recently, the notion of single-source open-set domain adaptation (SS-OSDA), which
considers the presence of previously unseen open-set (unknown) classes in the target domain in
addition to the source-domain closed-set (known) classes, has drawn attention. In the SS-OSDA
setting, the labeled samples are assumed to be drawn from a single source. Yet it is more
plausible to assume that the labeled samples are distributed over multiple source domains;
existing SS-OSDA techniques cannot directly handle this more realistic scenario, given the
diversity among the source domains.
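As a concrete (if simplistic) illustration of the setting, the sketch below pools several labeled source domains and rejects low-confidence target samples as "unknown"; the confidence-threshold rule is a generic open-set stand-in, not the paper's adversarial method:

```python
# MS-OSDA setting, schematically: train on labels pooled from multiple
# source domains, then flag target samples whose maximum softmax
# confidence falls below a threshold as "unknown".
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
NUM_KNOWN, THRESH = 5, 0.5

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, NUM_KNOWN))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Labeled samples distributed over several (shifted) source domains.
sources = [(torch.randn(256, 32) + i, torch.randint(0, NUM_KNOWN, (256,)))
           for i in range(3)]
x_src = torch.cat([x for x, _ in sources])
y_src = torch.cat([y for _, y in sources])
for _ in range(100):
    opt.zero_grad()
    F.cross_entropy(model(x_src), y_src).backward()
    opt.step()

# Target contains known classes plus unseen (unknown) ones.
x_tgt = torch.randn(500, 32)
probs = F.softmax(model(x_tgt), dim=1)
conf, pred = probs.max(dim=1)
pred[conf < THRESH] = NUM_KNOWN      # extra index = "unknown"
print(f"{(pred == NUM_KNOWN).float().mean():.2%} of targets flagged unknown")
```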