Domain adaptation focuses on transferring knowledge from a source domain (with labeled data) to a target domain (with a different data distribution and often no labels). The goal is to build models that generalize effectively across domains despite distributional shifts. My research explores both unsupervised and source-free domain adaptation approaches, aiming to improve robustness and adaptability in real-world applications.
This work introduces a Feature-Aligned Maximum Classifier Discrepancy (FAMCD) framework that extends the classic Maximum Classifier Discrepancy method by explicitly aligning feature distributions between the source and target domains. The method reduces the domain gap through adversarial min-max training between the two task classifiers and the shared feature extractor, improving transfer performance without requiring target labels.
Highlights
Incorporates feature alignment to reduce inter-domain discrepancy.
Utilizes adversarial learning between classifiers for better boundary adaptation.
Achieves consistent improvements on standard domain adaptation benchmarks.
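The two losses at the heart of this approach can be sketched in a few lines. The following is a minimal illustrative sketch, not the actual FAMCD implementation: `classifier_discrepancy` is the standard MCD objective (mean L1 distance between the two classifiers' softmax outputs on target data), and `feature_alignment_loss` stands in for the alignment term with simple first-moment matching; all function names and the choice of alignment statistic are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def classifier_discrepancy(p1, p2):
    """MCD discrepancy: mean L1 distance between the two classifiers'
    predicted class distributions on the same target batch.
    The classifiers are trained to maximize this; the feature
    extractor is trained to minimize it."""
    return np.abs(p1 - p2).mean()

def feature_alignment_loss(src_feats, tgt_feats):
    """Illustrative alignment term (an assumption, not the paper's exact
    choice): squared distance between the mean source and mean target
    feature vectors, i.e. simple first-moment matching."""
    return float(np.sum((src_feats.mean(axis=0) - tgt_feats.mean(axis=0)) ** 2))
```

In an adversarial training loop, the classifier-update step would ascend `classifier_discrepancy` while the feature-extractor step would descend both it and `feature_alignment_loss`.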
This research advances Source-Free Domain Adaptation (SFDA) by combining Shuffle PatchMix augmentation with a confidence-margin weighted pseudo-labeling strategy. It improves generalization when the source data is inaccessible and only a pre-trained source model is available.
Highlights
Introduces Shuffle PatchMix to enhance local diversity in target samples.
Uses confidence-margin weighting to refine pseudo-label reliability.
Delivers improved adaptation performance under challenging target domains.
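The two ingredients above can be illustrated with a toy sketch. This is an assumed interpretation, not the paper's implementation: `shuffle_patchmix` splits an image into a grid, permutes the patches, and mixes the shuffled copy back with the original; `confidence_margin_weights` scores each pseudo-label by the gap between its top-1 and top-2 class probabilities, so ambiguous predictions contribute less. Function names, the grid size `n`, and the Beta mixing coefficient are all illustrative choices.

```python
import numpy as np

def shuffle_patchmix(img, n=4, alpha=0.5, rng=None):
    """Sketch of a Shuffle PatchMix-style augmentation: split the image
    into an n x n grid of patches, randomly permute them, then blend
    the shuffled image with the original using a Beta-sampled weight."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[0] // n, img.shape[1] // n
    patches = [img[i * h:(i + 1) * h, j * w:(j + 1) * w]
               for i in range(n) for j in range(n)]
    shuffled = np.zeros_like(img, dtype=float)
    for k, idx in enumerate(rng.permutation(len(patches))):
        i, j = divmod(k, n)
        shuffled[i * h:(i + 1) * h, j * w:(j + 1) * w] = patches[idx]
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    return lam * img + (1 - lam) * shuffled

def confidence_margin_weights(probs):
    """Weight each pseudo-label by the margin between the top-1 and
    top-2 predicted probabilities; high-margin (unambiguous)
    predictions get larger weight in the adaptation loss."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]
```

A sharply peaked prediction such as `[0.7, 0.2, 0.1]` receives weight 0.5, while a near-tie such as `[0.34, 0.33, 0.33]` receives only 0.01, so unreliable pseudo-labels are effectively down-weighted.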