Semantic Self-adaptation: Enhancing Generalization with a Single Sample
Description
The lack of out-of-domain generalization is a critical weakness of deep networks for semantic segmentation. Previous studies relied on the assumption of a static model, i.e., once the training process is complete, model parameters remain fixed at test time. In this work, we challenge this premise with a self-adaptive approach for semantic segmentation that adjusts the inference process to each input sample. Self-adaptation operates on two levels. First, it fine-tunes the parameters of convolutional layers to the input image using consistency regularization. Second, in Batch Normalization layers, self-adaptation interpolates between the training and the reference distribution derived from a single test sample. Despite both techniques being well known in the literature, their combination sets new state-of-the-art accuracy on synthetic-to-real generalization benchmarks. Our empirical study suggests that self-adaptation may complement the established practice of model regularization at training time for improving deep network generalization to out-of-domain data.
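The description mentions two mechanisms: interpolating Batch Normalization statistics between the training distribution and the single test sample, and fine-tuning convolutional parameters with a consistency objective. The following PyTorch sketch is only a rough illustration of how such a scheme could look; the function names, the mixing coefficient `alpha`, the consistency loss (symmetric KL between a sample and its horizontal flip), and the optimizer settings are assumptions made for this example and do not reproduce the authors' implementation (see the repository linked under Related Resources).

```python
# Minimal, assumption-laden sketch of single-sample self-adaptation.
# `model` is assumed to be a segmentation network returning raw logits
# of shape (1, C, H, W) for an input image tensor of shape (1, 3, H, W).

import copy
import torch
import torch.nn.functional as F


def adapt_bn_statistics(model, image, alpha=0.9):
    """Mix stored training statistics with statistics of the test image.

    PyTorch BatchNorm updates running stats in train mode as
        running <- (1 - momentum) * running + momentum * batch,
    so setting momentum = 1 - alpha yields
        stats = alpha * training_stats + (1 - alpha) * test_stats.
    The value of alpha here is an illustrative choice, not the paper's.
    """
    for m in model.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            m.momentum = 1.0 - alpha
            m.train()                      # enable running-stat updates
    with torch.no_grad():
        model(image)                       # one pass mixes in test statistics
    model.eval()                           # inference now uses mixed statistics


def adapt_parameters(model, image, steps=10, lr=1e-4):
    """Fine-tune convolutional parameters on the single test image using a
    consistency objective (agreement between the image and its horizontal
    flip; the augmentation and loss are assumptions for this sketch)."""
    adapted = copy.deepcopy(model)         # per-sample copy; base model stays fixed
    adapted.eval()
    conv_params = [p for m in adapted.modules()
                   if isinstance(m, torch.nn.Conv2d)
                   for p in m.parameters()]
    optim = torch.optim.SGD(conv_params, lr=lr)
    for _ in range(steps):
        logits = adapted(image)
        logits_flip = adapted(torch.flip(image, dims=[-1]))
        p = F.log_softmax(logits, dim=1)
        q = F.log_softmax(torch.flip(logits_flip, dims=[-1]), dim=1)
        # Symmetric KL divergence between the two views as consistency loss
        loss = (F.kl_div(p, q, log_target=True, reduction="batchmean")
                + F.kl_div(q, p, log_target=True, reduction="batchmean"))
        optim.zero_grad()
        loss.backward()
        optim.step()
    return adapted


# Example usage (hypothetical): adapt to one test image, then predict.
# adapt_bn_statistics(model, image)
# adapted = adapt_parameters(model, image)
# prediction = adapted(image).argmax(dim=1)
```

A natural design choice, consistent with the per-sample framing above, is to adapt a copy of the model for each test image so that the trained parameters and statistics remain unchanged across samples.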
DFG subject classification
4.43-04 Artificial Intelligence and Machine Learning Methods
4.43-05 Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Related third party funded projects
EC/H2020 | 866008 | RED
HMWK | 500/10.001-(00012) | TAM - TP Roth
Related Resources
- Is described by: arXiv: https://arxiv.org/abs/2208.05788
- Is described by: https://openreview.net/forum?id=ILNqQhGbLx
- Is described by: https://github.com/visinf/self-adaptive
Collections
- Segmentation