ADVENT is a flexible technique for bridging the gap between two different domains through entropy minimization.
Models trained only on the source domain tend to produce over-confident, i.e., low-entropy, predictions on source-like images and under-confident, i.e., high-entropy, predictions on target-like ones. Consequently, by minimizing the entropy of predictions on the target domain, we make the feature distributions of the two domains more similar.
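To make the low- vs. high-entropy distinction concrete, the sketch below computes a per-pixel Shannon entropy map from a softmax segmentation output, normalized to [0, 1] by the maximum possible entropy log(C). The function name and normalization choice are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def entropy_map(probs, eps=1e-12):
    """Per-pixel Shannon entropy of a softmax output.

    probs: array of shape (C, H, W) with class probabilities summing to 1
    over the channel axis. Returns an (H, W) map normalized to [0, 1] by
    dividing by log(C), the entropy of a uniform distribution.
    """
    num_classes = probs.shape[0]
    ent = -np.sum(probs * np.log(probs + eps), axis=0)  # (H, W)
    return ent / np.log(num_classes)

# A confident (source-like) pixel has near-zero entropy, while a pixel
# with a uniform prediction (target-like) has maximal entropy.
confident = np.zeros((3, 1, 1))
confident[0] = 1.0                     # all mass on one class
uniform = np.full((3, 1, 1), 1.0 / 3)  # mass spread evenly
```

Minimizing the mean of such a map over unlabeled target images pushes the network toward confident, source-like predictions on the target domain.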
Increasing the amount of annotated training data has consistently been shown to improve the performance of deep neural networks (DNNs).
Here we address unsupervised domain adaptation (UDA), a more challenging setting in which we have access to labeled source samples but only unlabeled target samples. As source data we use images generated by a simulator or video-game engine, while as target data we use real images from car-mounted cameras.
We present our two proposed approaches for entropy minimization, based on (i) an unsupervised entropy loss and (ii) adversarial training. To build our models, we start from existing semantic segmentation frameworks and add a network branch dedicated to domain adaptation.
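The two approaches can be sketched as follows: approach (i) directly minimizes the mean per-pixel entropy of the softmax output on unlabeled target images, while approach (ii) instead converts predictions into weighted self-information maps and trains a domain discriminator on them. This is a minimal numpy sketch of the two quantities involved; the function names are our own, and a real implementation would combine these terms with a supervised segmentation loss on the source domain.

```python
import numpy as np

def softmax(logits, axis=0):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy_loss(target_logits, eps=1e-12):
    """Approach (i): unsupervised entropy loss, the mean per-pixel
    Shannon entropy of the segmentation softmax on an unlabeled
    target image of logits with shape (C, H, W)."""
    p = softmax(target_logits, axis=0)
    ent = -np.sum(p * np.log(p + eps), axis=0)  # (H, W)
    return float(ent.mean())

def self_information_map(target_logits, eps=1e-12):
    """Approach (ii): per-class weighted self-information maps
    -p * log(p), shape (C, H, W). In the adversarial variant these
    maps, rather than raw predictions, are fed to a discriminator
    that tries to tell source from target (an assumption of this
    sketch); the segmentation network is trained to fool it."""
    p = softmax(target_logits, axis=0)
    return -p * np.log(p + eps)
```

Note that summing the self-information map over classes recovers the entropy map, which is why the two approaches optimize closely related objectives.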