Visual Domain Adaptation Challenge

(VisDA-2018)



News

Introducing the 2018 VisDA Challenge! More dates and details are coming soon.

For details about last year's challenge and winners, see the VisDA 2017 challenge page.

  • May 7: Training data, validation data, and DevKits released
  • April 9: Registration opens

Overview

We are pleased to announce the 2018 Visual Domain Adaptation (VisDA2018) Challenge! It is well known that the success of machine learning methods on visual recognition tasks is highly dependent on access to large labeled datasets. Unfortunately, performance often drops significantly when the model is presented with data from a new deployment domain which it did not see in training, a problem known as dataset shift. The VisDA challenge aims to test domain adaptation methods’ ability to transfer source knowledge and adapt it to novel target domains.


Caption: An example of a domain adaptation problem for object classification with a synthetic source (train) domain and a real target (test) domain. Unsupervised Domain Adaptation methods aim to use labeled samples from the train domain and large volumes of unlabeled samples from the test domain to reduce prediction errors on the test domain.
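
For readers new to this setting, the sketch below illustrates one well-known unsupervised domain adaptation baseline, domain-adversarial training with a gradient reversal layer (DANN). It is a minimal illustration only, assuming PyTorch and random stand-in tensors in place of the actual VisDA loaders; it is not an official baseline for the challenge.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; flips the gradient sign on the way
        back, so the feature extractor learns to fool the domain critic."""

        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    features = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
    classifier = nn.Linear(256, 12)    # 12 object categories (illustrative)
    critic = nn.Linear(256, 2)         # source vs. target domain
    params = [*features.parameters(), *classifier.parameters(), *critic.parameters()]
    opt = torch.optim.SGD(params, lr=1e-3)
    ce = nn.CrossEntropyLoss()

    # Stand-ins for real loaders: labeled synthetic source, unlabeled real target.
    xs, ys = torch.randn(16, 3, 32, 32), torch.randint(0, 12, (16,))
    xt = torch.randn(16, 3, 32, 32)

    for step in range(100):
        fs, ft = features(xs), features(xt)
        cls_loss = ce(classifier(fs), ys)              # supervised, source only
        # The critic learns to tell the domains apart while, through the
        # reversed gradient, the features learn to make them indistinguishable.
        f_all = GradReverse.apply(torch.cat([fs, ft]), 1.0)
        dom_labels = torch.cat([torch.zeros(16), torch.ones(16)]).long()
        dom_loss = ce(critic(f_all), dom_labels)
        opt.zero_grad()
        (cls_loss + dom_loss).backward()
        opt.step()

Sharing one feature extractor between the class head and the gradient-reversed domain critic is what pushes source and target features toward a common representation without any target labels.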


The competition will run from May through September 2018, and the top-performing teams will be invited to present their results at the TASK-CV workshop at ECCV 2018 in Munich, Germany. This year’s challenge focuses on synthetic-to-real visual domain shifts and includes two tracks, covering two different domain shift problems: open-set classification and detection. Participants are welcome to enter one or both tracks.


Open-Set Classification Track

Last year’s challenge featured a closed-set classification task for synthetic-to-real adaptation, where all object categories were known ahead of time. The top-performing teams in this track developed CNN models that achieved impressive adaptation results. This year, we push beyond closed-set classification and propose a novel open-set classification task. In this track, the goal is to develop an unsupervised domain adaptation method for object classification, where the target domains contain images of additional unknown categories not present in the source dataset.
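
For concreteness, one simple way to produce open-set predictions is to reserve an extra "unknown" label and fall back to it whenever the classifier's maximum softmax confidence over the known classes is low. The sketch below assumes PyTorch; the threshold and the class count of 12 are illustrative placeholders, and the official DevKit defines the actual label conventions.

    import torch

    NUM_KNOWN = 12        # known source categories (illustrative count)
    UNKNOWN = NUM_KNOWN   # extra index reserved for "unknown"

    def predict_open_set(logits: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
        """Return indices in [0, NUM_KNOWN], where NUM_KNOWN means "unknown".

        A sample falls back to "unknown" when the maximum softmax
        probability over the known classes is below the threshold."""
        probs = logits.softmax(dim=1)
        conf, pred = probs.max(dim=1)
        pred[conf < threshold] = UNKNOWN
        return pred

    # Example: four target images scored by some source-trained classifier.
    logits = torch.tensor([
        [4.0, 0.1, 0.1] + [0.0] * 9,   # confident         -> class 0
        [0.2, 0.3, 0.25] + [0.2] * 9,  # flat distribution -> unknown
        [0.0, 5.0, 0.0] + [0.0] * 9,   # confident         -> class 1
        [0.5, 0.4, 0.45] + [0.4] * 9,  # flat distribution -> unknown
    ])
    print(predict_open_set(logits))    # tensor([ 0, 12,  1, 12])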




Detection Track

In this track, the goal is to develop a detection model that adapts from synthetic to real imagery. The task entails localizing objects from each of the 12 learned categories in novel images by predicting their class labels and bounding boxes.
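
Detection outputs of this form are conventionally scored by matching each predicted box to ground truth via intersection-over-union (IoU). The snippet below is a minimal sketch of that computation, assuming [x1, y1, x2, y2] corner coordinates; the official DevKit remains the authoritative reference for the evaluation protocol.

    def iou(box_a, box_b):
        """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter)

    # A detection is conventionally counted as correct when its class matches
    # and IoU >= 0.5 (the PASCAL VOC criterion); the DevKit is authoritative.
    print(iou([0, 0, 10, 10], [5, 5, 15, 15]))   # 25 / 175 ≈ 0.143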




Both tracks feature the same datasets and focus on synthetic-to-real domain adaptation. Participants will be given three datasets, each containing the same object categories:

  • training domain (source): synthetic 2D renderings of 3D models generated from different angles and with different lighting conditions
  • validation domain (target): a photo-realistic or real-image validation domain that participants can use to evaluate performance of their domain adaptation methods
  • test domain (target): a new real-image test domain, different from the validation domain and without labels. The test set will be released shortly before the end of the competition

We use different target domains for validation and testing in order to evaluate each proposed model as an out-of-the-box domain adaptation tool. This setting more closely mimics realistic deployment scenarios, where the target domain is unknown at training time, and it discourages algorithms designed to handle one particular target domain.


Workshop

The challenge is associated with the 5th annual TASK-CV workshop, held at ECCV 2018 in Munich, Germany. Challenge participants are asked to submit a 2-page abstract to the workshop in order to be considered for prizes and to receive an invitation to give a talk about their results at a special session of the workshop. Please see the workshop website for submission guidelines.



Organizers

Kate Saenko (Boston University), Ben Usman (Boston University), Xingchao Peng (Boston University), Neela Kaushik (Boston University), Kuniaki Saito (The University of Tokyo), Judy Hoffman (UC Berkeley)