Recent Posts

  1. Unsupervised Domain Adaptation by Backpropagation

    In supervised learning, we train neural networks on a ton of labelled examples and test the accuracy of the trained model on a held-out test set, commonly a random 20% split of the available labelled data. This works well when the deployed model sees the same distribution as the training data, but in practice it often doesn’t. Techniques for handling this mismatch are called “domain adaptation”. I’ll explain a domain adaptation technique proposed by Ganin and Lempitsky that needs only an unlabelled data set drawn from the distribution the model will face at deployment; a minimal sketch of its core mechanism follows below.


    Read more ⟶
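
    Ganin and Lempitsky’s central trick is a gradient reversal layer: an identity map on the forward pass whose backward pass negates (and scales) the gradient, so the feature extractor is pushed toward features a domain classifier cannot tell apart. Here is a minimal sketch of that layer, assuming PyTorch; the names `GradReverse` and `lambd` are illustrative, not from the post.

    ```python
    import torch

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass; negated, scaled gradient on the backward pass."""

        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd    # scale applied to the reversed gradient
            return x.view_as(x)  # pass features through unchanged

        @staticmethod
        def backward(ctx, grad_output):
            # Flip the gradient flowing back into the feature extractor,
            # making the domain classifier's objective adversarial.
            return -ctx.lambd * grad_output, None  # no gradient w.r.t. lambd

    # Hypothetical usage: place the layer between a feature extractor and
    # a domain classifier, e.g.
    #   features = encoder(x)
    #   domain_logits = domain_head(GradReverse.apply(features, 1.0))
    ```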