Ryefield Books

Visual Domain Adaptation in the Deep Learning Era

Usually dispatched within 3 - 5 business days.

In Stock (756)

£52.79

Description

Solving problems with deep neural networks typically relies on massive amounts of labeled training data to achieve high performance. While huge volumes of unlabeled data can often be generated and are readily available, the cost of acquiring data labels remains high. Transfer learning (TL), and in particular domain adaptation (DA), has emerged as an effective solution to overcome the burden of annotation, exploiting the unlabeled data available from the target domain together with labeled data or pre-trained models from similar, yet different, source domains. The aim of this book is to provide an overview of such DA/TL methods applied to computer vision, a field whose popularity has increased significantly in the last few years. We set the stage by revisiting the theoretical background and some of the historical shallow methods before discussing and comparing different domain adaptation strategies that exploit deep architectures for visual recognition. We introduce the space of self-training-based methods that draw inspiration from the related fields of deep semi-supervised and self-supervised learning to solve the deep domain adaptation problem. Going beyond the classic domain adaptation problem, we then explore the rich space of problem settings that arise when applying domain adaptation in practice, such as partial or open-set DA, where source and target data categories do not fully overlap; continuous DA, where the target data comes as a stream; and so on. We next consider the least restrictive setting of domain generalization (DG), an extreme case where neither labeled nor unlabeled target data are available during training. Finally, we close by considering the emerging area of learning-to-learn and how it can be applied to further improve existing approaches to cross-domain learning problems such as DA and DG.
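
To give a flavour of the self-training idea mentioned above, the following minimal sketch (not taken from the book; the data, model choice, and confidence threshold are placeholder assumptions) shows pseudo-label-based adaptation: a classifier fitted on labeled source data assigns pseudo-labels to confident unlabeled target samples, which are then folded back into training.

    # Illustrative sketch of self-training for domain adaptation (synthetic data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Labeled source domain (synthetic).
    X_src = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
    y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)

    # Unlabeled target domain: same task, shifted distribution.
    X_tgt = rng.normal(loc=0.5, scale=1.0, size=(500, 2))

    # 1. Fit an initial model on the labeled source data only.
    clf = LogisticRegression().fit(X_src, y_src)

    # 2. Self-training rounds: pseudo-label confident target samples, retrain.
    for _ in range(3):
        probs = clf.predict_proba(X_tgt)
        confident = probs.max(axis=1) > 0.9   # confidence threshold (assumed value)
        pseudo_labels = probs.argmax(axis=1)

        X_train = np.vstack([X_src, X_tgt[confident]])
        y_train = np.concatenate([y_src, pseudo_labels[confident]])
        clf = LogisticRegression().fit(X_train, y_train)

    print(f"{confident.sum()} target samples pseudo-labeled in the last round")

The deep methods surveyed in the book replace the linear classifier with a neural network and add further machinery (consistency regularisation, feature alignment, and so on), but the labeled-source-plus-pseudo-labeled-target loop above captures the basic mechanism.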