Domain Adaptation H-Divergence

Tags: #machine learning #transfer learning

Equation

$$d_{\mathcal{H}}(\mathcal{D},\mathcal{D}^{'})=2\sup_{h \in \mathcal{H}}|\Pr_{\mathcal{D}}[I(h)]-\Pr_{\mathcal{D}^{'}}[I(h)]|$$

LaTeX Code

    d_{\mathcal{H}}(\mathcal{D},\mathcal{D}^{'})=2\sup_{h \in \mathcal{H}}|\Pr_{\mathcal{D}}[I(h)]-\Pr_{\mathcal{D}^{'}}[I(h)]|


Explanation

The H-divergence is defined as twice the supremum, taken over every hypothesis h in the hypothesis class H, of the difference between the probabilities that the two distributions assign to the event I(h). More precisely, given a domain X with two data distributions D and D' over X, I(h) denotes the subset of X on which h is the characteristic (indicator) function, i.e. I(h) = {x in X : h(x) = 1}, so the divergence measures how differently D and D' weight the regions that hypotheses in H can carve out. For more detail on domain adaptation and the H-divergence, see the paper by Shai Ben-David et al., "A theory of learning from different domains".
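
The supremum over the whole hypothesis class cannot be computed exactly, but Ben-David et al. show it can be estimated from finite samples by training a classifier to tell the two domains apart: the harder they are to distinguish, the smaller the divergence. Below is a minimal sketch of this idea (the so-called proxy A-distance, d_A = 2(1 - 2*err)); the function name proxy_a_distance, the use of scikit-learn's LogisticRegression as a stand-in for the hypothesis class H, and the toy Gaussian data are illustrative assumptions, not part of the original text.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def proxy_a_distance(source_X, target_X, seed=0):
        # Label source samples 0 and target samples 1, then train a
        # domain classifier; its held-out error err feeds the estimate
        # d_A = 2 * (1 - 2 * err), which is close to 0 when the domains
        # are indistinguishable and close to 2 when fully separable.
        X = np.vstack([source_X, target_X])
        y = np.concatenate([np.zeros(len(source_X)), np.ones(len(target_X))])
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.5, random_state=seed, stratify=y)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        err = 1.0 - clf.score(X_te, y_te)  # held-out domain-classification error
        return 2.0 * (1.0 - 2.0 * err)

    # Toy usage: two 2-D Gaussians, one shifted away from the other.
    rng = np.random.default_rng(0)
    source = rng.normal(0.0, 1.0, size=(500, 2))
    target = rng.normal(3.0, 1.0, size=(500, 2))
    same = rng.normal(0.0, 1.0, size=(500, 2))
    print(proxy_a_distance(source, target))  # near 2: domains differ
    print(proxy_a_distance(source, same))    # near 0: same distribution

Note that the choice of classifier fixes the hypothesis class: a richer stand-in for H (e.g. a small MLP instead of logistic regression) can only raise the estimate, mirroring the fact that the supremum in the definition is taken over all of H.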
