Jensen-Shannon Divergence (JS-Divergence)
Tags: #machine-learning #equation
Equation

$$JS(P||Q)=\frac{1}{2}KL(P||\frac{(P+Q)}{2})+\frac{1}{2}KL(Q||\frac{(P+Q)}{2})$$

Latex Code
JS(P||Q)=\frac{1}{2}KL(P||\frac{(P+Q)}{2})+\frac{1}{2}KL(Q||\frac{(P+Q)}{2})
Explanation
Above is the latex code for the Jensen-Shannon Divergence (JS-Divergence). The notations in this formula are briefly introduced below; a short numerical sketch in Python follows the list.
- KL(P||Q): the KL divergence between P and Q
- JS(P||Q): the JS divergence between P and Q, a symmetric divergence metric between distributions P and Q
- P(x): the distribution P over x
- Q(x): the distribution Q over x
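As a sanity check on the formula, here is a minimal numerical sketch in Python, assuming NumPy and SciPy are available; scipy.stats.entropy(p, m) computes the KL divergence KL(P||M). The helper name js_divergence and the example distributions p and q are illustrative, not part of any particular library.

Python Code

import numpy as np
from scipy.stats import entropy

def js_divergence(p, q, base=2):
    # Jensen-Shannon divergence between two discrete distributions p and q
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()  # normalize so p and q are valid probability distributions
    q = q / q.sum()
    m = 0.5 * (p + q)  # mixture distribution M = (P + Q) / 2
    # JS(P||Q) = 1/2 KL(P||M) + 1/2 KL(Q||M); entropy(p, m) returns KL(P||M)
    return 0.5 * entropy(p, m, base=base) + 0.5 * entropy(q, m, base=base)

# Example: half-overlapping distributions give a JS divergence of 0.5 (base-2 logs)
p = [0.5, 0.5, 0.0]
q = [0.0, 0.5, 0.5]
print(js_divergence(p, q))  # 0.5

With base-2 logarithms the JS divergence is bounded between 0 (identical distributions) and 1 (disjoint supports), which is one reason it is often preferred as a symmetric alternative to the KL divergence.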