Huber Loss
Tags: #machine learning

Equation
$$L_{\delta}(y,f(x)) = \left \{ \begin{aligned} & \frac{1}{2}(y-f(x))^{2}, && \text{for } |y-f(x)| \le \delta \cr & \delta \times (|y-f(x)| - \frac{1}{2}\delta), && \text{otherwise} \cr \end{aligned} \right. $$

Latex Code

L_{\delta}(y,f(x)) = \left \{ \begin{aligned} & \frac{1}{2}(y-f(x))^{2}, && \text{for } |y-f(x)| \le \delta \cr & \delta \times (|y-f(x)| - \frac{1}{2}\delta), && \text{otherwise} \cr \end{aligned} \right.
Introduction
Huber Loss is widely used in regression as a robust alternative to the MSE (Mean Squared Error) loss. When the absolute error |y-f(x)| is less than or equal to delta, the loss is quadratic, identical to the MSE loss. When the absolute error is greater than delta, the loss is linear, which makes Huber Loss less sensitive to outliers than MSE.
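As a quick illustration, here is a minimal NumPy sketch of the piecewise definition above; the function name huber_loss and the default delta=1.0 are illustrative choices, not part of the original page.

import numpy as np

def huber_loss(y, f_x, delta=1.0):
    """Element-wise Huber loss for targets y and predictions f_x."""
    residual = np.abs(y - f_x)
    quadratic = 0.5 * residual ** 2              # used when |y - f(x)| <= delta
    linear = delta * (residual - 0.5 * delta)    # used when |y - f(x)| > delta
    return np.where(residual <= delta, quadratic, linear)

# Example: small errors fall in the quadratic branch, large errors in the linear branch
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 2.0, 6.0])
print(huber_loss(y_true, y_pred, delta=1.0))     # [0.005, 0.0, 2.5]

For residuals of 0.1, 0.0, and 3.0 with delta = 1.0, the first two use the quadratic branch (0.005 and 0.0), while the last uses the linear branch: 1.0 × (3.0 − 0.5) = 2.5.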