A Tunable Loss Function for Binary Classification

02/12/2019
by Tyler Sypherd, et al.

We present α-loss, α ∈ [1, ∞], a tunable loss function for binary classification that bridges log-loss (α = 1) and 0-1 loss (α = ∞). We prove that α-loss has an equivalent margin-based form and is classification-calibrated, two desirable properties of a good surrogate for the ideal yet intractable 0-1 loss. For logistic regression-based classification, we provide an upper bound on the difference between the empirical and expected risk of α-loss by exploiting its Lipschitz continuity along with recent results on the landscape features of empirical risk functions. Finally, we show that α-loss with α = 2 outperforms log-loss on MNIST for logistic regression.
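The abstract does not reproduce the loss itself, so the following is a minimal sketch of a margin-based family with the stated endpoints: log-loss at α = 1 and a smooth proxy for the 0-1 loss as α → ∞. The specific form ℓ_α(z) = (α/(α−1))(1 − σ(z)^(1−1/α)), with σ the sigmoid and z = y·f(x) the classification margin, is an assumption here, chosen so that both limits hold.

```python
import numpy as np

def alpha_loss(z, alpha):
    """Sketch of a margin-based alpha-loss, z = y * f(x) with y in {-1, +1}.

    Assumed form: l_alpha(z) = (alpha / (alpha - 1)) * (1 - sigmoid(z)**(1 - 1/alpha)).
    As alpha -> 1 this recovers log-loss -log(sigmoid(z)); as alpha -> infinity
    it tends to 1 - sigmoid(z), a smooth stand-in for the 0-1 loss.
    """
    z = np.asarray(z, dtype=float)
    sigma = 1.0 / (1.0 + np.exp(-z))   # sigmoid of the margin
    if alpha == 1:                     # limiting case: log-loss
        return -np.log(sigma)
    if np.isinf(alpha):                # limiting case: sigmoid (soft 0-1) loss
        return 1.0 - sigma
    return (alpha / (alpha - 1.0)) * (1.0 - sigma ** (1.0 - 1.0 / alpha))

# Sanity check: the family interpolates between log-loss and the soft 0-1 loss.
margins = np.linspace(-3, 3, 7)
for a in [1, 2, np.inf]:
    print(a, np.round(alpha_loss(margins, a), 3))
```

Setting α = 2, the value reported as outperforming log-loss on MNIST, gives a loss that penalizes confidently wrong margins less steeply than log-loss while remaining classification-calibrated under this parameterization.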
