StochasticRank: Global Optimization of Scale-Free Discrete Functions

03/04/2020
by   Aleksei Ustimenko, et al.

In this paper, we introduce a powerful and efficient framework for the direct optimization of ranking metrics. The problem is ill-posed due to the discrete structure of the loss, and to deal with that we introduce two important techniques: stochastic smoothing and a novel gradient estimate based on partial integration. We also address the problem of smoothing bias and present a universal solution for proper debiasing. To guarantee the global convergence of our method, we adopt the recently proposed Stochastic Gradient Langevin Boosting algorithm. Our algorithm is implemented as part of the CatBoost gradient boosting library and outperforms the existing approaches on several learning-to-rank datasets. In addition to ranking metrics, our framework applies to any scale-free discrete loss function.
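To make the stochastic-smoothing idea concrete, here is a minimal Python sketch, not the paper's exact estimator. A discrete, scale-free metric such as NDCG depends only on the ordering induced by the scores, so it is piecewise constant in the scores; replacing it with its expectation under random perturbations of the scores yields a smooth surrogate. The helpers `dcg`, `ndcg`, and `smoothed_ndcg`, the Gaussian noise model, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def dcg(relevance, scores, k=10):
    """DCG@k of `relevance` when documents are sorted by `scores` (descending)."""
    order = np.argsort(-scores)[:k]
    gains = 2.0 ** relevance[order] - 1.0
    discounts = 1.0 / np.log2(np.arange(2, len(order) + 2))
    return float(np.sum(gains * discounts))

def ndcg(relevance, scores, k=10):
    """NDCG@k: scale-free and discrete in `scores` -- only the ordering matters."""
    ideal = dcg(relevance, relevance, k)  # DCG of the best possible ordering
    return dcg(relevance, scores, k) / ideal if ideal > 0 else 0.0

def smoothed_ndcg(relevance, scores, sigma=0.1, n_samples=256, seed=None):
    """Monte Carlo estimate of E[NDCG(scores + sigma * eps)], eps ~ N(0, I).

    Unlike NDCG itself, this expectation is a smooth function of `scores`,
    so it admits (stochastic) gradients.
    """
    rng = np.random.default_rng(seed)
    samples = [
        ndcg(relevance, scores + sigma * rng.standard_normal(scores.shape))
        for _ in range(n_samples)
    ]
    return float(np.mean(samples))

relevance = np.array([3.0, 2.0, 0.0, 1.0])   # graded labels for one query
scores    = np.array([0.9, 0.8, 0.75, 0.1])  # model scores for its documents
print(ndcg(relevance, scores), smoothed_ndcg(relevance, scores, seed=0))
```

Note that this naive smoothing is biased (the expectation under noise differs from the metric at the unperturbed scores), which is exactly the debiasing issue the paper addresses; the paper also derives a partial-integration-based gradient estimate rather than differentiating a Monte Carlo average. On the library side, the method ships with CatBoost, where it is exposed as a ranking objective (e.g., loss_function='StochasticRank:metric=NDCG' on a query-grouped Pool); the exact parameter string should be checked against the installed version's documentation.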
