Improved second-order evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

08/14/2017
by Coralia Cartis, et al.

The unconstrained minimization of a sufficiently smooth objective function f(x) is considered, for which derivatives up to order p, p ≥ 2, are assumed to be available. An adaptive regularization algorithm is proposed that uses Taylor models of the objective of order p and that is guaranteed to find a first- and second-order critical point in at most O( max( ϵ_1^{-(p+1)/p}, ϵ_2^{-(p+1)/(p-1)} ) ) function and derivative evaluations, where ϵ_1 > 0 and ϵ_2 > 0 are prescribed first- and second-order optimality tolerances. Our approach extends the method in Birgin et al. (2016) to finding second-order critical points, and establishes a novel complexity bound for second-order criticality under the same problem assumptions as for first-order criticality, namely that the p-th derivative tensor is Lipschitz continuous and that f(x) is bounded from below. The evaluation-complexity bound for second-order criticality improves on all known existing results of this kind.
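
For context on what "Taylor models of the objective of order p" with adaptive regularization means here, the sketch below gives the standard ARp-type model construction from Birgin et al. (2016), which this paper builds on; the notation (T_p for the Taylor polynomial, σ_k for the regularization weight) and the exact scaling of the regularization term are assumptions of this sketch, as conventions vary slightly across the literature:

\[
  m_k(s) \;=\; T_p(x_k, s) \;+\; \frac{\sigma_k}{(p+1)!}\,\|s\|^{p+1},
  \qquad
  T_p(x_k, s) \;=\; f(x_k) \;+\; \sum_{j=1}^{p} \frac{1}{j!}\,\nabla^j f(x_k)[s]^{j},
\]

where σ_k > 0 is adjusted from one iteration to the next according to how well the model predicts the actual decrease in f. Approximate first- and second-order criticality is then typically declared once ‖∇f(x_k)‖ ≤ ϵ_1 and λ_min(∇²f(x_k)) ≥ −ϵ_2.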
