ExactNewtonOptimizer

core/OPT/ExactNewtonOptimizer.h        Extends: IterativeOptimizer

Template class to optimize a given TwiceDifferentiableScalarField over $\mathbb{R}^N$ using Newton's iterative optimization method

$$ x_{k+1} = x_k - \lambda \, H_f(x_k)^{-1} \nabla f(x_k) $$

where $H_f(x)$ denotes the Hessian matrix of the field $f$ evaluated at point $x$.
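As a dependency-free sketch of the update rule above (names hypothetical, not the library's API), one Newton step for the one-dimensional function $f(x) = (x - 3)^2$ looks like:

```cpp
#include <cmath>

// f(x) = (x - 3)^2, with analytical first and second derivatives
double df(double x)  { return 2.0 * (x - 3.0); }
double ddf(double)   { return 2.0; }

// one Newton update: x_{k+1} = x_k - lambda * f''(x_k)^{-1} * f'(x_k)
double newton_step(double x, double lambda) {
    return x - lambda * df(x) / ddf(x);
}
```

Since $f$ is quadratic, a full step ($\lambda = 1$) from any starting point lands exactly on the minimum $x = 3$.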

template <unsigned int N> class ExactNewtonOptimizer : public NewtonOptimizer<N> { ... };

Note

The implementation of Newton's method provided by this class relies on the exact analytical expressions of the gradient and Hessian of the field. See the parent class NewtonOptimizer in case you want to use numerical approximations of these quantities or you have no analytical expression for them.

Info

At each iteration of the method, to avoid the cost of matrix inversion, the quantity $H_f(x_k)^{-1} \nabla f(x_k)$ is computed by solving the linear system $H_f(x_k) \, z = \nabla f(x_k)$ in $z$ using Eigen's QR decomposition with column-pivoting.

Methods

ExactNewtonOptimizer(double step_, const SVector<N>& x0_, unsigned int maxIteration_,
                     double tolerance_, const TwiceDifferentiableScalarField<N>& objective_)

Constructor.

| Args | Description |
|------|-------------|
| `double step_` | The term $\lambda$ in the iterative formulation of Newton's method. |
| `const SVector<N>& x0_` | The initial point from which the iterative method is started. |
| `unsigned int maxIteration_` | The maximum number of iterations allowed. |
| `double tolerance_` | The tolerance on the error of the obtained solution requested from the method. |
| `const TwiceDifferentiableScalarField<N>& objective_` | The objective function to optimize. |
std::pair<SVector<N>, double> findMinimum() const override;

Applies the optimization method to the objective passed to the constructor. Returns a std::pair<SVector<N>, double> where the first element is the point in $\mathbb{R}^N$ where the minimum is reached, while the second one is the actual minimum value reached by the objective.

Info

Default stopping condition: the method stops either when the norm of the gradient of the objective reaches the required tolerance or when such tolerance is not reached within maxIteration iterations.

Tip

You can control the optimizer by exploiting the IterativeOptimizer interface.

Examples

The following code finds the minimum of the function $g(x, y) = 2x^2 + x + 2y^2$ using an exact Newton's method with $\lambda = 0.01$, starting from point $(1, 1)$.

// define target to optimize: 2*x^2 + x + 2*y^2
std::function<double(SVector<2>)> g = [](SVector<2> x) -> double { 
    return 2*std::pow(x[0],2) + 2*std::pow(x[1],2) + x[0]; 
  };

// define analytical expression of gradient and hessian of the field
std::function<SVector<2>(SVector<2>)> dg = [](SVector<2> x) -> SVector<2> { 
    return SVector<2>({4*x[0] + 1, 4*x[1]}); 
  };

std::function<SMatrix<2>(SVector<2>)> ddg = [](SVector<2> x) -> SMatrix<2> { 
    return SMatrix<2>({{4, 0},
                       {0, 4}});
  };

// define the objective to optimize
TwiceDifferentiableScalarField<2> objective(g, dg, ddg);

// perform newton optimization

double lambda = 0.01;                   // set learning rate
unsigned int max_iterations = 1000;     // set maximum number of iterations
double tolerance = 0.001;               // set tolerance

// define newton optimizer
ExactNewtonOptimizer<2> optNewton2DExact(lambda, SVector<2>(1,1), max_iterations, 
                                         tolerance, objective);
// optimize
std::pair<SVector<2>,double> min_g = optNewton2DExact.findMinimum();
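If you want to verify the result without the library, the same iteration can be sketched in plain C++ (same objective, gradient, and Hessian as above; `newton2d` and `NewtonResult` are hypothetical names, not part of the library). Since the Hessian is $\mathrm{diag}(4, 4)$, the linear solve at each step reduces to a division:

```cpp
#include <array>
#include <cmath>

// Same quadratic objective as the example: g(x, y) = 2x^2 + x + 2y^2,
// minimized at (-0.25, 0) with minimum value -0.125.
struct NewtonResult { std::array<double, 2> point; double value; };

NewtonResult newton2d(std::array<double, 2> x, double lambda,
                      unsigned int max_iterations, double tolerance) {
    for (unsigned int k = 0; k < max_iterations; ++k) {
        // analytical gradient of g at the current point
        double gx = 4.0 * x[0] + 1.0, gy = 4.0 * x[1];
        // default stopping condition: gradient norm below tolerance
        if (std::sqrt(gx * gx + gy * gy) < tolerance) break;
        // Newton update; the Hessian is diag(4, 4), so the solve is a division
        x[0] -= lambda * gx / 4.0;
        x[1] -= lambda * gy / 4.0;
    }
    double value = 2.0 * x[0] * x[0] + x[0] + 2.0 * x[1] * x[1];
    return { x, value };
}
```

With $\lambda = 0.01$, tolerance $0.001$, and at most $1000$ iterations from $(1, 1)$, the iterate converges to approximately $(-0.25, 0)$ with objective value approximately $-0.125$.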