Optimization module
core/OPT/... Namespace: fdaPDE::core::OPT
The optimization module provides a set of general routines for optimizing a generic ScalarField.
Usage
The following code snippet shows how to optimize a scalar field using a Newton optimizer without providing any analytical expression for its gradient and hessian (these quantities are numerically approximated by the method at each step):
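A minimal sketch of such a call is given below. The NewtonOptimizer class name, its constructor parameters (maximum number of iterations, tolerance, step size) and the construction of a ScalarField from a lambda are assumptions and may differ from the actual API.

```cpp
// assumed includes: the relevant fdaPDE core OPT headers and <utility>

// define the objective as a plain ScalarField: no analytical gradient or
// hessian is supplied, so the method approximates them numerically
ScalarField<2> objective([](SVector<2> x) -> double {
    return 2*x[0]*x[0] + x[1]*x[1];            // f(x,y) = 2x^2 + y^2
});

SVector<2> x0(1.0, 1.0);                       // starting point of the iterative scheme

// hypothetical constructor arguments: maximum iterations, tolerance, step size
NewtonOptimizer<2> newtonOpt(1000, 0.001, 0.01);

// first element: point of minimum; second element: objective value at that point
std::pair<SVector<2>, double> result = newtonOpt.findMinimum(objective, x0);
```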
Developer's advice
Due to the highly templatized structure of this module, there is no software-enforced interface among optimizers. Future optimizer routines included in this library must adopt the design strategy described in this section to be consistent with the optimization API.
Most optimizers should expose their main entry point to the minimization routine through the following signature:
template <typename... Args>
std::pair<SVector<N>, double> findMinimum(const ScalarField<N>& objective, const SVector<N>& x0,
const Args&... args);
| Args | Description |
|---|---|
| const ScalarField<N>& objective | The objective function to optimize, passed as a ScalarField or one of its derived classes. The latter is used when the optimization scheme requires specific regularity conditions on the objective. |
| const SVector<N>& x0 | The initial point from which the iterative method is started. |
| const Args&... args | Zero or more Extensions used to control the behaviour of the optimizer at run time. |
Return type: std::pair<SVector<N>, double>
where the first element of the pair is the point of minimum and the second element is the value reached by the objective at that point.
All iterative optimization methods in the OPT module are accessible in this way. Where this is not possible (see, for example, GridOptimizer), the signature should be kept as close as possible to the one presented above.
Developer's advice
In any case, it is mandatory to expose a findMinimum() method returning a std::pair<SVector<N>, double> and accepting a ScalarField as the objective function.
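As a reference, a conforming optimizer could be structured as sketched below; the class name, member fields and constructor parameters are purely illustrative and not part of the library.

```cpp
// illustrative skeleton of a custom optimizer conforming to the OPT interface
template <unsigned int N>
class MyOptimizer {
private:
    unsigned int maxIter_;   // maximum number of iterations
    double tolerance_;       // stopping tolerance

public:
    MyOptimizer(unsigned int maxIter, double tolerance)
        : maxIter_(maxIter), tolerance_(tolerance) {}

    // mandatory entry point: takes the objective, the initial point and zero or
    // more extensions; returns the point of minimum and the objective value there
    template <typename... Args>
    std::pair<SVector<N>, double> findMinimum(const ScalarField<N>& objective,
                                              const SVector<N>& x0, const Args&... args) {
        SVector<N> x = x0;
        // ... iterative scheme here, possibly notifying the supplied extensions ...
        return std::make_pair(x, objective(x));
    }
};
```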
Extensions
Extensions are a useful and simple way to modify the behaviour of an optimization routine at run time.
Note
An optimization method exposing a variadic template findMinimum() method as presented above is open to customization. See the specific method's page to check whether an optimizer can be extended or not.
Developer's advice
Check the Extensions page to learn how your optimizer can take advantage of the extension mechanism and how to write new extensions.
Consider the case of optimizing the objective function using a GradientDescent optimizer. The following code shows a usage example:
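A possible sketch of this call, reusing the objective and starting point defined in the Usage section; the GradientDescent constructor arguments (maximum iterations, tolerance, fixed step size) are an assumption on the actual signature.

```cpp
// hypothetical constructor arguments: maximum iterations, tolerance, fixed step size
GradientDescent<2> gradientDescentOpt(1000, 0.001, 0.01);

// plain call: no extensions, the step size stays fixed during the whole optimization
std::pair<SVector<2>, double> result = gradientDescentOpt.findMinimum(objective, x0);
```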
This code snippet will optimize the function using a fixed learning rate, which is the default behaviour of the gradient descent optimizer. Suppose that the fixed step approach performs poorly and you want to switch to an adaptive step method based on the same gradient descent iterative scheme. The extension mechanism allows you to inject any adaptive step size method into the gradient descent algorithm. For example, the BacktrackingAdaptiveStep extension implements the backtracking line search algorithm for step selection.
Enabling it in the gradient descent method is as simple as writing the following:
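A sketch of the extended call; the (here empty) argument list of the BacktrackingAdaptiveStep constructor is an assumption.

```cpp
// same call as before, with the backtracking line search extension injected
std::pair<SVector<2>, double> result =
    gradientDescentOpt.findMinimum(objective, x0, BacktrackingAdaptiveStep());
```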
The optimizer will call the line search algorithm at the right point in the iterative procedure, doing all the work for you.
You can enable as many extensions as you want by simply listing them in the findMinimum() call. For example, the following applies a backtracking gradient descent method and prints a small summary of the result at the end of the optimization:
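A sketch of such a call is shown below; the summary-printing extension name used here is purely hypothetical, check the Extensions page for the actual one.

```cpp
// list any number of extensions after the initial point; PrintSummary is a
// hypothetical placeholder for an extension reporting the optimization result
std::pair<SVector<2>, double> result =
    gradientDescentOpt.findMinimum(objective, x0, BacktrackingAdaptiveStep(), PrintSummary());
```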
Advice
It is up to the user to enable extensions which make sense for the optimization scheme considered and which do not collide with each other (e.g. enabling more than one adaptive step method is not a good idea).
Tip
Check the Extensions page to get a list of all available extensions.