Gradients

To obtain the gradients automatically, we need to activate the automatic differentiation module Autograd (please note that autograd requires a NumPy version prior to 2.0).

[1]:
from pymoo.gradient import activate

activate('autograd.numpy')

If the problem is implemented using autograd, the gradients obtained through automatic differentiation are available out of the box. Let us consider the following problem definition for a simple quadratic function without any constraints:

[2]:
import numpy as np

import pymoo.gradient.toolbox as anp
from pymoo.core.problem import Problem
from pymoo.gradient.automatic import AutomaticDifferentiation


class MyProblem(Problem):

    def __init__(self):
        super().__init__(n_var=10, n_obj=1, xl=-5, xu=5)

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = anp.sum(anp.power(x, 2), axis=1)

# wrapping the problem makes the gradients available during evaluation
problem = AutomaticDifferentiation(MyProblem())

The gradients can be retrieved by appending dF to the return_values_of parameter:

[3]:
X = np.array([np.arange(10)]).astype(float)
F, dF = problem.evaluate(X, return_values_of=["F", "dF"])

The resulting gradients are stored in dF, and their shape is (n_rows, n_objectives, n_vars):

[4]:
print(X, F)
print(dF.shape)
print(dF)
[[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]] [[285.]]
(1, 1, 10)
[[[ 0.  2.  4.  6.  8. 10. 12. 14. 16. 18.]]]
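Because the objective is a simple quadratic function, its gradient can also be derived analytically as 2 * x, which makes the result easy to verify. A minimal check (using nothing beyond NumPy) could look as follows:

[5]:
# closed-form gradient of F(x) = sum(x_i^2), reshaped to (n_rows, n_objectives, n_vars)
dF_analytical = (2 * X)[:, None, :]

# True if automatic differentiation matches the analytical gradient
print(np.allclose(dF, dF_analytical))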

Analogously, the gradients of the constraints can be retrieved by appending dG, as sketched below.
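The following is a minimal sketch of a constrained variant of the problem above; the constraint itself and the n_ieq_constr argument are illustrative assumptions rather than part of the original example:

[6]:
class MyConstrainedProblem(Problem):

    def __init__(self):
        super().__init__(n_var=10, n_obj=1, n_ieq_constr=1, xl=-5, xu=5)

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = anp.sum(anp.power(x, 2), axis=1)
        # one inequality constraint (feasible if <= 0): the variables must sum to at most 20
        out["G"] = anp.sum(x, axis=1) - 20

problem = AutomaticDifferentiation(MyConstrainedProblem())

F, dF, G, dG = problem.evaluate(X, return_values_of=["F", "dF", "G", "dG"])
print(dG.shape)  # (n_rows, n_constraints, n_vars)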