
Gradients

If the problem is implemented using autograd, then the gradients are available out of the box through automatic differentiation. Let us consider the following problem definition for a simple quadratic function without any constraints:

[1]:
import autograd.numpy as anp

from pymoo.model.problem import Problem

class MyProblem(Problem):

    def __init__(self):
        super().__init__(n_var=10, n_obj=1, n_constr=0, xl=-5, xu=5)

    def _evaluate(self, x, out, *args, **kwargs):
        # objective: sum of squares, f(x) = sum_i x_i^2
        out["F"] = anp.sum(anp.power(x, 2), axis=1)

problem = MyProblem()

The gradients can be retrieved by adding "dF" to the return_values_of argument:

[2]:
F, dF = problem.evaluate(anp.array([anp.arange(10)]), return_values_of=["F", "dF"])

The resulting gradients are stored in dF, and their shape is (n_rows, n_objectives, n_vars):

[3]:
print(dF.shape)
dF
(1, 1, 10)
[3]:
array([[[ 0.,  2.,  4.,  6.,  8., 10., 12., 14., 16., 18.]]])
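
Since the objective is f(x) = Σ x_i², its analytical gradient is 2x, which matches the values above. A quick sanity check (this verification snippet is not part of the original example) confirms the automatic differentiation result:

import numpy as np

X = np.array([np.arange(10)])

# the analytical gradient of f(x) = sum(x_i^2) is 2x;
# reshape (1, 10) to (1, 1, 10) to match the (n_rows, n_objectives, n_vars) layout of dF
assert np.allclose(dF, (2 * X)[:, None, :])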

Analogously, the gradients of the constraints can be retrieved by adding "dG" to return_values_of.
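
For instance, a minimal sketch of a constrained variant (the constraint sum(x) - 10 <= 0 and the class name MyConstrainedProblem are made up for illustration; the Problem API is the same as above):

import autograd.numpy as anp

from pymoo.model.problem import Problem

class MyConstrainedProblem(Problem):

    def __init__(self):
        super().__init__(n_var=10, n_obj=1, n_constr=1, xl=-5, xu=5)

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = anp.sum(anp.power(x, 2), axis=1)
        # inequality constraint g(x) <= 0, here: sum(x) - 10 <= 0
        out["G"] = anp.sum(x, axis=1) - 10

problem = MyConstrainedProblem()

F, G, dG = problem.evaluate(anp.array([anp.arange(10)]),
                            return_values_of=["F", "G", "dG"])

# dG has shape (n_rows, n_constraints, n_vars); for g(x) = sum(x) - 10
# the gradient is a vector of ones
print(dG.shape)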