Definition¶
Problems have to be defined, and some information has to be provided. In contrast to other frameworks, we do not share the opinion that defining a problem by just a single function is the most convenient approach. In pymoo, the problem is defined by an object that contains some metadata, for instance the number of objectives, the number of constraints, and the lower and upper bounds in the design space. These attributes are supposed to be defined in the constructor and, thus, by overriding the __init__ method.
| Argument | Description |
|---|---|
| n_var | Integer value representing the number of design variables. |
| n_obj | Integer value representing the number of objectives. |
| n_ieq_constr | Integer value representing the number of inequality constraints. |
| xl | Float or np.ndarray of length n_var representing the lower bounds of the design variables. |
| xu | Float or np.ndarray of length n_var representing the upper bounds of the design variables. |
| vtype | (optional) A type hint indicating what kind of variable should be optimized. |
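For illustration, the following is a minimal sketch (not part of the original example code; the class name MyProblem and all concrete values are placeholders) of a constructor that sets the arguments listed above:

import numpy as np
from pymoo.core.problem import Problem


class MyProblem(Problem):

    def __init__(self):
        super().__init__(n_var=2,                    # two design variables
                         n_obj=1,                    # a single objective
                         n_ieq_constr=0,             # no inequality constraints
                         xl=np.array([-1.0, -1.0]),  # lower bounds (float or array of length n_var)
                         xu=np.array([1.0, 1.0]),    # upper bounds (float or array of length n_var)
                         vtype=float)                # optional type hint

    def _evaluate(self, x, out, *args, **kwargs):
        # a simple sphere function evaluated for the whole population at once
        out["F"] = np.sum(x ** 2, axis=1)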
Moreover, in pymoo there exist three different ways of defining a problem:
Overview

- Problem: Object-oriented definition which implements a method evaluating a set of solutions.
- ElementwiseProblem: Object-oriented definition which implements a function evaluating a single solution at a time.
- FunctionalProblem: Define a problem by using a function for each objective and constraint.
Problem (vectorized)¶
The majority of optimization algorithms implemented in pymoo are population-based, which means that more than one solution is evaluated in each generation. This is ideal for parallelizing function evaluations. Thus, the default definition of a problem receives a set of solutions to be evaluated. The actual function evaluation takes place in the _evaluate method, which aims to fill the out dictionary with the corresponding data. The function values are supposed to be written into out["F"] and the constraints into out["G"] if n_ieq_constr is greater than zero. If another approach is used to compute the function values or the constraints, they must be appropriately converted into a two-dimensional numpy array (one row for each element of the population evaluated in the current round, one column for each function value). For example, if the function values are collected in a regular Python list like F_list = [[<func values for individual 1>], [<func values for individual 2>], ...], then, before returning from the _evaluate method, the list must be converted to a numpy array with out["F"] = np.row_stack(F_list).
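A short sketch of that conversion (the values below are made up purely for illustration) could look like this:

import numpy as np

# hypothetical objective values collected one individual at a time
F_list = [[1.2], [0.7], [3.4]]

out = {}
# stack the rows into an array of shape (n_individuals, n_obj) before leaving _evaluate
out["F"] = np.row_stack(F_list)   # shape (3, 1)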
Tip
How the objective and constraint values are calculated is irrelevant from pymoo's point of view. Whether it is a simple mathematical equation or a discrete-event simulation, you only have to ensure that, for each input, the corresponding values have been set.
The example below shows a modified Sphere problem with a radial constraint located at the center. The problem consists of 10 design variables, one objective, and one constraint; the lower and upper bounds of each variable are 0 and 1, respectively.
[1]:
import numpy as np
from pymoo.core.problem import Problem


class SphereWithConstraint(Problem):

    def __init__(self):
        super().__init__(n_var=10, n_obj=1, n_ieq_constr=1, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = np.sum((x - 0.5) ** 2, axis=1)
        out["G"] = 0.1 - out["F"]
Assuming the algorithm being used requests the evaluation of a set of 100 solutions, the input NumPy matrix x will be of shape (100, 10). Please note that the two-dimensional matrix is summed along axis=1 (over the design variables), which results in a vector of length 100 for out["F"]. Thus, NumPy performs a vectorized operation on a matrix to speed up the evaluation.
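To double-check the shapes, the problem can be evaluated directly on a random population; this small sketch is not part of the original example, and the expected shapes are stated as assumptions:

problem = SphereWithConstraint()

# evaluate 100 random solutions in a single vectorized call; x has shape (100, 10)
F, G = problem.evaluate(np.random.rand(100, 10))

print(F.shape)  # expected: (100, 1), one objective value per solution
print(G.shape)  # expected: (100, 1), one constraint value per solution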
ElementwiseProblem (loop)¶
[2]:
import numpy as np
from pymoo.core.problem import ElementwiseProblem


class ElementwiseSphereWithConstraint(ElementwiseProblem):

    def __init__(self):
        xl = np.zeros(10)
        xl[0] = -5.0

        xu = np.ones(10)
        xu[0] = 5.0

        super().__init__(n_var=10, n_obj=1, n_ieq_constr=2, xl=xl, xu=xu)

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = np.sum((x - 0.5) ** 2)
        out["G"] = np.column_stack([0.1 - out["F"], out["F"] - 0.5])
Regardless of the number of solutions being asked to be evaluated, the _evaluate function receives a vector of length 10. The _evaluate method, however, will be called once for each solution. When implementing an element-wise problem, the parallelization available in pymoo using processes or threads can be used directly, as sketched below. Moreover, note that the problem above uses vectors for the lower and upper bounds (xl and xu) because the first variable should cover a different range of values.
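As a rough sketch of such a setup, assuming pymoo's StarmapParallelization runner and a constructor that forwards **kwargs to super().__init__ (the ParallelSphere class below is a placeholder, not part of the original example):

from multiprocessing.pool import ThreadPool

import numpy as np
from pymoo.core.problem import ElementwiseProblem, StarmapParallelization


class ParallelSphere(ElementwiseProblem):

    def __init__(self, **kwargs):
        # forward **kwargs so an elementwise_runner can be injected from the outside
        super().__init__(n_var=10, n_obj=1, xl=0.0, xu=1.0, **kwargs)

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = np.sum((x - 0.5) ** 2)


# distribute the per-solution evaluations over a thread pool
pool = ThreadPool(4)
problem = ParallelSphere(elementwise_runner=StarmapParallelization(pool.starmap))

F = problem.evaluate(np.random.rand(20, 10))
pool.close()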
FunctionalProblem (loop)¶
Another way of defining a problem is through functions. On the one hand, many function calls need to be performed to evaluate a set of solutions, but on the other hand, it is a very intuitive way of defining a problem.
[3]:
import numpy as np
from pymoo.problems.functional import FunctionalProblem


objs = [
    lambda x: np.sum((x - 2) ** 2),
    lambda x: np.sum((x + 2) ** 2)
]

constr_ieq = [
    lambda x: np.sum((x - 1) ** 2)
]

n_var = 10

problem = FunctionalProblem(n_var,
                            objs,
                            constr_ieq=constr_ieq,
                            xl=np.full(n_var, -10.0),
                            xu=np.full(n_var, 10.0)
                            )

F, G = problem.evaluate(np.random.rand(3, 10))

print(f"F: {F}\n")
print(f"G: {G}\n")
F: [[28.91499854 54.15903677]
[19.90239951 68.36304158]
[19.0066805 71.2995835 ]]
G: [[5.22600809]
[2.01756003]
[2.07990625]]
Add Known Optima¶
If the optimum for a problem is known, this can be directly defined in the Problem class. Below, an example shows the test problem ZDT1, whose Pareto-front has been derived analytically and discussed in the corresponding paper. Thus, the _calc_pareto_front method returns the Pareto-front.
[4]:
class ZDT1(Problem):

    def __init__(self, n_var=30, **kwargs):
        super().__init__(n_var=n_var, n_obj=2, n_ieq_constr=0, xl=0, xu=1, vtype=float, **kwargs)

    def _calc_pareto_front(self, n_pareto_points=100):
        x = np.linspace(0, 1, n_pareto_points)
        return np.array([x, 1 - np.sqrt(x)]).T

    def _evaluate(self, x, out, *args, **kwargs):
        f1 = x[:, 0]
        g = 1 + 9.0 / (self.n_var - 1) * np.sum(x[:, 1:], axis=1)
        f2 = g * (1 - np.power((f1 / g), 0.5))
        out["F"] = np.column_stack([f1, f2])