Latest Version: pymoo==0.4.2.2

# Getting Started¶

In the following, we would like to introduce pymoo by presenting an example optimization scenario. This guide goes through the essential steps to get started with our framework and is structured as follows:

1. Introduction to Multi-objective Optimization and an exemplary Test Problem

2. Implementation of a Problem (vectorized, element-wise or functional)

3. Initialization of an Algorithm (in our case NSGA2)

4. Definition of a Termination Criterion

5. Optimize (functional through minimize or object-oriented by calling next())

6. Visualization of Results and Convergence

7. Summary

8. Source code (in one piece)

We try to cover the essential steps you have to follow to get started solving your own optimization problem. We have also included some a posteriori analysis, which is known to be particularly important in multi-objective optimization.

## 1. Introduction¶

### Multi-Objective Optimization¶

In general, multi-objective optimization deals with several objective functions to be optimized simultaneously, subject to inequality and equality constraints [18]. The goal is to find a set of solutions that do not have any constraint violation and are as good as possible regarding all objective values. The problem definition in its general form is given by:

\begin{align}
\begin{split}
\min \quad & f_{m}(x) \quad \quad \quad \quad m = 1,..,M \\[4pt]
\text{s.t.} \quad & g_{j}(x) \leq 0 \quad \; \; \, \quad j = 1,..,J \\[2pt]
\quad & h_{k}(x) = 0 \quad \; \; \quad k = 1,..,K \\[4pt]
\quad & x_{i}^{L} \leq x_{i} \leq x_{i}^{U} \quad i = 1,..,N \\[2pt]
\end{split}
\end{align}

The formulation above defines a multi-objective optimization problem with $$N$$ variables, $$M$$ objectives, $$J$$ inequality, and $$K$$ equality constraints. Moreover, for each variable $$x_i$$, lower and upper variable boundaries ($$x_i^L$$ and $$x_i^U$$) are defined.

### Test Problem¶

In the following, we investigate exemplarily a bi-objective optimization problem with two constraints. We tried to select a problem with enough complexity for demonstration purposes, but not so difficult that it obscures the overall idea. Its definition is given by:

\begin{align}
\begin{split}
\min \;\; & f_1(x) = (x_1^2 + x_2^2) \\
\max \;\; & f_2(x) = -(x_1-1)^2 - x_2^2 \\[1mm]
\text{s.t.} \;\; & g_1(x) = 2 \, (x_1 - 0.1) \, (x_1 - 0.9) \leq 0\\
& g_2(x) = 20 \, (x_1 - 0.4) \, (x_1 - 0.6) \geq 0\\[1mm]
& -2 \leq x_1 \leq 2 \\
& -2 \leq x_2 \leq 2
\end{split}
\end{align}

It consists of two objectives ($$M=2$$), where $$f_1(x)$$ is minimized and $$f_2(x)$$ maximized. The optimization is subject to two inequality constraints ($$J=2$$), where $$g_1(x)$$ is formulated as a less-than and $$g_2(x)$$ as a greater-than constraint. The problem is defined with respect to two variables ($$N=2$$), $$x_1$$ and $$x_2$$, both in the range $$[-2,2]$$. The problem does not contain any equality constraints ($$K=0$$).

The figure above shows the contours of the problem. The contour lines of the objective function $$f_1(x)$$ are represented by solid lines and those of $$f_2(x)$$ by dashed lines. The constraints $$g_1(x)$$ and $$g_2(x)$$ are parabolas which intersect the $$x_1$$-axis at $$(0.1, 0.9)$$ and $$(0.4, 0.6)$$. A thick orange line illustrates the Pareto-optimal set.
Through the combination of both constraints, the Pareto-optimal set is split into two parts. Analytically, it is given by $$PS = \{(x_1, x_2) \,|\, (0.1 \leq x_1 \leq 0.4) \lor (0.6 \leq x_1 \leq 0.9) \, \land \, x_2 = 0\}$$ and the Pareto front by $$f_2 = (\sqrt{f_1} - 1)^2$$ where $$f_1$$ is defined in $$[0.01,0.16]$$ and $$[0.36,0.81]$$.

## 2. Implementation of a Problem¶

In pymoo, we consider minimization problems for optimization in all our modules. However, without loss of generality, an objective that is supposed to be maximized can be multiplied by $$-1$$ and then minimized. Therefore, we minimize $$-f_2(x)$$ instead of maximizing $$f_2(x)$$ in our optimization problem. Furthermore, all constraint functions need to be formulated as $$\leq 0$$ constraints. The feasibility of a solution can, therefore, be expressed by:

\begin{align}
\begin{cases}
\text{feasible,} \quad \quad \sum_i^n \langle g_i(x)\rangle = 0\\
\text{infeasible,} \quad \quad \text{otherwise}\\
\end{cases}
\end{align}

\begin{align}
\text{where} \quad \langle g_i(x)\rangle =
\begin{cases}
0, \quad \quad \; \text{if} \; g_i(x) \leq 0\\
g_i(x), \quad \text{otherwise}\\
\end{cases}
\end{align}

For this reason, $$g_2(x)$$ needs to be multiplied by $$-1$$ in order to flip the $$\geq$$ into a $$\leq$$ relation. We recommend normalizing the constraints to give equal importance to each of them. For $$g_1(x)$$, the normalization coefficient results in $$2 \cdot (-0.1) \cdot (-0.9) = 0.18$$ and for $$g_2(x)$$ in $$20 \cdot (-0.4) \cdot (-0.6) = 4.8$$, respectively. We achieve the normalization of constraints by dividing $$g_1(x)$$ and $$g_2(x)$$ by their corresponding coefficients.
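The feasibility rule and the constraint normalization above can be sketched in plain NumPy. Note that the helpers below are purely illustrative and not part of the pymoo API (pymoo computes the constraint violation internally):

```python
import numpy as np

def constraint_violation(G):
    # sum of the positive parts <g_i(x)> over all constraints, per solution
    return np.maximum(G, 0.0).sum(axis=1)

def g(x1, x2):
    # normalized constraints of the example problem (both as <= 0)
    g1 = 2 * (x1 - 0.1) * (x1 - 0.9) / 0.18
    g2 = -20 * (x1 - 0.4) * (x1 - 0.6) / 4.8
    return np.column_stack([g1, g2])

# x = (0.2, 0) lies in the first part of the Pareto set -> feasible (CV = 0)
# x = (0.5, 0) violates g2 -> infeasible (CV > 0)
X = np.array([[0.2, 0.0], [0.5, 0.0]])
cv = constraint_violation(g(X[:, 0], X[:, 1]))
```

A solution is feasible exactly when its summed violation is zero, which matches the case distinction in the formulas above.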

Finally, the optimization problem to be optimized using pymoo is defined by:

\begin{align} \label{eq:getting_started_pymoo} \begin{split} \min \;\; & f_1(x) = (x_1^2 + x_2^2) \\ \min \;\; & f_2(x) = (x_1-1)^2 + x_2^2 \\[1mm] \text{s.t.} \;\; & g_1(x) = 2 \, (x_1 - 0.1) \, (x_1 - 0.9) \, / \, 0.18 \leq 0\\ & g_2(x) = - 20 \, (x_1 - 0.4) \, (x_1 - 0.6) \, / \, 4.8 \leq 0\\[1mm] & -2 \leq x_1 \leq 2 \\ & -2 \leq x_2 \leq 2 \end{split} \end{align}

This getting started guide demonstrates three different ways of defining a problem:

• By Class

• Vectorized evaluation: A set of solutions is evaluated directly.

• Elementwise evaluation: Only one solution is evaluated at a time.

• By Functions: Functional interface as commonly defined in other optimization libraries.

Optional: Define a Pareto set and front for the optimization problem to track convergence to the analytically derived optimum/optima.

Please choose the most convenient implementation for your purpose.

### By Class¶

Defining a problem through a class allows defining it very naturally, provided the metadata, such as the number of variables and objectives, are known. The problem inherits from the Problem class. By calling the super() function in the constructor __init__, the problem properties such as the number of variables n_var, objectives n_obj and constraints n_constr are initialized. Furthermore, the lower xl and upper xu variable boundaries are supplied as NumPy arrays. Please note that most algorithms in our framework require the lower and upper boundaries to be provided and not to be equal to negative or positive infinity. Finally, the evaluation function _evaluate needs to be overridden to calculate the objective and constraint values.

#### Vectorized Evaluation¶

The _evaluate method takes a two-dimensional NumPy array X with n rows and m columns as an input. Each row represents an individual, and each column an optimization variable. After doing the necessary calculations, the objective values must be added to the dictionary out with the key F and the constraints with key G.

Note: This method is only called once per iteration for most algorithms. This gives you all the freedom to implement your own parallelization.

[2]:

import numpy as np
from pymoo.model.problem import Problem

class MyProblem(Problem):

    def __init__(self):
        super().__init__(n_var=2,
                         n_obj=2,
                         n_constr=2,
                         xl=np.array([-2,-2]),
                         xu=np.array([2,2]))

    def _evaluate(self, X, out, *args, **kwargs):
        f1 = X[:,0]**2 + X[:,1]**2
        f2 = (X[:,0]-1)**2 + X[:,1]**2

        g1 = 2*(X[:, 0]-0.1) * (X[:, 0]-0.9) / 0.18
        g2 = - 20*(X[:, 0]-0.4) * (X[:, 0]-0.6) / 4.8

        out["F"] = np.column_stack([f1, f2])
        out["G"] = np.column_stack([g1, g2])

vectorized_problem = MyProblem()


#### Elementwise Evaluation¶

The _evaluate method takes a one-dimensional NumPy array x with a number of entries equal to n_var. This behavior is enabled by setting elementwise_evaluation=True while calling the super() method.

Note: This method is called in each iteration for each solution exactly once.

[3]:

import numpy as np
from pymoo.util.misc import stack
from pymoo.model.problem import Problem

class MyProblem(Problem):

    def __init__(self):
        super().__init__(n_var=2,
                         n_obj=2,
                         n_constr=2,
                         xl=np.array([-2,-2]),
                         xu=np.array([2,2]),
                         elementwise_evaluation=True)

    def _evaluate(self, x, out, *args, **kwargs):
        f1 = x[0]**2 + x[1]**2
        f2 = (x[0]-1)**2 + x[1]**2

        g1 = 2*(x[0]-0.1) * (x[0]-0.9) / 0.18
        g2 = - 20*(x[0]-0.4) * (x[0]-0.6) / 4.8

        out["F"] = [f1, f2]
        out["G"] = [g1, g2]

elementwise_problem = MyProblem()


### By Functions¶

Defining a problem by functions is common in Python and available in many other optimization frameworks. It reduces the problem definition to the bare minimum, and the number of objectives and constraints is simply derived from the list of functions.

After having defined the functions, the problem object is created by initializing FunctionalProblem. Please note that the number of variables n_var must be passed as an argument.

Note: This definition is recommended when defining a problem through simple functions. It is worth noting that the evaluation can require many function calls. For instance, for 100 individuals with 2 objectives and 2 constraints, 400 function calls are necessary per evaluation, whereas a vectorized definition through the Problem class requires only a single function call. Moreover, if intermediate results are shared between objectives or constraints, they need to be calculated twice.

[4]:

import numpy as np
from pymoo.model.problem import FunctionalProblem

objs = [
    lambda x: x[0]**2 + x[1]**2,
    lambda x: (x[0]-1)**2 + x[1]**2
]

constr_ieq = [
    lambda x: 2*(x[0]-0.1) * (x[0]-0.9) / 0.18,
    lambda x: - 20*(x[0]-0.4) * (x[0]-0.6) / 4.8
]

functional_problem = FunctionalProblem(2,
                                       objs,
                                       constr_ieq=constr_ieq,
                                       xl=np.array([-2,-2]),
                                       xu=np.array([2,2]))



### (Optional) Pareto front (pf) and Pareto set (ps)¶

In this case, we have a test problem where the optimum is known. For illustration, we would like to measure the convergence of the algorithm to the known true optimum. Thus, we override the _calc_pareto_front and _calc_pareto_set methods for this purpose. Please note that both have to be mathematically derived.

Note: This is not necessary if your goal is solely optimizing a function. For test problems, this is usually done to measure and visualize the performance of an algorithm. The implementation of func_pf and func_ps looks as follows:

[5]:

from pymoo.util.misc import stack

def func_pf(flatten=True, **kwargs):
    f1_a = np.linspace(0.1**2, 0.4**2, 100)
    f2_a = (np.sqrt(f1_a) - 1)**2

    f1_b = np.linspace(0.6**2, 0.9**2, 100)
    f2_b = (np.sqrt(f1_b) - 1)**2

    a, b = np.column_stack([f1_a, f2_a]), np.column_stack([f1_b, f2_b])
    return stack(a, b, flatten=flatten)

def func_ps(flatten=True, **kwargs):
    x1_a = np.linspace(0.1, 0.4, 50)
    x1_b = np.linspace(0.6, 0.9, 50)
    x2 = np.zeros(50)

    a, b = np.column_stack([x1_a, x2]), np.column_stack([x1_b, x2])
    return stack(a, b, flatten=flatten)


This information can be passed to the definition via class or functions as follows:

#### Add to Class¶

[6]:

import numpy as np
from pymoo.util.misc import stack
from pymoo.model.problem import Problem

class MyTestProblem(MyProblem):

    def _calc_pareto_front(self, *args, **kwargs):
        return func_pf(**kwargs)

    def _calc_pareto_set(self, *args, **kwargs):
        return func_ps(**kwargs)

test_problem = MyTestProblem()


#### Add to Function¶

[7]:

from pymoo.model.problem import FunctionalProblem

functional_test_problem = FunctionalProblem(2,
                                            objs,
                                            constr_ieq=constr_ieq,
                                            xl=-2,
                                            xu=2,
                                            func_pf=func_pf,
                                            func_ps=func_ps
                                            )


### Initialize the object¶

Choose the way you have defined your problem and initialize it:

[8]:

problem = test_problem


Moreover, we would like to mention that implementations of many test optimization problems already exist. For example, the test problem ZDT1 can be instantiated by:

[9]:

from pymoo.factory import get_problem
zdt1 = get_problem("zdt1")


Our framework has various single- and many-objective optimization test problems already implemented. Furthermore, a more advanced guide for custom problem definitions is available. In case problem functions are computationally expensive, more sophisticated parallelization of the evaluation functions might be worth looking at.

## 3. Initialization of an Algorithm¶

Next, we need to initialize an algorithm to optimize the problem. In pymoo, factory methods create an algorithm object to be used for optimization. For each of those methods, an API documentation is available, and through supplying different parameters, algorithms can be customized in a plug-and-play manner. Depending on the optimization problem, different algorithms can be used to optimize the problem. Our framework offers various Algorithms, which can be used to solve problems with different characteristics.

In general, the choice of a suitable algorithm for an optimization problem is a challenge in itself. Whenever problem characteristics are known beforehand, we recommend exploiting them through customized operators. However, in our case, the optimization problem is rather simple, but the aspect of having two objectives and two constraints should be considered. We decided to use NSGA-II in its default configuration with minor modifications. We chose a population size of 40 (pop_size=40) and decided, instead of generating the same number of offspring, to create only 10 (n_offsprings=10). Such an implementation is a greedier variant and improves the convergence of rather simple optimization problems without difficulties regarding optimization, such as the existence of local Pareto fronts. Moreover, we enable a duplicate check (eliminate_duplicates=True), making sure that the mating produces offspring that are different from each other and from the existing population regarding their design space values. To illustrate the customization aspect, we listed the other unmodified default operators in the code snippet below.

[10]:

from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_sampling, get_crossover, get_mutation

algorithm = NSGA2(
    pop_size=40,
    n_offsprings=10,
    sampling=get_sampling("real_random"),
    crossover=get_crossover("real_sbx", prob=0.9, eta=15),
    mutation=get_mutation("real_pm", eta=20),
    eliminate_duplicates=True
)



The algorithm object contains the implementation of NSGA-II with the custom settings supplied to the factory method.

## 4. Definition of a Termination Criterion¶

Furthermore, a termination criterion needs to be defined to finally start the optimization procedure. Different kinds of Termination Criteria are available. Here, since the problem is rather simple, we run the algorithm for a fixed number of generations.

[11]:

from pymoo.factory import get_termination

termination = get_termination("n_gen", 40)


Instead of the number of generations (or iterations), other criteria such as the number of function evaluations or the improvement in design or objective space between generations can be used.

## 5. Optimize¶

Finally, we are solving the problem with the algorithm and termination criterion we have defined. In pymoo, we provide two interfaces for solving an optimization problem:

• Functional: Commonly in Python, a function is used as a global interface. In pymoo, the minimize method is the most crucial one; it is responsible for solving a problem using an algorithm together with further attributes such as seed and termination.

• Object Oriented: The object-oriented interface directly uses the algorithm object to perform an iteration. This allows the flexibility of executing custom code very quickly between iterations. However, features already implemented in the functional approach, such as displaying metrics, saving the history, or pre-defined callbacks, need to be incorporated manually.

Both ways have their benefits and drawbacks depending on the different use cases.

### Functional Interface¶

The functional interface is provided by the minimize method. By default, the method performs deep copies of the algorithm and the termination object, which means the objects are not altered during the function call. This ensures that repeated function calls end up with the same results. The minimize function returns a Result object, which provides attributes such as the optimum.

[12]:

from pymoo.optimize import minimize

res = minimize(problem,
               algorithm,
               termination,
               seed=1,
               save_history=True,
               verbose=True)

==========================================================================================
n_gen |  n_eval |   cv (min)   |   cv (avg)   |     igd      |      gd      |      hv
==========================================================================================
1 |      40 |  0.00000E+00 |  2.36399E+01 |  0.355456661 |  0.005362269 |  0.259340233
2 |      50 |  0.00000E+00 |  1.15486E+01 |  0.355456661 |  0.005362269 |  0.259340233
3 |      60 |  0.00000E+00 |  5.277918607 |  0.355456661 |  0.005362269 |  0.259340233
4 |      70 |  0.00000E+00 |  2.406068542 |  0.336177786 |  0.003791374 |  0.267747228
5 |      80 |  0.00000E+00 |  0.908316880 |  0.146187673 |  0.030825699 |  0.326821728
6 |      90 |  0.00000E+00 |  0.264746300 |  0.146187673 |  0.030825699 |  0.326821728
7 |     100 |  0.00000E+00 |  0.054063822 |  0.127166639 |  0.048434536 |  0.326821728
8 |     110 |  0.00000E+00 |  0.003060876 |  0.116635438 |  0.040756917 |  0.333328216
9 |     120 |  0.00000E+00 |  0.00000E+00 |  0.114104403 |  0.033780540 |  0.333467251
10 |     130 |  0.00000E+00 |  0.00000E+00 |  0.113905849 |  0.079757906 |  0.336916611
11 |     140 |  0.00000E+00 |  0.00000E+00 |  0.108326595 |  0.061746460 |  0.350973236
12 |     150 |  0.00000E+00 |  0.00000E+00 |  0.075994278 |  0.028407765 |  0.409709713
13 |     160 |  0.00000E+00 |  0.00000E+00 |  0.075900587 |  0.028006615 |  0.410363369
14 |     170 |  0.00000E+00 |  0.00000E+00 |  0.060504000 |  0.025567286 |  0.411662871
15 |     180 |  0.00000E+00 |  0.00000E+00 |  0.049446911 |  0.016289051 |  0.418530080
16 |     190 |  0.00000E+00 |  0.00000E+00 |  0.045458331 |  0.011383437 |  0.420489384
17 |     200 |  0.00000E+00 |  0.00000E+00 |  0.039469853 |  0.009911055 |  0.430486842
18 |     210 |  0.00000E+00 |  0.00000E+00 |  0.029430084 |  0.005849149 |  0.436066481
19 |     220 |  0.00000E+00 |  0.00000E+00 |  0.026914825 |  0.006440139 |  0.438071834
20 |     230 |  0.00000E+00 |  0.00000E+00 |  0.025819848 |  0.006597745 |  0.439045983
21 |     240 |  0.00000E+00 |  0.00000E+00 |  0.025538102 |  0.006092177 |  0.439313321
22 |     250 |  0.00000E+00 |  0.00000E+00 |  0.025430023 |  0.005831189 |  0.439357583
23 |     260 |  0.00000E+00 |  0.00000E+00 |  0.022907750 |  0.005325638 |  0.441118548
24 |     270 |  0.00000E+00 |  0.00000E+00 |  0.020729474 |  0.005408909 |  0.444667855
25 |     280 |  0.00000E+00 |  0.00000E+00 |  0.020685726 |  0.004700076 |  0.447352036
26 |     290 |  0.00000E+00 |  0.00000E+00 |  0.018021642 |  0.004673768 |  0.449131574
27 |     300 |  0.00000E+00 |  0.00000E+00 |  0.017734569 |  0.004981929 |  0.452104699
28 |     310 |  0.00000E+00 |  0.00000E+00 |  0.017256071 |  0.004906941 |  0.452453391
29 |     320 |  0.00000E+00 |  0.00000E+00 |  0.017099845 |  0.004821622 |  0.452586422
30 |     330 |  0.00000E+00 |  0.00000E+00 |  0.015604763 |  0.004715009 |  0.454343077
31 |     340 |  0.00000E+00 |  0.00000E+00 |  0.015494765 |  0.004509278 |  0.454689229
32 |     350 |  0.00000E+00 |  0.00000E+00 |  0.015494765 |  0.004509278 |  0.454689229
33 |     360 |  0.00000E+00 |  0.00000E+00 |  0.013269463 |  0.002882294 |  0.456133797
34 |     370 |  0.00000E+00 |  0.00000E+00 |  0.012326693 |  0.002872865 |  0.456726256
35 |     380 |  0.00000E+00 |  0.00000E+00 |  0.011929203 |  0.002718091 |  0.456859586
36 |     390 |  0.00000E+00 |  0.00000E+00 |  0.011452732 |  0.002668863 |  0.459103818
37 |     400 |  0.00000E+00 |  0.00000E+00 |  0.011187256 |  0.002467009 |  0.459278838
38 |     410 |  0.00000E+00 |  0.00000E+00 |  0.010961294 |  0.002442439 |  0.459731947
39 |     420 |  0.00000E+00 |  0.00000E+00 |  0.010982275 |  0.002847599 |  0.459564223
40 |     430 |  0.00000E+00 |  0.00000E+00 |  0.010485987 |  0.002833493 |  0.460033718


The Result object provides the corresponding X and F values and some more information.

### Object-Oriented Interface¶

On the contrary, the object-oriented approach directly modifies the algorithm object by calling the next method. Thus, it makes sense to create a deepcopy of the algorithm object beforehand, as shown in the code below. In the while loop, the algorithm object can be accessed to be modified or for other purposes.

NOTE: In this guide, we have used the functional interface because the history is used during analysis.

[13]:

import copy

# perform a copy of the algorithm to ensure reproducibility
obj = copy.deepcopy(algorithm)

# let the algorithm know what problem we are intending to solve and provide other attributes
obj.setup(problem, termination=termination, seed=1)

# loop until the termination criterion has been met
while obj.has_next():

    # perform an iteration of the algorithm
    obj.next()

    # access the algorithm to print some intermediate outputs
    print(f"gen: {obj.n_gen} n_nds: {len(obj.opt)} constr: {obj.opt.get('CV').min()} ideal: {obj.opt.get('F').min(axis=0)}")

# finally obtain the result object
result = obj.result()

gen: 1 n_nds: 1 constr: 0.0 ideal: [0.43280594 0.12244878]
gen: 2 n_nds: 1 constr: 0.0 ideal: [0.43280594 0.12244878]
gen: 3 n_nds: 1 constr: 0.0 ideal: [0.43280594 0.12244878]
gen: 4 n_nds: 2 constr: 0.0 ideal: [0.43280594 0.09718357]
gen: 5 n_nds: 3 constr: 0.0 ideal: [0.13614243 0.09718357]
gen: 6 n_nds: 3 constr: 0.0 ideal: [0.13614243 0.09718357]
gen: 7 n_nds: 4 constr: 0.0 ideal: [0.13614243 0.08467286]
gen: 8 n_nds: 5 constr: 0.0 ideal: [0.02736815 0.08467286]
gen: 9 n_nds: 6 constr: 0.0 ideal: [0.02347838 0.08420915]
gen: 10 n_nds: 6 constr: 0.0 ideal: [0.02317765 0.08420915]
gen: 11 n_nds: 7 constr: 0.0 ideal: [0.02317765 0.08420915]
gen: 12 n_nds: 7 constr: 0.0 ideal: [0.02317765 0.08420915]
gen: 13 n_nds: 7 constr: 0.0 ideal: [0.02317765 0.08420915]
gen: 14 n_nds: 9 constr: 0.0 ideal: [0.02317765 0.05440163]
gen: 15 n_nds: 10 constr: 0.0 ideal: [0.02317765 0.05440163]
gen: 16 n_nds: 11 constr: 0.0 ideal: [0.02317765 0.0166032 ]
gen: 17 n_nds: 14 constr: 0.0 ideal: [0.02317765 0.0166032 ]
gen: 18 n_nds: 16 constr: 0.0 ideal: [0.02150969 0.0166032 ]
gen: 19 n_nds: 18 constr: 0.0 ideal: [0.02150969 0.0166032 ]
gen: 20 n_nds: 20 constr: 0.0 ideal: [0.02150969 0.0166032 ]
gen: 21 n_nds: 22 constr: 0.0 ideal: [0.02150969 0.0166032 ]
gen: 22 n_nds: 24 constr: 0.0 ideal: [0.02150969 0.01627147]
gen: 23 n_nds: 27 constr: 0.0 ideal: [0.02150969 0.01320869]
gen: 24 n_nds: 28 constr: 0.0 ideal: [0.02006779 0.01320869]
gen: 25 n_nds: 28 constr: 0.0 ideal: [0.02006779 0.01320869]
gen: 26 n_nds: 30 constr: 0.0 ideal: [0.02006779 0.01320869]
gen: 27 n_nds: 33 constr: 0.0 ideal: [0.01896005 0.01320869]
gen: 28 n_nds: 36 constr: 0.0 ideal: [0.01896005 0.01320869]
gen: 29 n_nds: 37 constr: 0.0 ideal: [0.01896005 0.01320869]
gen: 30 n_nds: 38 constr: 0.0 ideal: [0.01896005 0.01320869]
gen: 31 n_nds: 40 constr: 0.0 ideal: [0.01896005 0.01200888]
gen: 32 n_nds: 40 constr: 0.0 ideal: [0.01896005 0.01200888]
gen: 33 n_nds: 40 constr: 0.0 ideal: [0.01896005 0.01200888]
gen: 34 n_nds: 40 constr: 0.0 ideal: [0.01805568 0.01200888]
gen: 35 n_nds: 40 constr: 0.0 ideal: [0.01805568 0.01200888]
gen: 36 n_nds: 40 constr: 0.0 ideal: [0.0163747  0.01200888]
gen: 37 n_nds: 40 constr: 0.0 ideal: [0.0163747  0.01180915]
gen: 38 n_nds: 40 constr: 0.0 ideal: [0.0163747  0.01180915]
gen: 39 n_nds: 40 constr: 0.0 ideal: [0.0163747  0.01180915]
gen: 40 n_nds: 40 constr: 0.0 ideal: [0.0163747  0.01180915]


## 6. Visualization of Results and Convergence¶

### Results¶

The optimization results are illustrated below (design and objective space). The solid lines represent the analytically derived Pareto set and front in the corresponding space, and the circles represent solutions found by the algorithm. It can be observed that the algorithm was able to converge, and a set of nearly optimal solutions was obtained.

[14]:

from pymoo.visualization.scatter import Scatter

# get the pareto-set and pareto-front for plotting
ps = problem.pareto_set(use_cache=False, flatten=False)
pf = problem.pareto_front(use_cache=False, flatten=False)

# Design Space
plot = Scatter(title="Design Space", axis_labels="x")
plot.add(res.X, s=30, facecolors='none', edgecolors='r')
if ps is not None:
    plot.add(ps, plot_type="line", color="black", alpha=0.7)
plot.do()
plot.apply(lambda ax: ax.set_xlim(-0.5, 1.5))
plot.apply(lambda ax: ax.set_ylim(-2, 2))
plot.show()

# Objective Space
plot = Scatter(title="Objective Space")
plot.add(res.F)
if pf is not None:
    plot.add(pf, plot_type="line", color="black", alpha=0.7)
plot.show()


Visualization is a vital post-processing step in multi-objective optimization. Although it seems to be pretty easy for our example optimization problem, it becomes much more difficult in higher dimensions where trade-offs between solutions are not readily observable. For visualizations in higher dimensions, various more advanced Visualizations are implemented in our framework.

### Convergence¶

A non-negligible step is the post-processing after having obtained the results. We strongly recommend not only analyzing the final result but also the algorithm's behavior. This gives more insights into the convergence of the algorithm.

For such an analysis, intermediate steps of the algorithm need to be considered. This can be achieved by either:

• A Callback class storing the necessary information in each iteration of the algorithm.

• Enabling the save_history flag when calling the minimize method to store a deep copy of the algorithm object in each iteration.

We provide some more details about each variant in our convergence tutorial. As you might have already seen, we have set save_history=True when calling the minimize method in this getting started guide and, thus, will use the history for our analysis. Moreover, we need to decide what metric should be used to measure the performance of our algorithm. In this tutorial, we are going to use Hypervolume and IGD. Feel free to look at our performance indicators to find more information about metrics to measure the performance of multi-objective algorithms.

As a first step we have to extract the population in each generation of the algorithm. We extract the constraint violation (cv), the objective space values (F) and the number of function evaluations (n_evals) of the corresponding generation.

[15]:

n_evals = []    # corresponding number of function evaluations
F = []          # the objective space values in each generation
cv = []         # constraint violation in each generation

# iterate over the deepcopies of algorithms
for algorithm in res.history:

# store the number of function evaluations
n_evals.append(algorithm.evaluator.n_eval)

# retrieve the optimum from the algorithm
opt = algorithm.opt

# store the least contraint violation in this generation
cv.append(opt.get("CV").min())

# filter out only the feasible and append
feas = np.where(opt.get("feasible"))[0]
_F = opt.get("F")[feas]
F.append(_F)


NOTE: If your problem has different scales on the objectives (e.g. the first objective in the range [0.1, 0.2] and the second in [100, 10000]), you HAVE to normalize to measure the performance in a meaningful way! This example assumes no normalization is necessary to keep things simple.
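If normalization were needed, a minimal sketch could scale each objective to the unit interval. This assumes known ideal and nadir points and is only illustrative (pymoo's indicators can also normalize internally via their normalize flag); the sample values below are made up:

```python
import numpy as np

def normalize(F, ideal, nadir):
    # scale each objective column to [0, 1] using known ideal and nadir points
    return (F - ideal) / (nadir - ideal)

# hypothetical objective values with very different scales
F = np.array([[0.15, 9500.0],
              [0.10,  100.0]])
ideal, nadir = np.array([0.1, 100.0]), np.array([0.2, 10000.0])
FN = normalize(F, ideal, nadir)  # both columns now comparable in scale
```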

### Constraint Violation (CV)¶

Here, a feasible solution was already found in the first generation. Since the constraints of the problem are rather simple, they are already satisfied in the initial population.

[16]:

import matplotlib.pyplot as plt

k = min([i for i in range(len(cv)) if cv[i] <= 0])
first_feas_evals = n_evals[k]
print(f"First feasible solution found after {first_feas_evals} evaluations")

plt.plot(n_evals, cv, '--', label="CV")
plt.scatter(first_feas_evals, cv[k], color="red", label="First Feasible")
plt.xlabel("Function Evaluations")
plt.ylabel("Constraint Violation (CV)")
plt.legend()
plt.show()

First feasible solution found after 40 evaluations


### Hypervolume (HV)¶

Hypervolume is a very well-known performance indicator for multi-objective problems. It is known to be Pareto-compliant and is based on the volume between a predefined reference point and the solution set provided. Hypervolume requires defining a reference point ref_point, which shall be larger than the maximum value of the Pareto front.

Note: Hypervolume becomes computationally expensive with increasing dimensionality. The exact hypervolume can be calculated efficiently for 2 and 3 objectives. For higher dimensions, some researchers use a hypervolume approximation, which is not available yet in pymoo.
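For intuition, the exact bi-objective hypervolume can be computed by sweeping over the non-dominated points sorted by the first objective. This is an illustrative sketch, not pymoo's implementation:

```python
import numpy as np

def hv_2d(F, ref):
    # exact hypervolume of a non-dominated 2-objective set w.r.t. a reference point
    F = F[np.argsort(F[:, 0])]   # sort by f1; for a non-dominated set, f2 then decreases
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in F:
        # each point contributes a rectangle between itself, the previous f2 level and ref
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

F = np.array([[0.0, 1.0], [1.0, 0.0]])
print(hv_2d(F, ref=np.array([2.0, 2.0])))  # 3.0
```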

[17]:

import matplotlib.pyplot as plt
from pymoo.performance_indicator.hv import Hypervolume

# MODIFY - this is problem dependent
ref_point = np.array([1.0, 1.0])

# create the performance indicator object with reference point
metric = Hypervolume(ref_point=ref_point, normalize=False)

# calculate for each generation the HV metric
hv = [metric.calc(f) for f in F]

# visualize the convergence curve
plt.plot(n_evals, hv, '-o', markersize=4, linewidth=2)
plt.title("Convergence")
plt.xlabel("Function Evaluations")
plt.ylabel("Hypervolume")
plt.show()


### IGD¶

For IGD, the Pareto front needs to be known or approximated. In our framework, the Pareto front of test problems can be obtained by:

[18]:

pf = problem.pareto_front(flatten=True, use_cache=False)


For real-world problems, you have to use an approximation. An approximation can be obtained by running an algorithm a couple of times and extracting the non-dominated solutions out of all solution sets. If you have only a single run, an alternative is to use the non-dominated set of solutions found so far as an approximation. However, the result then only indicates how much the algorithm has progressed towards converging to its final set.
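Extracting the non-dominated subset from the merged solution sets of several runs can be sketched as follows. pymoo provides non-dominated sorting utilities for this; the standalone helper below is only for illustration:

```python
import numpy as np

def non_dominated(F):
    # keep every point that no other point dominates (minimization of all objectives)
    F = np.asarray(F, dtype=float)
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        for j in range(len(F)):
            # j dominates i if it is <= in every objective and < in at least one
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False
                break
    return F[keep]

# merged objective values from multiple (hypothetical) runs
merged = np.array([[1.0, 2.0], [2.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
approx_pf = non_dominated(merged)  # [[1, 2], [2, 1]]
```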

[19]:

import matplotlib.pyplot as plt
from pymoo.performance_indicator.igd import IGD

if pf is not None:

    # for this test problem, no normalization for post-processing is needed since the scales are similar
    normalize = False

    metric = IGD(pf=pf, normalize=normalize)

    # calculate for each generation the IGD metric
    igd = [metric.calc(f) for f in F]

    # visualize the convergence curve
    plt.plot(n_evals, igd, '-o', markersize=4, linewidth=2, color="green")
    plt.yscale("log")          # enable log scale if desired
    plt.title("Convergence")
    plt.xlabel("Function Evaluations")
    plt.ylabel("IGD")
    plt.show()


### Running Metric¶

Another way of analyzing a run when the true Pareto front is not known is the recently proposed running metric. The running metric shows the difference in the objective space from one generation to another and uses the algorithm's survival to visualize the improvement. This metric is also used in pymoo to determine the termination of a multi-objective optimization algorithm if no default termination criterion has been defined.

For instance, this analysis reveals that the algorithm was able to improve from the 4th to the 5th generation significantly.

[20]:

from pymoo.util.running_metric import RunningMetric

running = RunningMetric(delta_gen=5,
                        n_plots=3,
                        only_if_n_plots=True,
                        key_press=False,
                        do_show=True)

for algorithm in res.history[:15]:
    running.notify(algorithm)


Plotting until the final population shows that the algorithm seems to have more or less converged, and only a small improvement has been made.

[21]:

from pymoo.util.running_metric import RunningMetric

running = RunningMetric(delta_gen=10,
                        n_plots=4,
                        only_if_n_plots=True,
                        key_press=False,
                        do_show=True)

for algorithm in res.history:
    running.notify(algorithm)


## 7. Summary¶

We hope you have enjoyed the getting started guide. For more topics, we refer to each section covered on the landing page. If you have any questions or concerns, do not hesitate to contact us.

### Citation¶

If you have used pymoo for research purposes, please cite our framework in your reports or publications with:

@ARTICLE{pymoo,
    author={J. {Blank} and K. {Deb}},
    journal={IEEE Access},
    title={Pymoo: Multi-Objective Optimization in Python},
    year={2020},
    volume={8},
    number={},
    pages={89497-89509},
}


## 8. Source Code¶

In this guide, we have provided a couple of options for defining your problem and running the optimization. You might have already copied the code into your IDE. However, if not, the following code snippet covers the problem definition, algorithm initialization, solving the optimization problem, and visualization of the non-dominated set of solutions altogether.

[22]:

import numpy as np

from pymoo.algorithms.nsga2 import NSGA2
from pymoo.model.problem import Problem
from pymoo.optimize import minimize
from pymoo.visualization.scatter import Scatter

class MyProblem(Problem):

    def __init__(self):
        super().__init__(n_var=2,
                         n_obj=2,
                         n_constr=2,
                         xl=np.array([-2, -2]),
                         xu=np.array([2, 2]),
                         elementwise_evaluation=True)

    def _evaluate(self, x, out, *args, **kwargs):
        f1 = x[0] ** 2 + x[1] ** 2
        f2 = (x[0] - 1) ** 2 + x[1] ** 2

        g1 = 2 * (x[0] - 0.1) * (x[0] - 0.9) / 0.18
        g2 = - 20 * (x[0] - 0.4) * (x[0] - 0.6) / 4.8

        out["F"] = [f1, f2]
        out["G"] = [g1, g2]

problem = MyProblem()

algorithm = NSGA2(pop_size=100)

res = minimize(problem,
               algorithm,
               ("n_gen", 100),
               verbose=True,
               seed=1)

plot = Scatter()
plot.add(res.F, color="red")
plot.show()