Usage

Overview

Functional

As you might be used to from other frameworks, pymoo offers a functional interface for executing algorithms. It requires passing the problem to be solved, the algorithm to be used, and optionally (but recommended for most algorithms) a termination condition. Other important arguments are discussed in the Interface tutorial. For executing custom code in between iterations, the Callback object can be useful (see the sketch after the example below). Moreover, it is worth noting that the algorithm object is cloned before being modified. Thus, two calls with the same algorithm object and random seed lead to the same result.

[1]:
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.problems import get_problem
from pymoo.optimize import minimize

problem = get_problem("zdt1")

algorithm = NSGA2(pop_size=100)

res = minimize(problem,
               algorithm,
               ('n_gen', 10),
               seed=1,
               verbose=True)

# calculate a hash to show that all executions end with the same result
print("hash", res.F.sum())
==========================================================================
n_gen  |  n_eval  | n_nds  |      igd      |       gd      |       hv
==========================================================================
     1 |      100 |     18 |  1.9687500927 |  2.6048048316 |  0.000000E+00
     2 |      200 |     22 |  1.9687500927 |  2.6551717460 |  0.000000E+00
     3 |      300 |     16 |  1.9156076841 |  2.6054011843 |  0.000000E+00
     4 |      400 |     27 |  1.9156076841 |  2.5981861153 |  0.000000E+00
     5 |      500 |     15 |  1.7845109513 |  2.5097347961 |  0.000000E+00
     6 |      600 |     15 |  1.5665669246 |  1.9741836262 |  0.000000E+00
     7 |      700 |     18 |  1.4888433157 |  1.9581850627 |  0.000000E+00
     8 |      800 |     19 |  1.4536833012 |  1.7639127072 |  0.000000E+00
     9 |      900 |     15 |  1.3258230370 |  1.8118192650 |  0.000000E+00
    10 |     1000 |     20 |  1.1683545980 |  1.7447809684 |  0.000000E+00
hash 58.62964054306852
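
As mentioned above, the Callback object can be used to execute custom code in between iterations of the functional interface. A minimal sketch (following pymoo's Callback API; the class name and the stored quantity are just illustrative choices):

from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.callback import Callback
from pymoo.optimize import minimize
from pymoo.problems import get_problem


class BestObjectiveCallback(Callback):

    def __init__(self):
        super().__init__()
        # collect one value per generation
        self.data["best"] = []

    def notify(self, algorithm):
        # smallest objective value in the current population
        self.data["best"].append(algorithm.pop.get("F").min())


res = minimize(get_problem("zdt1"),
               NSGA2(pop_size=100),
               ('n_gen', 10),
               seed=1,
               callback=BestObjectiveCallback(),
               verbose=False)

# the collected values are available through the callback attached to the result
print(len(res.algorithm.callback.data["best"]))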

Object-oriented

Instead of passing the algorithm to the minimize function, it can also be used directly for optimization. The first way, using the next function, is available for all algorithms in pymoo. The second way provides a convenient Ask and Tell interface, available for most evolutionary algorithms. The reason to use one or the other interface is to gain more control during the algorithm's execution or even to modify the algorithm object while injecting new solutions.

Next Function

Directly using the algorithm object will modify its state during runtime. This allows asking the object whether one more iteration shall be executed by calling algorithm.has_next(). As soon as the termination criterion has been satisfied, this call returns False and the run ends. Here, we show a custom printout in each iteration (from the second iteration on). Of course, more sophisticated procedures can be incorporated.

[2]:
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.problems import get_problem

problem = get_problem("zdt1")

algorithm = NSGA2(pop_size=100)

# prepare the algorithm to solve the specific problem (same arguments as for the minimize function)
algorithm.setup(problem, termination=('n_gen', 10), seed=1, verbose=False)

# loop until the algorithm has terminated
while algorithm.has_next():

    # do the next iteration
    algorithm.next()

    # do some more things, e.g. printing, logging, storing or even modifying the algorithm object
    print(algorithm.n_gen, algorithm.evaluator.n_eval)


# obtain the result object from the algorithm
res = algorithm.result()

# calculate a hash to show that all executions end with the same result
print("hash", res.F.sum())
2 100
3 200
4 300
5 400
6 500
7 600
8 700
9 800
10 900
11 1000
hash 58.62964054306852
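
Because the loop is under your control, custom stopping logic can be added on top of (or instead of) the termination passed to setup. A minimal sketch, stopping once a fixed evaluation budget is exhausted:

from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.problems import get_problem

problem = get_problem("zdt1")

algorithm = NSGA2(pop_size=100)
algorithm.setup(problem, termination=('n_gen', 100), seed=1, verbose=False)

while algorithm.has_next():

    # do the next iteration
    algorithm.next()

    # custom stopping rule: break as soon as the evaluation budget is used up
    if algorithm.evaluator.n_eval >= 1000:
        break

res = algorithm.result()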

Ask and Tell

The next method already provides much more control over the algorithm's execution than the functional interface. However, the call of the next function on the algorithm object is still a black box. This is where the Ask and Tell interface comes into play. Instead of calling one function, two function calls are executed: first, algorithm.ask() returns a solution set to be evaluated, and second, algorithm.tell(solutions) receives the evaluated solutions to proceed to the next generation. This gives even further control over the run.

Problem-Dependent

A possible implementation using this interface can look as follows:

[3]:
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.problems import get_problem

problem = get_problem("zdt1")

algorithm = NSGA2(pop_size=100)

# prepare the algorithm to solve the specific problem (same arguments as for the minimize function)
algorithm.setup(problem, termination=('n_gen', 10), seed=1, verbose=False)

# loop until the algorithm has terminated
while algorithm.has_next():

    # ask the algorithm for the next solution to be evaluated
    pop = algorithm.ask()

    # evaluate the individuals using the algorithm's evaluator (necessary to count evaluations for termination)
    algorithm.evaluator.eval(problem, pop)

    # hand back the individuals, which have been evaluated (or even modified)
    algorithm.tell(infills=pop)

    # do some more things, e.g. printing, logging, storing or even modifying the algorithm object
    print(algorithm.n_gen, algorithm.evaluator.n_eval)

# obtain the result object from the algorithm
res = algorithm.result()

# calculate a hash to show that all executions end with the same result
print("hash", res.F.sum())
2 100
3 200
4 300
5 400
6 500
7 600
8 700
9 800
10 900
11 1000
hash 58.62964054306852
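
Since the individuals handed back via tell may have been modified, the loop can also alter the candidate solutions before evaluating them, which is one way of injecting domain knowledge into the run. A minimal sketch, assuming (purely for illustration) that the design variables are rounded to one decimal before evaluation:

import numpy as np

from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.problems import get_problem

problem = get_problem("zdt1")

algorithm = NSGA2(pop_size=100)
algorithm.setup(problem, termination=('n_gen', 10), seed=1, verbose=False)

while algorithm.has_next():

    # ask for the next set of candidate solutions
    pop = algorithm.ask()

    # modify the design variables before evaluation (illustrative only)
    pop.set("X", np.round(pop.get("X"), 1))

    # evaluate the modified individuals and hand them back to the algorithm
    algorithm.evaluator.eval(problem, pop)
    algorithm.tell(infills=pop)

res = algorithm.result()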

Problem-Independent

Since the evaluation is simply the step between the ask and tell calls, the evaluation function of the problem (_evaluate) is not even necessary anymore, and the evaluation can be moved into the for-loop. We refer to this as problem-independent execution. However, even in this case, some meta-data about the problem (number of variables, objectives, bounds) needs to be provided.

[4]:
import numpy as np

from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.evaluator import Evaluator
from pymoo.core.problem import Problem
from pymoo.core.termination import NoTermination
from pymoo.problems.static import StaticProblem

problem = Problem(n_var=30, n_obj=2, n_constr=0, xl=np.zeros(30), xu=np.ones(30))

# create the algorithm object
algorithm = NSGA2(pop_size=100)

# let the algorithm object never terminate and let the loop control it
termination = NoTermination()

# setup the algorithm with the problem and the termination object
algorithm.setup(problem, termination=termination)

# fix the random seed manually
np.random.seed(1)

# run the algorithm for ten iterations; the loop controls when to stop
for n_gen in range(10):
    # ask the algorithm for the next solution to be evaluated
    pop = algorithm.ask()

    # get the design space values of the algorithm
    X = pop.get("X")

    # implement your evaluation function, here ZDT1
    f1 = X[:, 0]
    v = 1 + 9.0 / (problem.n_var - 1) * np.sum(X[:, 1:], axis=1)
    f2 = v * (1 - np.power((f1 / v), 0.5))
    F = np.column_stack([f1, f2])

    static = StaticProblem(problem, F=F)
    Evaluator().eval(static, pop)

    # hand back the individuals, which have been evaluated (or even modified)
    algorithm.tell(infills=pop)

    # do some more things, e.g. printing, logging, storing or even modifying the algorithm object
    print(algorithm.n_gen)

# obtain the result object from the algorithm
res = algorithm.result()

# calculate a hash to show that all executions end with the same result
print("hash", res.F.sum())
2
3
4
5
6
7
8
9
10
11
hash 58.62964054306852
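
Regardless of which interface is used, the returned Result object gives access to the non-dominated solutions that were found, for example their design space values res.X and objective space values res.F:

# design space and objective space values of the obtained non-dominated solutions
print(res.X.shape)
print(res.F.shape)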