Version: 0.6.0

# Callback

A Callback class can be used to receive a notification from the algorithm object in each generation. This is useful to track metrics, perform additional calculations, or even modify the algorithm object during the run. The latter is only recommended for experienced users.

The example below implements a less memory-intensive way of keeping track of convergence. A posteriori analysis can, on the one hand, be done by using the save_history=True option, which stores a deep copy of the algorithm object in each iteration. This might be more information than necessary, however, and thus the Callback allows selecting only the information that needs to be analyzed once the run has terminated. Another good use case is visualizing data in each iteration in real time.

Tip

The callback has full access to the algorithm object and thus can also alter it. However, the callback’s purpose is not to customize an algorithm but to store or process data.
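To make the mechanism concrete, here is a minimal, framework-free sketch of the pattern: an object with a data dictionary whose notify method is invoked once per generation. The class and the driver loop below are illustrative stand-ins, not pymoo internals; pymoo passes the full Algorithm object to notify rather than a plain dict.

```python
class MiniCallback:
    """Illustrative stand-in for pymoo's Callback pattern (not its implementation)."""

    def __init__(self):
        # Accumulate only what is needed for the a posteriori analysis.
        self.data = {"best": []}

    def notify(self, algorithm):
        # The driver calls this once per generation; here "algorithm" is a
        # plain dict exposing the current objective values.
        self.data["best"].append(min(algorithm["F"]))


callback = MiniCallback()
# Simulate three generations of objective values.
for gen_f in ([3.0, 2.5, 4.1], [2.2, 2.9, 3.0], [1.8, 2.0, 2.1]):
    callback.notify({"F": gen_f})

print(callback.data["best"])  # [2.5, 2.2, 1.8]
```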

[1]:

import matplotlib.pyplot as plt
import numpy as np

from pymoo.algorithms.soo.nonconvex.ga import GA
from pymoo.problems import get_problem
from pymoo.core.callback import Callback
from pymoo.optimize import minimize

class MyCallback(Callback):

    def __init__(self) -> None:
        super().__init__()
        self.data["best"] = []

    def notify(self, algorithm):
        self.data["best"].append(algorithm.pop.get("F").min())

problem = get_problem("sphere")

algorithm = GA(pop_size=100)

res = minimize(problem,
               algorithm,
               ('n_gen', 20),
               seed=1,
               callback=MyCallback(),
               verbose=True)

val = res.algorithm.callback.data["best"]
plt.plot(np.arange(len(val)), val)
plt.show()

=================================================
n_gen  |  n_eval  |     f_avg     |     f_min
=================================================
     1 |      100 |  0.8314974785 |  0.3870993357
     2 |      200 |  0.5704031312 |  0.3057138275
     3 |      300 |  0.4412829559 |  0.2828397417
     4 |      400 |  0.3613782913 |  0.1537467268
     5 |      500 |  0.2943841066 |  0.1248852603
     6 |      600 |  0.2390656265 |  0.0915007895
     7 |      700 |  0.1853366629 |  0.0915007895
     8 |      800 |  0.1387657210 |  0.0754750567
     9 |      900 |  0.1094417023 |  0.0602953187
    10 |     1000 |  0.0857317521 |  0.0421502116
    11 |     1100 |  0.0687946933 |  0.0103089979
    12 |     1200 |  0.0542747790 |  0.0103089979
    13 |     1300 |  0.0405142603 |  0.0103089979
    14 |     1400 |  0.0295846547 |  0.0090169601
    15 |     1500 |  0.0194033707 |  0.0090169601
    16 |     1600 |  0.0138729323 |  0.0075106464
    17 |     1700 |  0.0110389363 |  0.0058717921
    18 |     1800 |  0.0092084195 |  0.0052438445
    19 |     1900 |  0.0075903176 |  0.0014825906
    20 |     2000 |  0.0063122006 |  0.0014825906


Note that the Callback object needs to be accessed from the Result object via res.algorithm.callback, because the original object passed to minimize remains unmodified to ensure reproducibility.
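The reason the original stays untouched is that the optimizer works on a deep copy of the objects it mutates. A small hedged sketch of that behavior with plain Python (illustrative MiniCallback class, not pymoo code):

```python
import copy


class MiniCallback:
    """Toy callback holding a mutable data dictionary."""

    def __init__(self):
        self.data = {"best": []}


original = MiniCallback()
working = copy.deepcopy(original)   # what the optimizer actually mutates
working.data["best"].append(0.42)   # results accumulate on the copy only

print(original.data["best"])  # []     -- the caller's object is unmodified
print(working.data["best"])   # [0.42] -- access results via the run's copy
```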

For completeness, the history-based convergence analysis looks as follows:

[2]:

res = minimize(problem,
               algorithm,
               ('n_gen', 20),
               seed=1,
               save_history=True)

val = [e.opt.get("F")[0] for e in res.history]
plt.plot(np.arange(len(val)), val)
plt.show()
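For intuition about what save_history provides: it effectively appends a deep-copied snapshot of the algorithm each generation, which can then be queried after the run. A framework-free sketch with an illustrative MiniAlgorithm class (not pymoo internals), analogous to the list comprehension over res.history above:

```python
import copy


class MiniAlgorithm:
    """Toy algorithm state; a real Algorithm object carries far more."""

    def __init__(self):
        self.best_f = None


history = []
algo = MiniAlgorithm()
for best in (2.5, 2.2, 1.8):             # pretend best objective per generation
    algo.best_f = best
    history.append(copy.deepcopy(algo))  # snapshot of the entire state

# A posteriori extraction from the snapshots:
val = [e.best_f for e in history]
print(val)  # [2.5, 2.2, 1.8]
```

Because each snapshot copies the whole state, this is the more memory-hungry alternative to storing only the values of interest in a Callback.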