pymoo
Latest Version: pymoo==0.3.2

Repair Operator

The repair operator is mostly problem dependent. Most commonly, it is used to make sure the algorithm only searches in the feasible space. It is applied after the offspring have been reproduced. In the following, we use the knapsack problem to demonstrate the repair operator in pymoo.

In the well-known Knapsack Problem, a knapsack has to be filled with items without violating the maximum weight constraint. Each item \(j\) has a value \(b_j \geq 0\) and a weight \(w_j \geq 0\), where \(j \in \{1, .., m\}\). The binary decision vector \(z = (z_1, .., z_m)\) defines whether an item is picked or not. The aim is to maximize the profit \(g(z)\):

\begin{eqnarray} \max & & g(z) \\[2mm] \notag \text{s.t.} & & \sum_{j=1}^m z_j \, w_j \leq Q \\[1mm] \notag & & z = (z_1, .., z_m) \in \mathbb{B}^m \\[1mm] \notag g(z) & = & \sum_{j=1}^{m} z_j \, b_j \\[2mm] \notag \end{eqnarray}
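To make the objective and the constraint concrete, the profit \(g(z)\) and the packed weight can be evaluated directly with NumPy. The values, weights, and capacity below are a made-up toy instance, not the random problem used later:

```python
import numpy as np

# a hypothetical toy knapsack instance (values chosen for illustration only)
b = np.array([10, 40, 30, 50])   # item values b_j
w = np.array([5, 4, 6, 3])       # item weights w_j
Q = 10                           # maximum capacity

# a candidate packing plan: pick items 2 and 4
z = np.array([0, 1, 0, 1], dtype=bool)

profit = (z * b).sum()           # g(z) = sum of picked values
weight = (z * w).sum()           # total weight of picked items
feasible = weight <= Q           # the knapsack constraint

print(profit, weight, feasible)  # 90 7 True
```

The same vectorized expressions appear later in the repair operator, where `(Z * problem.W).sum(axis=1)` computes the weight of every individual in the population at once.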

A simple GA will produce some infeasible evaluations in the beginning and then concentrate on the feasible space.

[1]:
from pymoo.factory import get_algorithm, get_crossover, get_mutation, get_sampling
from pymoo.optimize import minimize
from pymoo.problems.single.knapsack import create_random_knapsack_problem

problem = create_random_knapsack_problem(30)

algorithm = get_algorithm("ga",
                       pop_size=200,
                       sampling=get_sampling("bin_random"),
                       crossover=get_crossover("bin_hux"),
                       mutation=get_mutation("bin_bitflip"),
                       eliminate_duplicates=True)

res = minimize(problem,
               algorithm,
               termination=('n_gen', 10),
               verbose=True)
======================================================================
n_gen | n_eval  | cv (min/avg)                | favg  | fopt
======================================================================
1     | 200     | 2.360000E+02 / 5.185100E+02 | -     | -
2     | 400     | 8.400000E+01 / 3.840100E+02 | -     | -
3     | 600     | 3.900000E+01 / 2.932100E+02 | -     | -
4     | 800     | 3.900000E+01 / 2.193950E+02 | -     | -
5     | 1000    | 0.0000000000 / 1.475900E+02 | -247.1666666667 | -349.0000000000
6     | 1200    | 0.0000000000 / 8.672500E+01 | -272.7894736842 | -484.0000000000
7     | 1400    | 0.0000000000 / 4.091500E+01 | -269.9361702128 | -484.0000000000
8     | 1600    | 0.0000000000 / 1.184500E+01 | -287.2038834951 | -517.0000000000
9     | 1800    | 0.0000000000 / 0.0050000000 | -285.8693467337 | -522.0000000000
10    | 2000    | 0.0000000000 / 0.0000000000 | -360.6150000000 | -677.0000000000

The constraint \(\sum_{j=1}^m z_j \, w_j \leq Q\) is fairly easy to satisfy. Therefore, we can make sure it is not violated by repairing each individual before the objective function is evaluated. For this purpose, a repair class has to be defined which receives the population as input and returns the repaired population.

[2]:
import numpy as np
from pymoo.model.repair import Repair


class ConsiderMaximumWeightRepair(Repair):

    def _do(self, problem, pop, **kwargs):

        # maximum capacity for the problem
        Q = problem.C

        # the packing plan for the whole population (each row one individual)
        Z = pop.get("X")

        # the corresponding weight of each individual
        weights = (Z * problem.W).sum(axis=1)

        # now repair each individual i
        for i in range(len(Z)):

            # the packing plan for i
            z = Z[i]

            # while the maximum capacity violation holds
            while weights[i] > Q:

                # randomly select an item currently picked
                item_to_remove = np.random.choice(np.where(z)[0])

                # and remove it
                z[item_to_remove] = False

                # adjust the weight
                weights[i] -= problem.W[item_to_remove]

        # set the design variables for the population
        pop.set("X", Z)
        return pop
[3]:
algorithm.repair = ConsiderMaximumWeightRepair()

res = minimize(problem,
               algorithm,
               termination=('n_gen', 10),
               verbose=True)

======================================================================
n_gen | n_eval  | cv (min/avg)                | favg         | fopt
======================================================================
1     | 200     | 0.0000000000 / 0.0000000000 | -127.7100000000 | -465.0000000000
2     | 400     | 0.0000000000 / 0.0000000000 | -214.0100000000 | -465.0000000000
3     | 600     | 0.0000000000 / 0.0000000000 | -279.7350000000 | -525.0000000000
4     | 800     | 0.0000000000 / 0.0000000000 | -323.3800000000 | -525.0000000000
5     | 1000    | 0.0000000000 / 0.0000000000 | -367.3500000000 | -552.0000000000
6     | 1200    | 0.0000000000 / 0.0000000000 | -413.2600000000 | -581.0000000000
7     | 1400    | 0.0000000000 / 0.0000000000 | -457.9950000000 | -604.0000000000
8     | 1600    | 0.0000000000 / 0.0000000000 | -490.6850000000 | -657.0000000000
9     | 1800    | 0.0000000000 / 0.0000000000 | -519.4600000000 | -670.0000000000
10    | 2000    | 0.0000000000 / 0.0000000000 | -541.8750000000 | -670.0000000000

As can be seen, the repair operator ensures that no infeasible solution is evaluated. Even though this example is quite simple, the repair operator is especially useful for more complex constraints where domain-specific knowledge is available.
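The core of the repair above is independent of pymoo: picked items are removed at random until the plan fits into the knapsack. A minimal standalone sketch of that loop, using hypothetical toy weights rather than the problem instance from this example, looks like this:

```python
import numpy as np

def repair_plan(z, W, Q, rng):
    """Randomly drop picked items until the packing plan satisfies the capacity Q."""
    z = z.copy()
    while (z * W).sum() > Q:
        # choose one of the currently picked items at random
        item_to_remove = rng.choice(np.where(z)[0])
        # and unpick it
        z[item_to_remove] = False
    return z

# hypothetical toy data for illustration
W = np.array([6, 4, 8, 5, 3])          # item weights
Q = 10                                 # capacity
rng = np.random.default_rng(0)

z = np.ones(5, dtype=bool)             # everything picked: total weight 26, infeasible
repaired = repair_plan(z, W, Q, rng)

print((repaired * W).sum() <= Q)       # True
```

Because items are removed greedily at random, the repaired plan is feasible but not necessarily optimal; the GA's selection pressure is what drives the population toward high-profit feasible plans.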