pymoo
Latest Version: pymoo==0.3.2

Subset Selection Problem

A genetic algorithm can be used to approach subset selection problems by defining custom operators. In general, a metaheuristic algorithm might not be the method of choice for a real-world deployment; however, it can be useful for investigating patterns or characteristics of good subsets. Let us consider a simple toy problem where numbers have to be selected from a list. Each solution must select exactly 10 numbers, and the sum of the selected numbers is to be minimized. For subset selection problems, a binary encoding can be used where a 1 indicates that a number is picked. In our problem formulation, the list of numbers is represented by \(L\) and the binary decision variable by \(x\).

\begin{align}
\begin{split}
\min f(x) & = & \sum_{k=1}^{n} L_k \cdot x_k\\[2mm]
\text{s.t.} \quad g(x) & = & \left(\sum_{k=1}^{n} x_k - 10\right)^2 \leq 0\\[2mm]
\end{split}
\end{align}

As shown above, the equality constraint is handled by squaring the violation: \(g(x)\) is zero if and only if exactly 10 numbers are chosen and positive otherwise, so it can be treated like an ordinary inequality constraint. The problem can be implemented as follows:

[1]:
import numpy as np
from pymoo.model.problem import Problem

class SubsetProblem(Problem):
    def __init__(self,
                 L,
                 n_max
                 ):
        super().__init__(n_var=len(L), n_obj=1, n_constr=1, elementwise_evaluation=True)
        self.L = L
        self.n_max = n_max

    def _evaluate(self, x, out, *args, **kwargs):
        # objective: sum of the selected numbers
        out["F"] = np.sum(self.L[x])
        # constraint: squared deviation from picking exactly n_max numbers
        out["G"] = (self.n_max - np.sum(x)) ** 2


# create the actual problem to be solved
np.random.seed(1)
L = np.array([np.random.randint(100) for _ in range(100)])
n_max = 10
problem = SubsetProblem(L, n_max)
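As a quick sanity check, a single solution can be evaluated by hand. The minimal sketch below is not part of the original example: it builds one feasible binary vector (the first n_max entries selected) and calls the _evaluate method defined above directly, whereas in normal use pymoo invokes this method internally.

# minimal sketch: evaluate one feasible solution by hand
x = np.full(problem.n_var, False)
x[:n_max] = True

out = {}
problem._evaluate(x, out)
print(out["F"], out["G"])  # sum of the first 10 numbers and a constraint value of 0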

Solving this problem efficiently requires writing custom operators. We recommend handling feasibility directly in the evolutionary operators, because otherwise most of the processed solutions will be infeasible. The sampling creates random solutions in which the subset constraint is always satisfied. The mutation randomly removes one selected number and adds another one. The crossover first takes the numbers common to both parents and then randomly picks entries where the parents differ until enough numbers are selected.

[2]:
from pymoo.model.crossover import Crossover
from pymoo.model.mutation import Mutation
from pymoo.model.sampling import Sampling


class MySampling(Sampling):

    def _do(self, problem, n_samples, **kwargs):
        X = np.full((n_samples, problem.n_var), False, dtype=bool)

        for k in range(n_samples):
            I = np.random.permutation(problem.n_var)[:problem.n_max]
            X[k, I] = True

        return X


class BinaryCrossover(Crossover):
    def __init__(self):
        super().__init__(2, 1)

    def _do(self, problem, X, **kwargs):
        n_parents, n_matings, n_var = X.shape

        _X = np.full((self.n_offsprings, n_matings, problem.n_var), False)

        for k in range(n_matings):
            p1, p2 = X[0, k], X[1, k]

            # keep the numbers selected by both parents
            both_are_true = np.logical_and(p1, p2)
            _X[0, k, both_are_true] = True

            n_remaining = problem.n_max - np.sum(both_are_true)

            # fill the remaining slots from entries where the parents differ
            I = np.where(np.logical_xor(p1, p2))[0]

            S = I[np.random.permutation(len(I))][:n_remaining]
            _X[0, k, S] = True

        return _X


class MyMutation(Mutation):
    def _do(self, problem, X, **kwargs):
        for i in range(X.shape[0]):
            is_false = np.where(np.logical_not(X[i, :]))[0]
            is_true = np.where(X[i, :])[0]
            # swap one currently unselected number in and one selected number out
            X[i, np.random.choice(is_false)] = True
            X[i, np.random.choice(is_true)] = False

        return X
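Because all three operators preserve the number of selected entries, the population never contains infeasible individuals. The minimal sketch below is an illustration (not part of the original example) that calls the operators' _do methods directly; every produced solution should still select exactly n_max numbers.

# minimal sketch: the custom operators keep exactly n_max selections
P = MySampling()._do(problem, 5)
print(P.sum(axis=1))  # expected: 10 for every sampled solution

parents = np.stack([MySampling()._do(problem, 3), MySampling()._do(problem, 3)])
off = BinaryCrossover()._do(problem, parents)
print(off[0].sum(axis=1))  # offspring also select exactly 10 numbers

mut = MyMutation()._do(problem, P.copy())
print(mut.sum(axis=1))  # mutation swaps one entry, so the count stays at 10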

After having defined the operators, the genetic algorithm can be initialized.

[3]:
from pymoo.algorithms.so_genetic_algorithm import GA
from pymoo.optimize import minimize

algorithm = GA(
    pop_size=100,
    sampling=MySampling(),
    crossover=BinaryCrossover(),
    mutation=MyMutation(),
    eliminate_duplicates=True)

res = minimize(problem,
               algorithm,
               ('n_gen', 60),
               seed=1,
               verbose=True)

print("Function value: %s" % res.F[0])
print("Subset:", np.where(res.X)[0])

======================================================================
n_gen | n_eval  | cv (min/avg)                | favg         | fopt
======================================================================
1     | 100     | 0.0000000000 / 0.0000000000 | 4.439400E+02 | 2.580000E+02
2     | 200     | 0.0000000000 / 0.0000000000 | 3.537300E+02 | 2.050000E+02
3     | 300     | 0.0000000000 / 0.0000000000 | 2.967100E+02 | 1.580000E+02
4     | 400     | 0.0000000000 / 0.0000000000 | 2.460400E+02 | 1.400000E+02
5     | 500     | 0.0000000000 / 0.0000000000 | 2.138900E+02 | 1.260000E+02
6     | 600     | 0.0000000000 / 0.0000000000 | 1.869100E+02 | 1.170000E+02
7     | 700     | 0.0000000000 / 0.0000000000 | 1.616500E+02 | 9.700000E+01
8     | 800     | 0.0000000000 / 0.0000000000 | 1.426500E+02 | 9.700000E+01
9     | 900     | 0.0000000000 / 0.0000000000 | 1.300700E+02 | 6.700000E+01
10    | 1000    | 0.0000000000 / 0.0000000000 | 1.195400E+02 | 6.700000E+01
11    | 1100    | 0.0000000000 / 0.0000000000 | 1.108100E+02 | 5.700000E+01
12    | 1200    | 0.0000000000 / 0.0000000000 | 1.021800E+02 | 5.700000E+01
13    | 1300    | 0.0000000000 / 0.0000000000 | 9.592000E+01 | 5.700000E+01
14    | 1400    | 0.0000000000 / 0.0000000000 | 8.898000E+01 | 5.500000E+01
15    | 1500    | 0.0000000000 / 0.0000000000 | 8.368000E+01 | 4.900000E+01
16    | 1600    | 0.0000000000 / 0.0000000000 | 7.953000E+01 | 4.900000E+01
17    | 1700    | 0.0000000000 / 0.0000000000 | 7.565000E+01 | 4.900000E+01
18    | 1800    | 0.0000000000 / 0.0000000000 | 7.155000E+01 | 4.900000E+01
19    | 1900    | 0.0000000000 / 0.0000000000 | 6.941000E+01 | 4.600000E+01
20    | 2000    | 0.0000000000 / 0.0000000000 | 6.790000E+01 | 4.600000E+01
21    | 2100    | 0.0000000000 / 0.0000000000 | 6.538000E+01 | 4.600000E+01
22    | 2200    | 0.0000000000 / 0.0000000000 | 6.350000E+01 | 4.600000E+01
23    | 2300    | 0.0000000000 / 0.0000000000 | 6.222000E+01 | 4.500000E+01
24    | 2400    | 0.0000000000 / 0.0000000000 | 6.060000E+01 | 4.500000E+01
25    | 2500    | 0.0000000000 / 0.0000000000 | 5.918000E+01 | 4.400000E+01
26    | 2600    | 0.0000000000 / 0.0000000000 | 5.788000E+01 | 4.200000E+01
27    | 2700    | 0.0000000000 / 0.0000000000 | 5.678000E+01 | 4.200000E+01
28    | 2800    | 0.0000000000 / 0.0000000000 | 5.548000E+01 | 4.200000E+01
29    | 2900    | 0.0000000000 / 0.0000000000 | 5.451000E+01 | 4.200000E+01
30    | 3000    | 0.0000000000 / 0.0000000000 | 5.331000E+01 | 4.200000E+01
31    | 3100    | 0.0000000000 / 0.0000000000 | 5.220000E+01 | 4.200000E+01
32    | 3200    | 0.0000000000 / 0.0000000000 | 5.158000E+01 | 4.200000E+01
33    | 3300    | 0.0000000000 / 0.0000000000 | 5.080000E+01 | 4.200000E+01
34    | 3400    | 0.0000000000 / 0.0000000000 | 5.014000E+01 | 3.900000E+01
35    | 3500    | 0.0000000000 / 0.0000000000 | 4.958000E+01 | 3.900000E+01
36    | 3600    | 0.0000000000 / 0.0000000000 | 4.916000E+01 | 3.900000E+01
37    | 3700    | 0.0000000000 / 0.0000000000 | 4.869000E+01 | 3.900000E+01
38    | 3800    | 0.0000000000 / 0.0000000000 | 4.846000E+01 | 3.800000E+01
39    | 3900    | 0.0000000000 / 0.0000000000 | 4.819000E+01 | 3.800000E+01
40    | 4000    | 0.0000000000 / 0.0000000000 | 4.789000E+01 | 3.800000E+01
41    | 4100    | 0.0000000000 / 0.0000000000 | 4.728000E+01 | 3.800000E+01
42    | 4200    | 0.0000000000 / 0.0000000000 | 4.708000E+01 | 3.800000E+01
43    | 4300    | 0.0000000000 / 0.0000000000 | 4.677000E+01 | 3.800000E+01
44    | 4400    | 0.0000000000 / 0.0000000000 | 4.640000E+01 | 3.800000E+01
45    | 4500    | 0.0000000000 / 0.0000000000 | 4.610000E+01 | 3.700000E+01
46    | 4600    | 0.0000000000 / 0.0000000000 | 4.578000E+01 | 3.700000E+01
47    | 4700    | 0.0000000000 / 0.0000000000 | 4.565000E+01 | 3.700000E+01
48    | 4800    | 0.0000000000 / 0.0000000000 | 4.554000E+01 | 3.700000E+01
49    | 4900    | 0.0000000000 / 0.0000000000 | 4.536000E+01 | 3.700000E+01
50    | 5000    | 0.0000000000 / 0.0000000000 | 4.531000E+01 | 3.700000E+01
51    | 5100    | 0.0000000000 / 0.0000000000 | 4.509000E+01 | 3.700000E+01
52    | 5200    | 0.0000000000 / 0.0000000000 | 4.480000E+01 | 3.700000E+01
53    | 5300    | 0.0000000000 / 0.0000000000 | 4.456000E+01 | 3.700000E+01
54    | 5400    | 0.0000000000 / 0.0000000000 | 4.444000E+01 | 3.700000E+01
55    | 5500    | 0.0000000000 / 0.0000000000 | 4.431000E+01 | 3.700000E+01
56    | 5600    | 0.0000000000 / 0.0000000000 | 4.431000E+01 | 3.700000E+01
57    | 5700    | 0.0000000000 / 0.0000000000 | 4.424000E+01 | 3.700000E+01
58    | 5800    | 0.0000000000 / 0.0000000000 | 4.411000E+01 | 3.700000E+01
59    | 5900    | 0.0000000000 / 0.0000000000 | 4.402000E+01 | 3.700000E+01
60    | 6000    | 0.0000000000 / 0.0000000000 | 4.364000E+01 | 3.600000E+01
Function value: 36
Subset: [ 5  9 12 31 36 37 47 52 68 99]

Finally, we can compare the found subset with the known optimum, which can be obtained simply by sorting:

[4]:
opt = np.sort(np.argsort(L)[:n_max])
print("Optimal Subset:", opt)
print("Optimal Function Value: %s" % L[opt].sum())
Optimal Subset: [ 5  9 12 31 36 37 47 52 68 99]
Optimal Function Value: 36
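The comparison can also be made programmatically; the short sketch below (not part of the original notebook) checks whether the subset found by the GA matches the known optimum.

found = np.where(res.X)[0]
print("Same subset:", np.array_equal(found, opt))
print("Same function value:", L[found].sum() == L[opt].sum())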