pymoo
Latest Version: pymoo==0.3.2

Karush Kuhn Tucker Proximity Measure (KKTPM)

In 2016, Deb and Abouhawwash proposed the Karush Kuhn Tucker Proximity Measure (KKTPM) [26], a metric that quantifies how close a point is to being "an optimum". The smaller the metric, the closer the point. It does not require the Pareto front to be known, but the gradient information needs to be either available or approximated. The metric is applicable to both single-objective and multi-objective optimization problems.

For a single-objective problem, the metric indicates how close a point is to being a "local optimum", while for multi-objective problems it indicates how close a point is to being a "local Pareto point". Computing the exact KKTPM for each point requires solving an entire optimization problem, which is extremely time-consuming. To avoid this, the authors of the original work proposed several approximations of the exact KKTPM, namely Direct KKTPM, Projected KKTPM, Adjusted KKTPM and Approximate KKTPM. Approximate KKTPM is the average of the former three and is what we refer to simply as "KKTPM". Moreover, the authors showed that Approximate KKTPM is reliable and can be used in place of the exact one [27].
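As a brief reminder of what the measure is built on (the exact KKTPM formulation is developed in [26] and not reproduced here), the classical KKT conditions for a single-objective problem of the form minimize f(x) subject to g_j(x) <= 0 require a point x* together with multipliers u_j such that

\nabla f(x^*) + \sum_{j=1}^{J} u_j \, \nabla g_j(x^*) = 0, \qquad u_j \, g_j(x^*) = 0, \qquad u_j \ge 0, \qquad g_j(x^*) \le 0, \quad j = 1, \ldots, J.

Roughly speaking, KKTPM measures the degree to which a given point violates these conditions (using a scalarized formulation for the multi-objective case), so a value of zero corresponds to an exact KKT point.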


Let us now see how to use pymoo to calculate the KKTPM for a set of points. First, we load a test problem:

[1]:
from pymoo.factory import get_problem

# ZDT1 test problem with 10 decision variables
problem = get_problem("zdt1", n_var=10)

For instance, the code below calculates the KKTPM metric for randomly sampled points of the given problem:

[2]:
from pymoo.performance_indicator.kktpm import KKTPM
from pymoo.operators.sampling.random_sampling import FloatRandomSampling

# sample 100 random points in the design space and evaluate their KKTPM
X = FloatRandomSampling().do(problem, 100).get("X")
kktpm = KKTPM().calc(X, problem)
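
The result can be handled like any NumPy array. As a minimal sketch, a quick summary of the sampled points could be printed as follows (the ravel call is only there in case the indicator returns the values as a column vector):

import numpy as np

# flatten in case the values come back as a column vector
vals = np.asarray(kktpm).ravel()

print("points evaluated:", len(vals))
print("min KKTPM:   ", vals.min())
print("median KKTPM:", np.median(vals))
print("max KKTPM:   ", vals.max())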

Moreover, a whole run of a genetic algorithm can be analyzed by storing the history of each generation and then calculating the KKTPM metric for each of the points:

[3]:
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.optimize import minimize

algorithm = NSGA2(pop_size=100, eliminate_duplicates=True)

# save_history=True keeps a snapshot of each generation for later analysis
res = minimize(problem,
               algorithm,
               ('n_gen', 100),
               seed=1,
               save_history=True,
               verbose=False)
[4]:
import numpy as np

_min, _median, _max = [], [], []

# compute the KKTPM for every generation stored in the history
for a in res.history:
    X = a.pop.get("X")
    kktpm = KKTPM().calc(X, problem)

    _min.append(kktpm.min())
    _median.append(np.median(kktpm))
    _max.append(kktpm.max())
[5]:
import matplotlib.pyplot as plt

# plot the min, median and max KKTPM per generation on a log scale
I = np.arange(len(res.history))
plt.plot(I, _min, label="Min")
plt.plot(I, _median, label="Median")
plt.plot(I, _max, label="Max")
plt.yscale("log")
plt.legend()
plt.show()
Figure: minimum, median and maximum KKTPM over the generations (log scale).