In this case study, we illustrate how a matrix can be optimized. Of course, there are very efficient algorithms for computing the inverse of a matrix; however, this small example shows that pymoo can also be used to optimize matrices or even tensors.
If the matrix A has a size of n x n, the problem can be defined by optimizing a vector consisting of n**2 variables. During evaluation, the vector x is reshaped to the inverse of the matrix to be found (and also stored as the attribute A_inv to be retrieved later).
import numpy as np

from pymoo.core.problem import ElementwiseProblem


class MatrixInversionProblem(ElementwiseProblem):

    def __init__(self, A, **kwargs):
        self.A = A
        self.n = len(A)
        super().__init__(n_var=self.n**2, n_obj=1, xl=-100.0, xu=+100.0, **kwargs)

    def _evaluate(self, x, out, *args, **kwargs):
        # reshape the flat decision vector into the candidate inverse matrix
        A_inv = x.reshape((self.n, self.n))
        out["A_inv"] = A_inv

        # the objective measures the squared deviation of A @ A_inv from the identity
        I = np.eye(self.n)
        out["F"] = ((I - (self.A @ A_inv)) ** 2).sum()
Now, let us see what solution is found to be optimal:
import numpy as np

from pymoo.algorithms.soo.nonconvex.de import DE
from pymoo.optimize import minimize

np.random.seed(1)
A = np.random.random((2, 2))

problem = MatrixInversionProblem(A)

algorithm = DE()

res = minimize(problem, algorithm, seed=1, verbose=False)

opt = res.opt
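Since the decision variables are simply a flat vector, the best solution found could also be reshaped back into matrix form directly. A minimal sketch, assuming the 2 x 2 example above:

# res.X holds the flat decision vector of the best solution found;
# reshaping it recovers the candidate inverse matrix
X_opt = res.X.reshape((2, 2))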
In this case, the true optimum is actually known. It is:
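The reference value can be computed analytically, for instance with NumPy's built-in inverse (a short sketch; the array shown below corresponds to this computation):

# analytical inverse of A used as the ground truth
np.linalg.inv(A)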
array([[ 2.39952297e+00, -5.71699951e+00], [-9.07758630e-04, 3.30977861e+00]])
Let us see if the black-box optimization algorithm has found something similar:
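The matrix stored during evaluation can be retrieved from the optimum, for instance as follows (a small sketch; indexing with [0] assumes the optimum contains a single solution):

# the A_inv attribute was attached to the solution in _evaluate
opt.get("A_inv")[0]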
array([[ 2.39916052e+00, -5.71656622e+00], [-8.41267527e-04, 3.30978797e+00]])
The total absolute difference to the analytical inverse can be quantified as follows:

np.abs(opt.get("A_inv") - np.linalg.inv(A)).sum()
This small example has illustrated how a matrix can be optimized. In fact, this is implemented by optimizing a vector of variables that is reshaped during evaluation.