"""
Noise wrappers.

* `nmoo.noises.gaussian.GaussianNoise`
* `nmoo.noises.uniform.UniformNoise`

Warning:

    In `pymoo`, constrained problems have a `G` component used to measure the
    validity of the inputs. Sometimes, that component depends on the actual
    value of the problem (i.e. the `F` component). See for instance [the
    implementation of
    CTP1](https://github.com/anyoptimization/pymoo/blob/7b719c330ff22c10980a21a272b3a047419279c8/pymoo/problems/multi/ctp.py#L84).
    Let $\\mathcal{P}$ be such a problem. Evaluating $\\mathcal{P}$ on an input
    $x$ gives the following output:
    $$
    \\mathcal{P} (x) = \\begin{cases}
        F = f (x)
        \\\\\\
        G = g (x, f (x))
    \\end{cases}
    $$
    where $f$ and $g$ are the non-noisy objective and constraint function,
    respectively. Now, let $\\mathcal{E}$ be a noise wrapper adding some random
    noise $\\varepsilon$ on the `F` component of $\\mathcal{P}$. Then we have
    $$
    \\mathcal{E} (\\mathcal{P} (x)) = \\begin{cases}
        F = f (x) + \\varepsilon
        \\\\\\
        G = g (x, f (x))
    \\end{cases}
    $$
    which is **NOT** the same as
    $$
    \\begin{cases}
        F = f (x) + \\varepsilon
        \\\\\\
        G = g (x, f (x) + \\varepsilon)
    \\end{cases}
    $$
    The latter may seem more natural, but breaks the wrapper hierarchy, as
    after getting the `F` value of $\\mathcal{P} (x)$, the noise layer
    $\\mathcal{E}$ would have to query $\\mathcal{P}$ again to calculate `G`
    while somehow substituting $f (x) + \\varepsilon$ for $f (x)$ in
    calculations.
"""
__docformat__ = "google"

from .gaussian import GaussianNoise
from .uniform import UniformNoise
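The wrapper-composition behavior described in the docstring can be made concrete with a small self-contained sketch. This toy code is not part of nmoo or pymoo; the functions `f`, `g`, `evaluate`, and `noisy_evaluate` are hypothetical stand-ins for an objective, a CTP1-style constraint that depends on the objective value, a problem $\mathcal{P}$, and a noise layer $\mathcal{E}$:

```python
import random

# Toy illustration (NOT nmoo/pymoo code) of why a noise wrapper that
# perturbs `F` leaves `G` computed from the *non-noisy* objective value.


def f(x: float) -> float:
    """Non-noisy objective (hypothetical)."""
    return x**2


def g(x: float, f_value: float) -> float:
    """Constraint that depends on the objective value, as in CTP1
    (hypothetical coupling)."""
    return f_value - x


def evaluate(x: float) -> dict:
    """P(x): the problem computes G from its own, non-noisy F."""
    f_value = f(x)
    return {"F": f_value, "G": g(x, f_value)}


def noisy_evaluate(x: float, rng: random.Random) -> dict:
    """E(P(x)): the wrapper adds noise to F *after* P has been fully
    evaluated, so G is left untouched."""
    out = evaluate(x)
    out["F"] += rng.gauss(0.0, 1.0)
    return out


rng = random.Random(0)
clean = evaluate(3.0)
noisy = noisy_evaluate(3.0, rng)
# F is perturbed, but the noise never reaches the constraint computation:
assert noisy["F"] != clean["F"]
assert noisy["G"] == clean["G"]
```

This is exactly the first case in the docstring: to obtain the second case, $\mathcal{E}$ would need to re-enter $\mathcal{P}$ and recompute `G` from the perturbed objective, which a plain wrapper cannot do.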