# nmoo (noisy-moo)

A wrapper-based framework for pymoo problem modification and algorithm benchmarking. Initially developed to test KNN-averaging[^1].
## Installation

Simply run

```shell
pip install nmoo
```
## Getting started

### In a notebook

See `example.ipynb` for a quick example.
### For larger benchmarks

For larger benchmarks, you may want to use nmoo's CLI. First, create a module, say `example.py`, containing your benchmark factory (a function that returns your benchmark), say `make_benchmark()`. Then, run it using

```shell
python -m nmoo run --verbose 10 example:make_benchmark
```

Refer to

```shell
python -m nmoo --help
```

for more information.
## Main submodules and classes

* `nmoo.benchmark.Benchmark`: A `Benchmark` object represents... a benchmark 🤔. At construction, you can specify the problems and algorithms to run, how many times to run them, which performance indicators to compute, etc. Refer to `nmoo.benchmark.Benchmark.__init__` for more details.
* `nmoo.wrapped_problem.WrappedProblem`: The main idea of nmoo is to wrap problems in layers. Each layer should redefine `pymoo.Problem._evaluate` to intercept calls to the wrapped problem. It is then possible to apply/remove noise, keep a call history, log, etc.
* `nmoo.denoisers`: Subclasses of `nmoo.wrapped_problem.WrappedProblem` that implement denoising algorithms. In a simple scenario, a synthetic problem would be wrapped in a noise layer, and further wrapped in a denoising layer to test the performance of the latter.
* `nmoo.noises`: Subclasses of `nmoo.wrapped_problem.WrappedProblem` that apply noise.
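The layering idea can be sketched in plain Python. This is a conceptual illustration only: `Problem`, `NoiseLayer`, and `ResampleAverageLayer` below are made-up stand-ins, not the actual nmoo or pymoo classes.

```python
import random


class Problem:
    """Stand-in for a bare problem: evaluating x yields an objective value."""

    def evaluate(self, x):
        return x * x


class NoiseLayer:
    """Wraps a problem and adds Gaussian noise to its output."""

    def __init__(self, inner, sigma):
        self._inner = inner
        self._sigma = sigma
        self._rng = random.Random(0)

    def evaluate(self, x):
        # Intercept the call, delegate to the wrapped problem, corrupt result.
        return self._inner.evaluate(x) + self._rng.gauss(0.0, self._sigma)


class ResampleAverageLayer:
    """A naive denoising layer: averages several evaluations of the inner
    (noisy) problem."""

    def __init__(self, inner, n_samples):
        self._inner = inner
        self._n_samples = n_samples

    def evaluate(self, x):
        samples = [self._inner.evaluate(x) for _ in range(self._n_samples)]
        return sum(samples) / self._n_samples


problem = Problem()
noisy = NoiseLayer(problem, sigma=0.5)
denoised = ResampleAverageLayer(noisy, n_samples=200)

print(problem.evaluate(2.0))  # 4.0
print(abs(denoised.evaluate(2.0) - 4.0) < 0.2)  # True (noise mostly averaged out)
```

In actual nmoo code, the same stacking is done with `WrappedProblem` subclasses (e.g. a noise wrapper around a synthetic problem, and a denoiser around that).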
## Contributing

### Dependencies

* `python3.8` or newer;
* `requirements.txt` for runtime dependencies;
* `requirements.dev.txt` for development dependencies (optional);
* `make` (optional).

Simply run

```shell
virtualenv venv -p python3.8
. ./venv/bin/activate
pip install -r requirements.txt
pip install -r requirements.dev.txt
```
### Documentation

Simply run

```shell
make docs
```

This will generate the HTML documentation of the project; the index file should be at `docs/index.html`. To open it directly in your browser, run

```shell
make docs-browser
```
### Code quality

Don't forget to run

```shell
make
```

to format the code following black, typecheck it using mypy, and check it against coding standards using pylint.
## Changelog

### v5.0.0

#### New features

* Seed rotations: `Benchmark.__init__` now has a `seeds` argument which can receive a list of seeds. The first seed is used for all random generators involved in the first run of every algorithm-problem pair, the second for all second runs, etc.
* When constructing a `WrappedProblem`, the wrapped problem is now deep-copied by default:

  ```py
  zdt1 = ZDT1()
  noisy_problem = nmoo.GaussianNoise(zdt1, ...)
  # noisy_problem._problem is now a deep copy of zdt1
  zdt1 == noisy_problem._problem  # False
  ```
* Essentially all the classes and methods of nmoo are exposed at the root level, e.g. `nmoo.Benchmark` instead of the old `nmoo.benchmark.Benchmark` (the latter is of course still possible).
* In simple use cases, Gaussian noises can be specified more easily:

  ```py
  # Assume that the F component is numerical and 2-dimensional.
  # Before (still possible)
  mean = np.array([0., 0.])
  cov = .1 * np.eye(2)
  noisy_problem = nmoo.GaussianNoise(problem, parameters={"F": (mean, cov)})
  # Now
  noisy_problem = nmoo.GaussianNoise(problem, mean, cov)
  # Since cov is constant and diagonal, the following is also possible
  noisy_problem = nmoo.GaussianNoise(problem, mean, .1)
  ```
* Added a uniform noise wrapper, see `nmoo.UniformNoise`.
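The seed-rotation behaviour described above can be sketched with Python's stdlib `random` (a conceptual illustration, not nmoo internals; the seed list and names are made up):

```python
import random

# Hypothetical seed list, one seed per run index, as would be passed via
# Benchmark's seeds argument.
seeds = [41, 42, 43]


def simulate_run(problem_name, algorithm_name, run_index):
    """Conceptual sketch: every random generator involved in run number
    run_index of every problem-algorithm pair is seeded with
    seeds[run_index]."""
    rng = random.Random(seeds[run_index])
    return rng.random()  # stands in for the stochastic work of a run


# Run 0 of two different pairs uses identically-seeded generators...
print(simulate_run("zdt1", "nsga2", 0) == simulate_run("zdt2", "nsga2", 0))  # True
# ...while runs 0 and 1 of the same pair use different seeds.
print(simulate_run("zdt1", "nsga2", 0) == simulate_run("zdt1", "nsga2", 1))  # False
```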
#### Breaking changes

* Seeds can no longer be specified in algorithm description dicts (see `Benchmark.__init__`). Instead, use the `seeds` argument when constructing benchmarks (see above).
* Class `nmoo.benchmark.Pair` has been replaced by `nmoo.benchmark.PAPair`, representing a problem-algorithm pair, and `nmoo.benchmark.PARTriple`, representing a problem-algorithm-(run number) triple. Method `nmoo.Benchmark._all_pairs` has been replaced by `nmoo.Benchmark.all_pa_pairs` and `nmoo.Benchmark.all_par_triples`.
* Performance indicator files `<problem_name>.<algorithm_name>.<n_run>.pi.csv` are now split into `<problem_name>.<algorithm_name>.<n_run>.pi-<pi_name>.csv`, one for each performance indicator.
* `GaussianNoise.__init__`: The old parameter dicts must now be passed as a keyword argument:

  ```py
  # Assume that the F component is numerical and 2-dimensional.
  # Old way, NO LONGER WORKS
  mean = np.array([0., 0.])
  cov = .1 * np.eye(2)
  noisy_problem = nmoo.GaussianNoise(problem, {"F": (mean, cov)})
  # New way
  noisy_problem = nmoo.GaussianNoise(problem, parameters={"F": (mean, cov)})
  ```
* The awkwardly named `nmoo.evaluators.evaluation_penalty_evaluator.EvaluationPenaltyEvaluator` has been renamed to `nmoo.evaluators.penalized_evaluator.PenalizedEvaluator`.
* `PenalizedEvaluator.__init__`: In the past, the only supported penalty type was `"times"` (meaning that the perceived number of evaluations was the actual number times a certain coefficient). Since this will not change in the foreseeable future, the `penalty_type` argument has been removed.

  ```py
  # Old way, NO LONGER WORKS
  evaluator = PenalizedEvaluator("times", 5)
  # New way
  evaluator = PenalizedEvaluator(5)
  ```
  Additionally, the name of the argument is now `multiplier` (instead of the old `coefficient`):

  ```py
  # Old keyword style, NO LONGER WORKS
  evaluator = PenalizedEvaluator(penalty_type="times", coefficient=5)
  # New keyword style
  evaluator = PenalizedEvaluator(multiplier=5)
  ```
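The multiplier semantics ("perceived evaluations = actual evaluations × multiplier") can be illustrated with a toy counter. This is a sketch of the idea only, not the actual `PenalizedEvaluator` implementation:

```python
class ToyPenalizedCounter:
    """Toy illustration: the perceived number of evaluations is the actual
    number of evaluations times a fixed multiplier."""

    def __init__(self, multiplier):
        self.multiplier = multiplier
        self.n_actual = 0

    def evaluate(self, x):
        self.n_actual += 1
        return x  # stand-in for a real (costly) evaluation

    @property
    def n_perceived(self):
        return self.multiplier * self.n_actual


counter = ToyPenalizedCounter(multiplier=5)
for x in range(3):
    counter.evaluate(x)
print(counter.n_actual)     # 3
print(counter.n_perceived)  # 15
```

This is useful when benchmarking denoisers that internally resample: each user-visible evaluation really costs several calls to the underlying problem.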
### v4.0.0

#### Breaking changes

* In the wrapped problem histories, the `x` field has been renamed to `X`. This implies that the `x` field in history dumps is now called `X` instead.
* In algorithm descriptions, the `save_history` option is now ignored.
* When constructing a benchmark, the default performance indicator list now consists of only `igd` (instead of `gd`, `gd+`, `igd`, and `igd+` previously).
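If you still have pre-v4 history dumps around, the `x` → `X` rename can be applied with a small migration snippet. This is a sketch that assumes the history has been loaded into a plain dict of arrays; the on-disk dump format itself is an assumption here, not something this snippet handles:

```python
def migrate_history(history):
    """Rename the pre-v4 'x' field to 'X', leaving other fields untouched.
    Assumes `history` is a plain dict mapping field names to arrays."""
    if "x" in history and "X" not in history:
        history = dict(history)  # don't mutate the caller's dict
        history["X"] = history.pop("x")
    return history


old = {"x": [[0.0, 1.0]], "F": [[0.5]]}
new = migrate_history(old)
print(sorted(new))  # ['F', 'X']
```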
### v3.0.0

#### Breaking changes

* In the algorithm specification dictionaries of `Benchmark.__init__`, the key `minimize_kwargs` is no longer considered. Instead, various other keys have been added. Refer to the documentation.
### v2.0.0

#### Breaking changes

* `GaussianNoise.__init__` now takes (a dict of) multivariate Gaussian noise parameters as arguments. Previously, it took (a dict of) tuples indicating the mean and standard deviation of a 1-dimensional Gaussian noise that would then be applied to all components independently. This old behaviour can be replicated by specifying a diagonal covariance matrix, e.g. the following are equivalent:

  ```py
  # Assume that the F component is numerical and 2-dimensional.
  # Old way, NO LONGER WORKS
  noisy_problem = nmoo.GaussianNoise(problem, {"F": (0., 1.)})
  # New way
  mean = np.array([0., 0.])
  cov = np.array([
      [1., 0.],
      [0., 1.],
  ])  # Or more concisely, np.eye(2)
  noisy_problem = nmoo.GaussianNoise(problem, {"F": (mean, cov)})
  ```
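The claimed equivalence (independent per-component noise vs. one multivariate Gaussian with a diagonal covariance matrix) can be checked numerically with plain NumPy, independently of nmoo:

```python
import numpy as np

rng = np.random.default_rng(0)
mean, std = 0.0, 1.0
n = 100_000

# Old behaviour: independent 1-D noise on each of the 2 components.
per_component = rng.normal(mean, std, size=(n, 2))

# New behaviour: one 2-D Gaussian with a diagonal covariance matrix.
cov = std**2 * np.eye(2)
multivariate = rng.multivariate_normal([mean, mean], cov, size=n)

# Same distribution: matching means and (diagonal) sample covariances.
print(np.allclose(per_component.mean(axis=0), multivariate.mean(axis=0), atol=0.05))
print(np.allclose(np.cov(per_component.T), np.cov(multivariate.T), atol=0.05))
```

Both checks print `True` up to sampling error: a diagonal covariance means zero correlation between components, which for Gaussians is the same as independence.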
[^1]: Klikovits, S., Arcaini, P. (2021). KNN-Averaging for Noisy Multi-objective Optimisation. In: Paiva, A.C.R., Cavalli, A.R., Ventura Martins, P., Pérez-Castillo, R. (eds) Quality of Information and Communications Technology. QUATIC 2021. Communications in Computer and Information Science, vol 1439. Springer, Cham. https://doi.org/10.1007/978-3-030-85347-1_36