A Python toolbox for performing gradient-free optimization
MIT License
Published by teytaud 8 months ago
Full Changelog: https://github.com/facebookresearch/nevergrad/compare/1.0.1...1.0.2
Published by teytaud 11 months ago
Full Changelog: https://github.com/facebookresearch/nevergrad/compare/1.0.0...1.0.1
Published by teytaud about 1 year ago
Full Changelog: https://github.com/facebookresearch/nevergrad/compare/0.14.0...1.0.0
Published by teytaud about 1 year ago
Full Changelog: https://github.com/facebookresearch/nevergrad/compare/0.13.0...0.14.0
Published by teytaud about 1 year ago
Full Changelog: https://github.com/facebookresearch/nevergrad/compare/0.12.0...0.13.0
Published by teytaud about 1 year ago
More plots.
Better text in the automatic LaTeX creation.
New chaining methods (Carola*); a rough sketch of chaining follows this list.
New wizards (NgIoh and Wiz).
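As a rough illustration of how chained methods like Carola* are assembled, here is a minimal sketch using the `Chaining` helper from nevergrad's optimizerlib; the CMA-then-Powell recipe and the budget split below are invented for the example and are not the actual Carola* configuration.

```python
import nevergrad as ng
from nevergrad.optimization.optimizerlib import Chaining

# Chain two optimizers: the first consumes half of the budget,
# then the second takes over, informed by the points already evaluated.
ChainedOpt = Chaining([ng.optimizers.CMA, ng.optimizers.Powell], ["half"])

def sphere(x):
    # Toy objective for the example.
    return float(sum(xi ** 2 for xi in x))

optimizer = ChainedOpt(parametrization=5, budget=200)
recommendation = optimizer.minimize(sphere)
print(recommendation.value)
```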
Published by teytaud about 1 year ago
Full Changelog: https://github.com/facebookresearch/nevergrad/compare/0.10.0...0.11.0
Published by teytaud about 1 year ago
Full Changelog: https://github.com/facebookresearch/nevergrad/compare/v0.9.0...v0.10.0
Published by teytaud about 1 year ago
Full Changelog: https://github.com/facebookresearch/nevergrad/compare/0.8.0...v0.9.0
Published by teytaud over 1 year ago
Inspired by discussions at the Dagstuhl seminar and elsewhere.
Full Changelog: https://github.com/facebookresearch/nevergrad/compare/0.7.0...v0.8.0
Published by teytaud over 1 year ago
Adding weighted DE for multiobjective optimization.
Adding various metamodels.
Published by teytaud over 1 year ago
Adding NGOptRW, presumably better than NGOpt for real-world problems; a usage sketch follows this list.
Adding YAPBBOB, with a parameter regulating YABBOB-like problems so that the distribution of the optimum is less rotationally invariant.
Making some dependencies optional, since installing and running everything was becoming too complicated.
There should be no breaking change.
Adding constrained counterparts of YABBOB: yapenbbob (a few constraints), yaonepenbbob (single constraint), yamegapenbbob (many constraints).
Improvements in the photonics benchmarks.
Externalizing CompilerGym.
Making some tests less flaky.
Adding simulated annealing and tabu search.
Adding support for the NLopt library.
Making the code more robust to Gym environments.
Adding smoothness operators for discrete optimization.
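A minimal usage sketch for NGOptRW, mentioned above: the toy objective and search space are invented for the example, and the call pattern is the standard nevergrad `minimize` loop.

```python
import nevergrad as ng

def objective(x):
    # Invented placeholder objective; replace with a real-world loss.
    return float(sum((xi - 0.5) ** 2 for xi in x))

optimizer = ng.optimizers.NGOptRW(parametrization=ng.p.Array(shape=(10,)), budget=500)
recommendation = optimizer.minimize(objective)
print(recommendation.value)
```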
Published by teytaud almost 2 years ago
See CHANGELOG for details.
This version provides a few fixes and the new multi-objective API of optimizers: you can now provide a list/array of floats directly to `tell`. This allows for more efficient multi-objective optimization with some optimizers (DE, NGOpt). Future work will continue to improve multi-objective capabilities and constraint management.
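A minimal sketch of the new multi-objective `tell` described above; the two objectives below are invented, and the `pareto_front()` accessor is assumed to be available in this version.

```python
import numpy as np
import nevergrad as ng

optimizer = ng.optimizers.DE(parametrization=2, budget=100)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()
    x = candidate.value
    # Two invented objectives: distance to two different targets.
    losses = [float(np.sum(x ** 2)), float(np.sum((x - 1.0) ** 2))]
    optimizer.tell(candidate, losses)  # a list of floats marks the problem as multi-objective

# Inspect the non-dominated candidates found so far (assumed accessor).
for candidate in optimizer.pareto_front():
    print(candidate.value)
```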
See CHANGELOG for details.
Published by jrapin about 4 years ago
This version should be robust. Following versions may become less stable as we add more native multiobjective optimization as an experimental feature. We are also in the process of simplifying the naming pattern for the "NGO/Shiwa" type optimizers, which may cause some changes in the future.
See CHANGELOG for details.
Published by jrapin over 4 years ago
See CHANGELOG for details.
Published by jrapin over 4 years ago
This is the final step for creating the new instrumentation/parametrization framework and removing the old one.
Learn more on the Facebook user group
Important changes: the `archive` attribute no longer stores all evaluated points, for memory reasons. See CHANGELOG for more details.
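To illustrate the parametrization framework this release finalizes, here is a minimal sketch; the mixed search space and the objective are invented, and it assumes the `ng.p` helpers (`Instrumentation`, `Scalar`, `Log`, `Choice`) in their current form.

```python
import nevergrad as ng

# Invented mixed search space built with the parametrization API.
parametrization = ng.p.Instrumentation(
    ng.p.Scalar(lower=-5.0, upper=5.0),        # bounded continuous variable
    lr=ng.p.Log(lower=1e-4, upper=1.0),        # log-distributed positive variable
    activation=ng.p.Choice(["relu", "tanh"]),  # categorical variable
)

def loss(x, lr, activation):
    # Invented objective combining the three variables.
    return x ** 2 + lr + (0.0 if activation == "relu" else 0.1)

optimizer = ng.optimizers.OnePlusOne(parametrization=parametrization, budget=100)
recommendation = optimizer.minimize(loss)
print(recommendation.args, recommendation.kwargs)
```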