xtsci-optimize

Optimization routines with xtensor

** About
    #+begin_quote
    This has no official relationship to either xtensor or scipy!!!
    #+end_quote

Part of the larger xtsci project, which iteratively implements parts of scipy using xtensor and modern C++, based on my needs. At some point (or with funding) this might cover all of scipy, but it's really more of a rapid-prototyping effort.

These are meant for unconstrained non-linear problems for now. The main classes implemented so far are (a small conjugate-gradient sketch follows the list):

  • Non-linear conjugate gradient methods
    • Fletcher-Reeves
    • Polak-Ribiere
    • Fletcher-Reeves-Polak-Ribiere
    • Hestenes-Stiefel
    • Liu-Storey
    • Dai-Yuan
    • Conjugate descent
    • Hager-Zhang
    • Hybridized methods of the above with unary operations
  • Newton's method
  • Quasi-Newton methods
    • SR1
    • BFGS
    • L-BFGS
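
As a concrete (unofficial) illustration of the first family above, here is a minimal sketch of one Polak-Ribiere update for nonlinear conjugate gradient, written directly against xtensor. The helper name beta_polak_ribiere and the sample values are assumptions for illustration only; this is not the library's API, just the textbook formula.

#+begin_src cpp
// Sketch only: Polak-Ribiere coefficient and direction update for
// nonlinear CG, following the usual textbook formula
//   beta_k = g_{k+1}^T (g_{k+1} - g_k) / (g_k^T g_k)
// The helper name and values are illustrative, not part of xtsci-optimize.
#include <iostream>
#include <xtensor/xarray.hpp>
#include <xtensor/xio.hpp>

// Hypothetical helper: compute the Polak-Ribiere coefficient from two
// successive gradients.
double beta_polak_ribiere(const xt::xarray<double>& grad_old,
                          const xt::xarray<double>& grad_new) {
  double numerator = xt::sum(grad_new * (grad_new - grad_old))();
  double denominator = xt::sum(grad_old * grad_old)();
  return numerator / denominator;
}

int main() {
  // Gradients at two successive iterates and the previous search
  // direction (values are made up for the example).
  xt::xarray<double> g_old = {1.0, -2.0};
  xt::xarray<double> g_new = {0.5, -1.0};
  xt::xarray<double> d_old = {-1.0, 2.0};

  double beta = beta_polak_ribiere(g_old, g_new);
  // New conjugate direction: d_{k+1} = -g_{k+1} + beta_k * d_k
  xt::xarray<double> d_new = -g_new + beta * d_old;
  std::cout << "beta = " << beta << "\nd_new = " << d_new << std::endl;
  return 0;
}
#+end_src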

** Usage

Until bindings are ready, tiny_cli.cpp can be edited and run, with the output piped to a file, to visualize the trial functions.

#+begin_src bash
meson setup bbdir
meson compile -C bbdir
./bbdir/CppCore/tiny_cli > output.txt
python scripts/plot_cpp_rosen.py --step-size-method "Wolfe" --line-search-method "Zoom" --minimize-method "LBFGS m(30)"
#+end_src

** Components

The heart of the library is the xts namespace, with functions further demarcated according to the relevant scipy modules, e.g. xts::optimize.

  • Note that this uses designated initializers, a C++20 feature; a short illustration follows.
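
Since the note above may be unfamiliar, here is a minimal, self-contained illustration of C++20 designated initializers using a hypothetical options struct; the name OptimizeControl and its fields are assumptions for the example, not types from this library.

#+begin_src cpp
// Sketch only: what C++20 designated initializers look like for a
// hypothetical solver-options struct (not an xtsci-optimize type).
#include <cstddef>
#include <iostream>

struct OptimizeControl {
  std::size_t max_iterations = 1000;
  double tol = 1e-6;
  bool verbose = false;
};

int main() {
  // Name only the fields you want to override, in declaration order;
  // the remaining members keep their defaults (here, tol stays 1e-6).
  OptimizeControl control{.max_iterations = 500, .verbose = true};
  std::cout << control.max_iterations << " " << control.tol << " "
            << std::boolalpha << control.verbose << std::endl;
  return 0;
}
#+end_src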

** Provenance

These are constructed mostly following the notation of the Nocedal and Wright book (Numerical Optimization).
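
As one example of that notation, the standard inverse-Hessian BFGS update is written with the step s_k, the gradient difference y_k, and the curvature scalar rho_k:

\begin{align*}
s_k &= x_{k+1} - x_k, \qquad y_k = \nabla f_{k+1} - \nabla f_k, \qquad \rho_k = \frac{1}{y_k^{\top} s_k},\\
H_{k+1} &= \left(I - \rho_k s_k y_k^{\top}\right) H_k \left(I - \rho_k y_k s_k^{\top}\right) + \rho_k s_k s_k^{\top}.
\end{align*}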

** License

MIT.