torch_numopt

This package implements numerical optimization methods for neural networks that are usually left out of deep learning frameworks because of their computational cost.

Torch Numerical Optimization

Implementation of numerical optimization methods for Neural Networks.

Due to computational constraints, methods like Newton-Raphson or Levenberg-Marquardt are only practical for small neural networks: for a network with $p$ parameters they need $O(p^2)$ memory to store the (approximate) Hessian and $O(p^3)$ time to solve the resulting linear system.
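As a rough back-of-the-envelope check (the model below is an arbitrary example, not part of this package), even a modest fully connected network pushes a dense Hessian into gigabyte territory:

```python
import torch.nn as nn

# Hypothetical example: a small MLP for MNIST-sized inputs.
model = nn.Sequential(nn.Linear(784, 32), nn.ReLU(), nn.Linear(32, 10))

p = sum(param.numel() for param in model.parameters())
print(p)                     # 25450 parameters
print(p ** 2 * 4 / 2 ** 30)  # ~2.4 GiB to store a dense float32 Hessian
```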

Planned optimizers

  • Newton-Raphson (a damped Newton step is sketched after this list)
  • Gauss-Newton
  • Levenberg-Marquardt (LM)
  • Approximate Greatest Descent (AGD)
  • Stochastic Gradient Descent with Line Search
  • Conjugate Gradient
  • Quasi-Newton (L-BFGS is already available in PyTorch)
  • Hessian-free / truncated Newton
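
As a minimal illustration of the pattern these methods share (not this package's actual API; PyTorch 2.x with `torch.func` is assumed, and the model, data, and damping constant are made up for the example), a damped full-batch Newton step on a tiny network can be written as:

```python
import torch
import torch.nn as nn

# Tiny regression model (33 parameters), small enough for a dense Hessian.
model = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
X, y = torch.randn(64, 2), torch.randn(64, 1)

names = [n for n, _ in model.named_parameters()]
shapes = [q.shape for q in model.parameters()]
sizes = [q.numel() for q in model.parameters()]

def loss_fn(flat_params):
    # Rebuild a parameter dict from the flat vector so the loss is a
    # plain function of a single tensor that torch.func can differentiate.
    chunks = flat_params.split(sizes)
    params = {n: c.reshape(s) for n, c, s in zip(names, chunks, shapes)}
    pred = torch.func.functional_call(model, params, (X,))
    return nn.functional.mse_loss(pred, y)

flat = torch.cat([q.detach().reshape(-1) for q in model.parameters()])
grad = torch.func.grad(loss_fn)(flat)      # gradient, shape (p,)
hess = torch.func.hessian(loss_fn)(flat)   # dense Hessian, shape (p, p)

# Damped Newton step: solve (H + lambda*I) d = g. The Levenberg-style
# damping keeps the system well-posed when H is indefinite or singular.
lam = 1e-3
step = torch.linalg.solve(hess + lam * torch.eye(flat.numel()), grad)
flat_new = flat - step

# The updated vector could be written back with
# torch.nn.utils.vector_to_parameters(flat_new, model.parameters()).
```

Gauss-Newton and Levenberg-Marquardt would swap the exact Hessian for $J^\top J$ assembled from per-sample Jacobians, but the flatten / solve / write-back structure stays the same.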