CasADi is a symbolic framework for numeric optimization, implementing automatic differentiation in forward and reverse modes on sparse matrix-valued computational graphs. It supports self-contained C-code generation and interfaces to state-of-the-art codes such as SUNDIALS and IPOPT. It can be used from C++, Python or Matlab/Octave.
LGPL-3.0 License
Published by jgillis 3 months ago
Grab a binary from the table:
For Matlab/Octave, unzip in your home directory and adapt the path:
Check your installation:
Get started with the example pack. Onboarding pointers have been gathered by the community at our wiki.
LD_PRELOAD=<knitro_lin_path>/libiomp5.so

- `hypot(x,y) = sqrt(x*x+y*y)`
- `log1p(x) = log(1+x)`
- `expm1(x) = exp(x)-1`
- `remainder`, with the semantics of the C operation
- `fmin`/`fmax` is now symmetric: `jacobian(fmin(x,y),vertcat(x,y))` used to be `[1 0]` for `x==y`; it now yields `[0.5 0.5]`
- `mmin`/`mmax`
- `logsumexp`, which behaves like `log(sum(exp(x)))` but is numerically more accurate (and has no overflow issues)
- `vertcat`/`vcat`, `horzcat`/`hcat`, etc. now return a `DM` type instead of a `Sparsity` type #2549
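The practical benefit of `logsumexp` is easy to see with the standard max-shift identity `log(sum(exp(x))) = m + log(sum(exp(x - m)))` for `m = max(x)`. A pure-Python sketch (an illustration of the technique, not CasADi's implementation):

```python
import math

def logsumexp(xs):
    # Stable log(sum(exp(x))): shift by the maximum so the largest
    # exponent is exp(0) = 1, which cannot overflow.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# The naive math.log(sum(math.exp(x) for x in xs)) overflows for x ~ 1000;
# the shifted form returns 1000 + log(2) as expected.
print(logsumexp([1000.0, 1000.0]))
```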
`mod` has been renamed to `rem`, because its numerical behaviour is like the built-in Matlab `rem`. The built-in Matlab `mod` has no CasADi counterpart. CasADi-Python `mod` has been removed, because its numerical behaviour is not like `numpy.mod` (#2767). `numpy.mod` has no counterpart in CasADi; only `fmod` is equivalent.

Before, CasADi internals would avoid introducing redundant nodes during operations on a given expression, but the user was responsible for avoiding duplication when constructing that expression. There is now a function `cse()` that you may apply to expressions:
x = MX.sym('x')
# User responsibility
sx = sin(x)
y = sqrt(sx)+sx # MX(@1=sin(x), (sqrt(@1)+@1))
# cse
y = sqrt(sin(x))+sin(x) # MX((sqrt(sin(x))+sin(x)))
y = cse(y) # MX(@1=sin(x), (sqrt(@1)+@1))
There is a boolean option `cse` that may be used when constructing a `Function`:
x = MX.sym('x')
f = Function('f',[x],[sqrt(sin(x))+sin(x)],{"cse":True})
f.disp(True)
f:(i0)->(o0) MXFunction
Algorithm:
@0 = input[0][0]
@0 = sin(@0)
@1 = sqrt(@0)
@1 = (@1+@0)
output[0][0] = @1
The technique scales favorably for large graphs.
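Conceptually, CSE amounts to hashing subtrees and sharing the first occurrence of each. A toy pure-Python sketch over nested tuples (illustration only, not CasADi's data structures):

```python
def cse_tuples(expr, cache=None):
    # expr is a nested tuple such as ('add', ('sqrt', ('sin', 'x')), ('sin', 'x')).
    # Identical subtrees are collapsed to a single shared object via a cache.
    if cache is None:
        cache = {}
    if not isinstance(expr, tuple):
        return expr  # leaves (symbols, constants) pass through
    node = tuple(cse_tuples(a, cache) for a in expr)
    return cache.setdefault(node, node)

y = cse_tuples(('add', ('sqrt', ('sin', 'x')), ('sin', 'x')))
# Both ('sin', 'x') occurrences are now the very same object:
print(y[1][1] is y[2])  # True
```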
MX now has atomic support for solving upper- and lower-triangular linear systems without allocating any linear solver instance. The operation handles the case of a unit diagonal separately for efficiency and supports C code generation. To use the feature, call `casadi.solve(A, b)` (Python or MATLAB/Octave):
# Python
import casadi
A = casadi.MX.sym('A', casadi.Sparsity.upper(2))
b = casadi.MX.sym('b', 2)
x = casadi.solve(A, b)
// C++
casadi::MX A = casadi::MX::sym("A", casadi::Sparsity::upper(2));
casadi::MX b = casadi::MX::sym("b", 2);
casadi::MX x = solve(A, b); // found via argument-dependent lookup; alternatively, the static casadi::MX::solve(A, b)
Cf. #2688.
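For reference, solving with an upper-triangular `A` is plain back substitution, which is what makes an atomic operation (no factorization, no linear solver object) possible. A small pure-Python sketch of the arithmetic, not CasADi code:

```python
def backsub_upper(A, b):
    # Solve A x = b for upper-triangular A by back substitution,
    # working from the last row upward.
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 1.0],
     [0.0, 4.0]]
b = [5.0, 8.0]
print(backsub_upper(A, b))  # [1.5, 2.0]
```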
- `SX`/`MX` `Function` construction with free variables (i.e. symbols used in the output expressions that are not declared as inputs) now fails immediately unless the `allow_free` option is used.
- `SX`/`MX` `Function` construction now fails if there are duplicates in the input names or output names, unless the `allow_duplicate_io_names` option is used. #2604
- `custom_jacobian` semantics changed: the Function must now return individual blocks (the Jacobian of one output w.r.t. one input).
- The Jacobian sparsity of a Function (e.g. `external` or `Callback`) can be declared by overriding:
  `bool has_jac_sparsity(casadi_int oind, casadi_int iind) const override;`
  `Sparsity get_jac_sparsity(casadi_int oind, casadi_int iind, bool symmetric) const override;`
- `Function.find_function` can be used to retrieve Functions in a hierarchy.
- Unused values are now dumped (`dump_in` option) as `nan` instead of the earlier `0`. E.g. Ipopt's `nlp_grad_f` has two outputs, `f` and `grad_f_x`. The `f` output is not used internally, so it will be logged as `nan`.
- `Function` objects with an `external` call can now be code-generated.
- `mmin`/`mmax` now support code generation.
- `nlpsol`/`Opti.solver` can now take an option `'detect_simple_bounds'` (default `False`) that will promote general constraints to simple bounds (lbx/ubx).
- The CPLEX interface now loads `libcplex<CPLEX_VERSION>`, where `CPLEX_VERSION` is read from environment variables. The same strategy is used for Gurobi.

The `Integrator`
class, which solves initial-value problems in ODEs and DAEs, has been thoroughly refactored. Changes include:
- An output time grid can be passed to the `integrator` constructor. Unlike before, this support should now work in combination with forward/adjoint sensitivity analysis (to any order) and sparsity pattern calculations. Cf. #2823.
- Support for piecewise-constant controls (`u`). The interface will keep track of changes to `u` and avoid integrating past such changes: for the Sundials (CVODES/IDAS) interfaces by setting a "stop time", for fixed-step integrators by aligning the integration points with the grid points. Cf. #3025. Development versions of CasADi included support for this in a dedicated class called `Simulator`, but this class has now been removed (breaking) and the functionality has been ported to the `Integrator` class. If you used `cs.integrator('sim_function', 'cvodes', dae, tgrid, opts)`, you may replace it by `cs.integrator('sim_function', 'cvodes', dae, 0, tgrid[1:], opts)`.
- The `Integrator` class now uses the `Function` class for derivative calculations; this makes the class more efficient for use with non-symbolic DAEs, including FMUs or other external models.
- The options `t0`, `tf`, `output_t0` and `grid` have been deprecated and will result in a warning if used. Instead, the user can provide equivalent information via the `integrator` constructor, cf. the first point.
- Backward states are no longer part of the DAE formulation. They are now derived from a user-specified number of sensitivity equations (`nadj`). This is a slight restriction in the possible problem formulations, but on the other hand allows for a much better exploitation of adjoint sensitivity structure. The backward states remain in the integrator class function inputs and outputs, but have now been renamed to align with their meaning: `adj_xf` means the adjoint seeds corresponding to `xf` (before they were called `rx0`), `adj_p` are the adjoint sensitivities corresponding to `p` (before called `rqf`), and so on.
- A new option `scale_abstol` has been added to the Sundials integrators. If this is set to true, nominal values for the differential state and algebraic variables will be passed on to the solver. Cf. #3046.

See "multipoint_simulation" in the example pack for a good starting point.
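The grid-alignment idea for piecewise-constant controls can be pictured with a toy fixed-step Euler loop that never steps across a change in `u` (illustration only; `simulate` is a hypothetical helper, not CasADi API):

```python
def simulate(x0, tgrid, u_values, f, steps_per_interval=4):
    # Forward-Euler integration of xdot = f(x, u) with piecewise-constant u.
    # Steps are aligned with tgrid, so no step ever straddles a change in u.
    x = x0
    xs = [x]
    for k in range(len(tgrid) - 1):
        h = (tgrid[k + 1] - tgrid[k]) / steps_per_interval
        for _ in range(steps_per_interval):
            x = x + h * f(x, u_values[k])
        xs.append(x)
    return xs

# xdot = u: the state ramps up while u = 1, then back down while u = -1
print(simulate(0.0, [0.0, 1.0, 2.0], [1.0, -1.0], lambda x, u: u))
# [0.0, 1.0, 0.0]
```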
- `triu`: … to get the old behavior.
- Dependent parameters `d` and local dependent variables `w` have been replaced by the single dependent variables `v`.
- Binaries are no longer tied to a specific `octaveinterp` version, such that the new binaries work with future releases of Octave that increment the `octaveinterp` ABI version number.
- A command-line interface has been added: `casadi-cli`. At the moment, functionality is very limited: just `eval_dump`, to evaluate `Function`s that have been dumped to disk (options `dump`, `dump_in`).
- New CMake options: `-DWITH_IPOPT=ON -DWITH_BUILD_REQUIRED=ON`, `-DWITH_CPLEX=ON -DWITH_MOCKUP_CPLEX=ON`.
- Installs via `pip` are now available.
- The `master` branch has been renamed to `main`, and has different semantics: it will be the branch where new features are added regularly before they become an official release. The latest official release is available as the `latest` branch.
- `opti.set_domain(x,'integer')`
Published by jgillis 8 months ago
Published by jgillis 12 months ago