CasADi is a symbolic framework for numeric optimization, implementing automatic differentiation in forward and reverse modes on sparse matrix-valued computational graphs. It supports self-contained C code generation and interfaces state-of-the-art codes such as SUNDIALS and IPOPT. It can be used from C++, Python or MATLAB/Octave.
LGPL-3.0 License
Published by jgillis 3 months ago
Grab a binary from the table:
For Matlab/Octave, unzip in your home directory and adapt the path:
Check your installation:
Get started with the example pack. Onboarding pointers have been gathered by the community at our wiki.
LD_PRELOAD=<knitro_lin_path>/libiomp5.so

New operations:

- hypot(x,y) = sqrt(x*x+y*y)
- log1p(x) = log(1+x)
- expm1(x) = exp(x)-1
- remainder, with the semantics of the C operation
- fmin/fmax is now symmetric: jacobian(fmin(x,y),vertcat(x,y)) used to be [1 0] for x==y; it now yields [0.5 0.5].
- mmin/mmax
- logsumexp, which behaves like log(sum(exp(x))) but is numerically more accurate (and has no overflow issues).

vertcat/vcat, horzcat/hcat, etc. now return a DM type instead of a Sparsity type. #2549
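The numerical motivation for log1p, expm1, logsumexp, and the C-style fmod/remainder semantics can be illustrated with plain Python's math module (a sketch only, not CasADi code):

```python
import math

# log1p/expm1 avoid the cancellation that 1+x suffers for tiny x.
x = 1e-12
assert abs(math.log1p(x) - x) < 1e-24   # log(1+x) ~ x for small x
assert abs(math.expm1(x) - x) < 1e-24   # exp(x)-1 ~ x for small x

# logsumexp: log(sum(exp(v))) overflows naively for large entries;
# shifting by the maximum gives the same value without overflow.
v = [1000.0, 1000.0]
m = max(v)
lse = m + math.log(sum(math.exp(t - m) for t in v))
assert abs(lse - (1000.0 + math.log(2.0))) < 1e-9

# C fmod/remainder semantics (the behaviour CasADi's rem/remainder follow):
assert math.fmod(-7, 3) == -1.0      # result takes the sign of the dividend
assert (-7) % 3 == 2                 # Python % / numpy.mod: sign of the divisor
assert math.remainder(7, 3) == 1.0   # quotient rounded to nearest, not truncated
```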
mod has been renamed to rem, because its numerical behaviour is like the built-in MATLAB rem. The built-in MATLAB mod has no CasADi counterpart. CasADi-Python mod has been removed, because its numerical behaviour is not like numpy.mod. #2767. numpy.mod has no counterpart in CasADi; only fmod is equivalent.

Before, CasADi internals would avoid introducing redundant nodes during operations on a given expression, but the user was responsible for avoiding duplication when constructing that expression.
There is a function cse()
that you may apply to expressions:
x = MX.sym('x')
# User responsibility
sx = sin(x)
y = sqrt(sx)+sx # MX(@1=sin(x), (sqrt(@1)+@1))
# cse
y = sqrt(sin(x))+sin(x) # MX((sqrt(sin(x))+sin(x)))
y = cse(y) # MX(@1=sin(x), (sqrt(@1)+@1))
There is a boolean option cse
that may be used when constructing a Function
:
x = MX.sym('x')
f = Function('f',[x],[sqrt(sin(x))+sin(x)],{"cse":True})
f.disp(True)
f:(i0)->(o0) MXFunction
Algorithm:
@0 = input[0][0]
@0 = sin(@0)
@1 = sqrt(@0)
@1 = (@1+@0)
output[0][0] = @1
The technique scales favorably for large graphs.
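The idea behind cse can be sketched in plain Python on tuple-encoded expression trees; the cse function below is a toy stand-in for illustration, not CasADi's implementation:

```python
def cse(expr, seen=None):
    """Deduplicate structurally identical subtrees by interning them."""
    if seen is None:
        seen = {}
    if not isinstance(expr, tuple):   # leaf: a symbol or constant
        return expr
    # Rebuild children first, then intern this node in the table.
    node = (expr[0],) + tuple(cse(c, seen) for c in expr[1:])
    return seen.setdefault(node, node)

# sqrt(sin(x)) + sin(x): two structurally equal sin(x) subtrees...
y = ('add', ('sqrt', ('sin', 'x')), ('sin', 'x'))
z = cse(y)
# ...become one shared node after cse, like the @1 = sin(x) above.
print(z[1][1] is z[2])  # True
```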
MX now has atomic support for solving upper and lower triangular linear systems without allocating any linear solver instance. The operation handles the unit-diagonal case separately for efficiency and supports C code generation. To use the feature, call casadi.solve(A, b) (Python or MATLAB/Octave):
# Python
import casadi
A = casadi.MX.sym('A', casadi.Sparsity.upper(2))
b = casadi.MX.sym('b', 2)
x = casadi.solve(A, b)
// C++
casadi::MX A = casadi::MX::sym("A", casadi::Sparsity::upper(2));
casadi::MX b = casadi::MX::sym("b", 2);
casadi::MX x = solve(A, b); // for argument-dependent lookup, alternatively casadi::MX::solve(A, b) for static function
Cf. #2688.
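Numerically, the atomic operation amounts to back substitution; a minimal plain-Python sketch for an upper triangular system (illustrative only, not the CasADi implementation):

```python
def solve_upper(A, b):
    """Solve A x = b for upper triangular A by back substitution."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # All x[j] for j > i are already known; eliminate them.
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# 2x2 example: [[2, 1], [0, 4]] x = [4, 8]
x = solve_upper([[2.0, 1.0], [0.0, 4.0]], [4.0, 8.0])
print(x)  # [1.0, 2.0]
```

No factorization object is needed, which is why the operation can be code-generated as a simple loop.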
- SX/MX Function construction with free variables (i.e. symbols used in the output expressions that are not declared as inputs) now fails immediately unless the allow_free option is used.
- SX/MX Function construction now fails if there are duplicates in input names or output names, unless the allow_duplicate_io_names option is used. #2604
- custom_jacobian semantics changed: the Function must now return individual blocks (the Jacobian of one output w.r.t. one input).
- Jacobian sparsity patterns can be declared when subclassing (external or Callback) by overriding:
  bool has_jac_sparsity(casadi_int oind, casadi_int iind) const override;
  Sparsity get_jac_sparsity(casadi_int oind, casadi_int iind, bool symmetric) const override;
- Function.find_function can be used to retrieve Functions in a hierarchy.
- Outputs that are not used internally are now logged (cf. the dump_in option) as nan instead of the earlier 0. E.g. Ipopt nlp_grad_f has two outputs, f and grad_f_x. The f output is not used internally, so it will be logged as nan.
- Function objects with an external call can now be code-generated.
- mmin/mmax now support code generation.
- nlpsol/Opti.solver can now take an option 'detect_simple_bounds' (default False) that will promote general constraints to simple bounds (lbx/ubx).
- The CPLEX runtime is loaded as libcplex<CPLEX_VERSION>, where CPLEX_VERSION is read from environment variables. The same strategy applies to Gurobi.

The Integrator class, which solves initial-value problems in ODEs and DAEs, has been thoroughly refactored. Changes include:
- Output times are now passed via the integrator constructor. Unlike before, this support should work in combination with forward/adjoint sensitivity analysis (to any order) and sparsity pattern calculations. Cf. #2823.
- Support for (piecewise constant) controls u. The interface will keep track of changes to u and avoid integrating past such changes: for the Sundials (CVODES/IDAS) interfaces by setting a "stop time"; for fixed-step integrators by aligning the integration points with the grid points. Cf. #3025. Development versions of CasADi included support for this in a dedicated class called Simulator, but this class has now been removed (breaking) and the functionality has been ported to the Integrator class. If your code used cs.integrator('sim_function', 'cvodes', dae, tgrid, opts), you may replace it by cs.integrator('sim_function', 'cvodes', dae, 0, tgrid[1:], opts).
- Derivative calculations now go through the Function class, which makes the class more efficient for use with non-symbolic DAEs, including FMUs or other external models.
- The options t0, tf, output_t0 and grid have been deprecated and will result in a warning if used. Instead, the user can provide equivalent information via the integrator constructor, cf. the first point above.
- Backward states are no longer part of the DAE formulation. They are now derived from a user-specified number of sensitivity equations (nadj). This is a slight restriction on the possible problem formulations, but in turn allows for much better exploitation of adjoint sensitivity structure. The backward states remain in the integrator class function inputs and outputs, but have been renamed to align with their meaning: adj_xf means the adjoint seeds corresponding to xf (before they were called rx0), adj_p are the adjoint sensitivities corresponding to p (before called rqf), and so on.
- An option scale_abstol has been added to the Sundials integrators. If this is set to true, nominal values for the differential state and algebraic variables will be passed on to the solver. Cf. #3046.

See "multipoint_simulation" in the example pack for a good starting point.
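The fixed-step behaviour described above (integration points aligned with the output grid) can be sketched with a plain RK4 loop; this is an illustrative stand-in, not the CasADi implementation:

```python
def rk4_step(f, t, x, h):
    """One classical Runge-Kutta 4 step for dx/dt = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(f, x0, tgrid, steps_per_interval=10):
    """Return the state at every grid point; steps land exactly on grid points."""
    out, x = [x0], x0
    for t0, t1 in zip(tgrid[:-1], tgrid[1:]):
        h = (t1 - t0) / steps_per_interval   # step size divides the interval
        for k in range(steps_per_interval):
            x = rk4_step(f, t0 + k * h, x, h)
        out.append(x)
    return out

# dx/dt = -x, x(0) = 1  ->  x(t) = exp(-t)
traj = simulate(lambda t, x: -x, 1.0, [0.0, 0.5, 1.0])
```

Because every output time is also an integration point, no interpolation past a control change is ever needed.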
- triu: to get the old behavior.
- Dependent variables d and local dependent variables w have been replaced by the single dependent variables v.
- The binaries no longer pin a specific octaveinterp version, such that the new binaries work with future releases of Octave that increment the octaveinterp ABI version number.
- A command-line tool, casadi-cli, has been added. At the moment, functionality is very limited: just eval_dump, to evaluate Functions that have been dumped to disk (options dump, dump_in).
- -DWITH_IPOPT=ON -DWITH_BUILD_REQUIRED=ON
- -DWITH_CPLEX=ON -DWITH_MOCKUP_CPLEX=ON
- Packages installable with pip are now available.
- The master branch has been renamed to main, and has different semantics: it will be the branch where new features are added regularly before they become an official release. The latest official release is available as the latest branch.

opti.set_domain(x,'integer')
Published by jgillis 8 months ago

Published by jgillis 12 months ago
Grab a binary from the table:
For Matlab/Octave, unzip in your home directory and adapt the path:
Check your installation:
Get started with the example pack. Onboarding pointers have been gathered by the community at our wiki.
LD_PRELOAD=<knitro_lin_path>/libiomp5.so
.hypot(x,y) = sqrt(x*x+y*y)
log1p(x) = log(1+x)
expm1(x) = exp(x-1)
remainder
with the semantics of the C operation
fmin/
fmax` is now symmetric:jacobian(fmin(x,y),vertcat(x,y))
used to be [1 0] for x==y. Now yields [0.5 0.5].mmin
/mmax
logsumexp
which behaves like log(sum(exp(x)))
but is numerically more accurate (and no overflow issues).vertcat
/vcat
,horzcat
/hcat
, etc now return a DM
type instead of a Sparsity
type #2549
mod
has been renamed to rem
, because its numerical behaviour is like the builtin-Matlab rem
. The builtin-Matlab mod
has no CasADi counterpart. CasADi-Python mod
has been removed, because its numerical behaviour is not like numpy.mod
. #2767. numpy.mod
has no counterpart in CasADi; only fmod
is equivalent.Before, CasADi internals would avoid introducing redundant nodes during operations on a given expression, but the user was responsible to avoid duplication when constructing that expression.
There is a function cse()
that you may apply to expressions:
x = MX.sym('x')
# User responsibility
sx = sin(x)
y = sqrt(sx)+sx # MX(@1=sin(x), (sqrt(@1)+@1))
# cse
y = sqrt(sin(x))+sin(x) # MX((sqrt(sin(x))+sin(x)))
y = cse(y) # MX(@1=sin(x), (sqrt(@1)+@1))
There is a boolean option cse
that may be used when constructing a Function
:
x = MX.sym('x')
f = Function('f',[x],[sqrt(sin(x))+sin(x)],{"cse":True})
f.disp(True)
f:(i0)->(o0) MXFunction
Algorithm:
@0 = input[0][0]
@0 = sin(@0)
@1 = sqrt(@0)
@1 = (@1+@0)
output[0][0] = @1
The technique scales favorably for large graphs.
MX how has atomic support for solving upper and lower triangular linear systems without allocating any linear solver instance. The operation handles the case with unity diagonal separately for efficiency and supports C code generation. To use the feature, call casadi.solve(A, b)
(Python or MATLAB/Octave)
# Python
import casadi
A = casadi.MX.sym('A', casadi.Sparsity.upper(2))
b = casadi.MX.sym('b', 2)
x = casadi.solve(A, b)
// C++
casadi::MX A = casadi::MX::sym("A", casadi::Sparsity::upper(2));
casadi::MX b = casadi::MX::sym("b", 2);
casadi::MX x = solve(A, b); // for argument-dependent lookup, alternatively casadi::MX::solve(A, b) for static function
Cf. #2688.
SX
/MX
Function
construction with free variables (i.e. symbols used in the output expressions that are not declared as inputs) now fails immediately unless the allow_free
option is used.SX
/MX
Function
construction now fails if there are duplicates in input names or output names, unless the allow_duplicate_io_names
option is used #2604.custom_jacobian
semantics changed. The Function must now return individual blocks (Jacobian of an output w.r.t. to an input)external
or Callback
)bool has_jac_sparsity(casadi_int oind, casadi_int iind) const override;
Sparsity get_jac_sparsity(casadi_int oind, casadi_int iind, bool symmetric) const override;
Function.find_function
Can be used to retrieve Functions in a hierarchy.dump_in
option ) as nan
instead of earlier 0
. E.g. Ipopt nlp_grad_f
has two outputs, f
and grad_f_x
. The f
output is not used internally, so will be logged as nan
.Function
objects with an external
call can now be codegenerated.mmin
/mmax
now support codegenerationnlpsol
/Opti.solver
can now take an option 'detect_simple_bounds' (default False
) that will promote general constraints to simple bounds (lbx/ubx).libcplex<CPLEX_VERSION>
, where CPLEX_VERSION is read from environmental variables. Same strategy for Gurobi
.The Integrator
class, which solves initial-value problems in ODEs and DAEs has been thoroughly refactored. Changes include:
integrator
constructor. Unlike before, this support should now work in combination with forward/adjoint sensitivity analysis (to any order) and sparsity pattern calculations. Cf. #2823.u
). The interface will keep track of changes to u
and avoid integrating past such changes; for the Sundials (CVODES/IDAS) interfaces by setting a "stop time", for fixed step integrators by aligning the integration points with the grid points. Cf. #3025. Development versions of CasADi included support for this in a dedicated class, called Simulator
, but this class has now been removed (breaking) and the functionality has been ported to the Integrator
class.cs.integrator('sim_function', 'cvodes', dae, tgrid, opts)
, you may replace it by cs.integrator('sim_function', 'cvodes', dae, 0, tgrid[1:], opts)
.Function
class for derivative calculations - this makes the class now more efficient for use with non-symbolic DAEs, including FMUs or other external models.t0
, tf
, output_t0
and grid
have been deprecated and will result in a warning if used. Instead, the user can provide equivalent information via the integrator
constructor, cf. previous point.backward states
are no longer part of the DAE formulation. They are now derived from a user specified number of sensitivity equations (nadj
). This is a slight restriction in the possible problem formulations, but on the other hand allows for a much better exploitation of adjoint sensitivity structure. The the backward states remain in the integrator class function inputs and outputs, but have now been renamed to align with their meaning; adj_xf
means the adjoint seeds corresponding to xf
(before they were called rx0
), adj_p
are the adjoint sensitivities corresponding to p
(before called rqf
and so on.scale_abstol
has been added to the Sundials integrators. If this is set to true, nominal values for the differential state and algebraic variables will be passed on to the solver. Cf. #3046
See "multipoint_simulation" in the example pack for a good starting point.
triu:
to get the old behavior.d
and local dependent variables w
have been replaced by the single dependent variables v
.octaveinterp
version, such that the new binaries work with future releases of Octave that increment the octaveinterp
ABI version number.casadi-cli
. At the moment, functionality is very limited, just eval_dump
, to evaluate Function that have been dumped to the disk (options dump
,dump_in
)-DWITH_IPOPT=ON -DWITH_BUILD_REQUIRED=ON
-DWITH_CPLEX=ON -DWITH_MOCKUP_CPLEX=ON
pip
are now availablemaster
branch has been renamed to main
, and has different semantics: it will be the branch where new features are added regularly before they become an official release. Latest official release is available as latest
branch.Grab a binary from the table:
For Matlab/Octave, unzip in your home directory and adapt the path:
Check your installation:
Get started with the example pack. Onboarding pointers have been gathered by the community at our wiki.
LD_PRELOAD=<knitro_lin_path>/libiomp5.so
.hypot(x,y) = sqrt(x*x+y*y)
log1p(x) = log(1+x)
expm1(x) = exp(x-1)
remainder
with the semantics of the C operation
fmin/
fmax` is now symmetric:jacobian(fmin(x,y),vertcat(x,y))
used to be [1 0] for x==y. Now yields [0.5 0.5].mmin
/mmax
logsumexp
which behaves like log(sum(exp(x)))
but is numerically more accurate (and no overflow issues).vertcat
/vcat
,horzcat
/hcat
, etc now return a DM
type instead of a Sparsity
type #2549
mod
has been renamed to rem
, because its numerical behaviour is like the builtin-Matlab rem
. The builtin-Matlab mod
has no CasADi counterpart. CasADi-Python mod
has been removed, because its numerical behaviour is not like numpy.mod
. #2767. numpy.mod
has no counterpart in CasADi; only fmod
is equivalent.Before, CasADi internals would avoid introducing redundant nodes during operations on a given expression, but the user was responsible to avoid duplication when constructing that expression.
There is a function cse()
that you may apply to expressions:
x = MX.sym('x')
# User responsibility
sx = sin(x)
y = sqrt(sx)+sx # MX(@1=sin(x), (sqrt(@1)+@1))
# cse
y = sqrt(sin(x))+sin(x) # MX((sqrt(sin(x))+sin(x)))
y = cse(y) # MX(@1=sin(x), (sqrt(@1)+@1))
There is a boolean option cse
that may be used when constructing a Function
:
x = MX.sym('x')
f = Function('f',[x],[sqrt(sin(x))+sin(x)],{"cse":True})
f.disp(True)
f:(i0)->(o0) MXFunction
Algorithm:
@0 = input[0][0]
@0 = sin(@0)
@1 = sqrt(@0)
@1 = (@1+@0)
output[0][0] = @1
The technique scales favorably for large graphs.
MX how has atomic support for solving upper and lower triangular linear systems without allocating any linear solver instance. The operation handles the case with unity diagonal separately for efficiency and supports C code generation. To use the feature, call casadi.solve(A, b)
(Python or MATLAB/Octave)
# Python
import casadi
A = casadi.MX.sym('A', casadi.Sparsity.upper(2))
b = casadi.MX.sym('b', 2)
x = casadi.solve(A, b)
// C++
casadi::MX A = casadi::MX::sym("A", casadi::Sparsity::upper(2));
casadi::MX b = casadi::MX::sym("b", 2);
casadi::MX x = solve(A, b); // for argument-dependent lookup, alternatively casadi::MX::solve(A, b) for static function
Cf. #2688.
SX
/MX
Function
construction with free variables (i.e. symbols used in the output expressions that are not declared as inputs) now fails immediately unless the allow_free
option is used.SX
/MX
Function
construction now fails if there are duplicates in input names or output names, unless the allow_duplicate_io_names
option is used #2604.custom_jacobian
semantics changed. The Function must now return individual blocks (Jacobian of an output w.r.t. to an input)external
or Callback
)bool has_jac_sparsity(casadi_int oind, casadi_int iind) const override;
Sparsity get_jac_sparsity(casadi_int oind, casadi_int iind, bool symmetric) const override;
Function.find_function
Can be used to retrieve Functions in a hierarchy.dump_in
option ) as nan
instead of earlier 0
. E.g. Ipopt nlp_grad_f
has two outputs, f
and grad_f_x
. The f
output is not used internally, so will be logged as nan
.Function
objects with an external
call can now be codegenerated.mmin
/mmax
now support codegenerationnlpsol
/Opti.solver
can now take an option 'detect_simple_bounds' (default False
) that will promote general constraints to simple bounds (lbx/ubx).libcplex<CPLEX_VERSION>
, where CPLEX_VERSION is read from environmental variables. Same strategy for Gurobi
.The Integrator
class, which solves initial-value problems in ODEs and DAEs has been thoroughly refactored. Changes include:
integrator
constructor. Unlike before, this support should now work in combination with forward/adjoint sensitivity analysis (to any order) and sparsity pattern calculations. Cf. #2823.u
). The interface will keep track of changes to u
and avoid integrating past such changes; for the Sundials (CVODES/IDAS) interfaces by setting a "stop time", for fixed step integrators by aligning the integration points with the grid points. Cf. #3025. Development versions of CasADi included support for this in a dedicated class, called Simulator
, but this class has now been removed (breaking) and the functionality has been ported to the Integrator
class.cs.integrator('sim_function', 'cvodes', dae, tgrid, opts)
, you may replace it by cs.integrator('sim_function', 'cvodes', dae, 0, tgrid[1:], opts)
.Function
class for derivative calculations - this makes the class now more efficient for use with non-symbolic DAEs, including FMUs or other external models.t0
, tf
, output_t0
and grid
have been deprecated and will result in a warning if used. Instead, the user can provide equivalent information via the integrator
constructor, cf. previous point.backward states
are no longer part of the DAE formulation. They are now derived from a user specified number of sensitivity equations (nadj
). This is a slight restriction in the possible problem formulations, but on the other hand allows for a much better exploitation of adjoint sensitivity structure. The the backward states remain in the integrator class function inputs and outputs, but have now been renamed to align with their meaning; adj_xf
means the adjoint seeds corresponding to xf
(before they were called rx0
), adj_p
are the adjoint sensitivities corresponding to p
(before called rqf
and so on.scale_abstol
has been added to the Sundials integrators. If this is set to true, nominal values for the differential state and algebraic variables will be passed on to the solver. Cf. #3046
See "multipoint_simulation" in the example pack for a good starting point.
triu:
to get the old behavior.d
and local dependent variables w
have been replaced by the single dependent variables v
.octaveinterp
version, such that the new binaries work with future releases of Octave that increment the octaveinterp
ABI version number.casadi-cli
. At the moment, functionality is very limited, just eval_dump
, to evaluate Function that have been dumped to the disk (options dump
,dump_in
)-DWITH_IPOPT=ON -DWITH_BUILD_REQUIRED=ON
-DWITH_CPLEX=ON -DWITH_MOCKUP_CPLEX=ON
pip
are now availablemaster
branch has been renamed to main
, and has different semantics: it will be the branch where new features are added regularly before they become an official release. Latest official release is available as latest
branch.Grab a binary from the table:
For Matlab/Octave, unzip in your home directory and adapt the path:
Check your installation:
Get started with the example pack. Onboarding pointers have been gathered by the community at our wiki.
LD_PRELOAD=<knitro_lin_path>/libiomp5.so
.hypot(x,y) = sqrt(x*x+y*y)
log1p(x) = log(1+x)
expm1(x) = exp(x-1)
remainder
with the semantics of the C operation
fmin/
fmax` is now symmetric:jacobian(fmin(x,y),vertcat(x,y))
used to be [1 0] for x==y. Now yields [0.5 0.5].mmin
/mmax
logsumexp
which behaves like log(sum(exp(x)))
but is numerically more accurate (and no overflow issues).vertcat
/vcat
,horzcat
/hcat
, etc now return a DM
type instead of a Sparsity
type #2549
mod
has been renamed to rem
, because its numerical behaviour is like the builtin-Matlab rem
. The builtin-Matlab mod
has no CasADi counterpart. CasADi-Python mod
has been removed, because its numerical behaviour is not like numpy.mod
. #2767. numpy.mod
has no counterpart in CasADi; only fmod
is equivalent.Before, CasADi internals would avoid introducing redundant nodes during operations on a given expression, but the user was responsible to avoid duplication when constructing that expression.
There is a function cse()
that you may apply to expressions:
x = MX.sym('x')
# User responsibility
sx = sin(x)
y = sqrt(sx)+sx # MX(@1=sin(x), (sqrt(@1)+@1))
# cse
y = sqrt(sin(x))+sin(x) # MX((sqrt(sin(x))+sin(x)))
y = cse(y) # MX(@1=sin(x), (sqrt(@1)+@1))
There is a boolean option cse
that may be used when constructing a Function
:
x = MX.sym('x')
f = Function('f',[x],[sqrt(sin(x))+sin(x)],{"cse":True})
f.disp(True)
f:(i0)->(o0) MXFunction
Algorithm:
@0 = input[0][0]
@0 = sin(@0)
@1 = sqrt(@0)
@1 = (@1+@0)
output[0][0] = @1
The technique scales favorably for large graphs.
MX how has atomic support for solving upper and lower triangular linear systems without allocating any linear solver instance. The operation handles the case with unity diagonal separately for efficiency and supports C code generation. To use the feature, call casadi.solve(A, b)
(Python or MATLAB/Octave)
# Python
import casadi
A = casadi.MX.sym('A', casadi.Sparsity.upper(2))
b = casadi.MX.sym('b', 2)
x = casadi.solve(A, b)
// C++
casadi::MX A = casadi::MX::sym("A", casadi::Sparsity::upper(2));
casadi::MX b = casadi::MX::sym("b", 2);
casadi::MX x = solve(A, b); // for argument-dependent lookup, alternatively casadi::MX::solve(A, b) for static function
Cf. #2688.
SX
/MX
Function
construction with free variables (i.e. symbols used in the output expressions that are not declared as inputs) now fails immediately unless the allow_free
option is used.SX
/MX
Function
construction now fails if there are duplicates in input names or output names, unless the allow_duplicate_io_names
option is used #2604.custom_jacobian
semantics changed. The Function must now return individual blocks (Jacobian of an output w.r.t. to an input)external
or Callback
)bool has_jac_sparsity(casadi_int oind, casadi_int iind) const override;
Sparsity get_jac_sparsity(casadi_int oind, casadi_int iind, bool symmetric) const override;
Function.find_function
Can be used to retrieve Functions in a hierarchy.dump_in
option ) as nan
instead of earlier 0
. E.g. Ipopt nlp_grad_f
has two outputs, f
and grad_f_x
. The f
output is not used internally, so will be logged as nan
.Function
objects with an external
call can now be codegenerated.mmin
/mmax
now support codegenerationnlpsol
/Opti.solver
can now take an option 'detect_simple_bounds' (default False
) that will promote general constraints to simple bounds (lbx/ubx).libcplex<CPLEX_VERSION>
, where CPLEX_VERSION is read from environmental variables. Same strategy for Gurobi
- The `Integrator` class, which solves initial-value problems in ODEs and DAEs, has been thoroughly refactored. Changes include:
  - The output time grid is now passed directly to the `integrator` constructor. Unlike before, this support should now work in combination with forward/adjoint sensitivity analysis (to any order) and sparsity pattern calculations. Cf. #2823.
  - Support for piecewise constant controls (`u`). The interface will keep track of changes to `u` and avoid integrating past such changes; for the Sundials (CVODES/IDAS) interfaces by setting a "stop time", for fixed-step integrators by aligning the integration points with the grid points. Cf. #3025. Development versions of CasADi included support for this in a dedicated class called `Simulator`, but this class has now been removed and the functionality has been ported to the `Integrator` class.
  - Derivative calculations now go through the standard `Function` class - this makes the class more efficient for use with non-symbolic DAEs, including FMUs or other external models.
  - The options `t0`, `tf`, `output_t0` and `grid` have been deprecated and will result in a warning if used. Instead, the user can provide equivalent information via the `integrator` constructor, cf. the previous point.
  - Backward states are no longer part of the DAE formulation. They are now derived from a user-specified number of sensitivity equations (`nadj`). This is a slight restriction in the possible problem formulations, but on the other hand allows for much better exploitation of adjoint sensitivity structure. The backward states remain in the integrator class function inputs and outputs, but have now been renamed to align with their meaning: `adj_xf` means the adjoint seeds corresponding to `xf` (before they were called `rx0`), `adj_p` are the adjoint sensitivities corresponding to `p` (before called `rqf`), and so on.
  - An option `scale_abstol` has been added to the Sundials integrators. If this is set to true, nominal values for the differential state and algebraic variables will be passed on to the solver. Cf. #3046
- See "multipoint_simulation" in the example pack for a good starting point.
- `triu:` to get the old behavior.
- Dependent parameters `d` and local dependent variables `w` have been replaced by the single dependent variables `v`.
- Binaries no longer depend on a specific `octaveinterp` version, such that the new binaries work with future releases of Octave that increment the `octaveinterp` ABI version number.
- A command-line utility `casadi-cli` was added. At the moment, functionality is very limited, just `eval_dump`, to evaluate Functions that have been dumped to disk (options `dump`, `dump_in`).
- `-DWITH_IPOPT=ON -DWITH_BUILD_REQUIRED=ON`
- `-DWITH_CPLEX=ON -DWITH_MOCKUP_CPLEX=ON`
- Packages installable via `pip` are now available.
- The `master` branch has been renamed to `main`, and has different semantics: it will be the branch where new features are added regularly before they become an official release. The latest official release is available as the `latest` branch.

Published by jgillis over 1 year ago
```
f.disp(True)

f:(i0)->(o0) MXFunction
Algorithm:
@0 = input[0][0]
@0 = sin(@0)
@1 = sqrt(@0)
@1 = (@1+@0)
output[0][0] = @1
```

The technique scales favorably for large graphs.
MX now has atomic support for solving upper and lower triangular linear systems without allocating any linear solver instance. The operation handles the case with unity diagonal separately for efficiency and supports C code generation. To use the feature, call `casadi.solve(A, b)` (Python or MATLAB/Octave):

```python
# Python
import casadi
A = casadi.MX.sym('A', casadi.Sparsity.upper(2))
b = casadi.MX.sym('b', 2)
x = casadi.solve(A, b)
```

```cpp
// C++
casadi::MX A = casadi::MX::sym("A", casadi::Sparsity::upper(2));
casadi::MX b = casadi::MX::sym("b", 2);
casadi::MX x = solve(A, b); // argument-dependent lookup; alternatively casadi::MX::solve(A, b) as a static function
```

Cf. #2688.
- `SX`/`MX` `Function` construction with free variables (i.e. symbols used in the output expressions that are not declared as inputs) now fails immediately unless the `allow_free` option is used.
- `SX`/`MX` `Function` construction now fails if there are duplicates in input names or output names, unless the `allow_duplicate_io_names` option is used. #2604
- `custom_jacobian` semantics changed. The Function must now return individual blocks (Jacobian of an output w.r.t. an input).
- New overridable methods for derived Function classes (e.g. `external` or `Callback`):

```cpp
bool has_jac_sparsity(casadi_int oind, casadi_int iind) const override;
Sparsity get_jac_sparsity(casadi_int oind, casadi_int iind, bool symmetric) const override;
```

- `Function.find_function` can be used to retrieve Functions in a hierarchy.
- Inputs/outputs dumped to file (cf. the `dump_in` option) are written as `nan` instead of the earlier `0`. E.g. Ipopt `nlp_grad_f` has two outputs, `f` and `grad_f_x`. The `f` output is not used internally, so will be logged as `nan`.
Published by jgillis over 1 year ago
Published by jgillis about 4 years ago
Grab a binary from the table (for MATLAB, use the newest compatible version below):
(*) Check your Python console if you need 32-bit or 64-bit - the bitness should be printed at startup.
Unzip in your home directory and adapt the path:
Get started with the example pack.

```matlab
f.save('f.casadi') % Dump any CasADi Function to a file
f = Function.load('f.casadi') % Load it back in
```

This enables easy sharing of models/solver instances between Matlab/Python/C++ cross-platform, and enables a form of parallelization.
- Printing of timings is controlled by `print_time` (default true for QP and NLP solvers). Use `record_time` to make timings available through `f.stats()` without printing them.
- `map` with reduce arguments now has an efficient implementation (no copying/repmat)
- `eval_buffer`
- `FunctionInternal::finalize` no longer takes an options dict.
- `always_inline` and `never_inline` were added
- `is_diff_in` and `is_diff_out` were added
- `has_jacobian_sparsity`/`get_jacobian_sparsity`
- The `IM` type is removed from the public API (it was used to represent integer sparse matrices). Use `DM` instead.
- `linspace(0,1,3)` and `linspace(0.0,1,3)` now both return `[0 0.5 1]` instead of `[0 0 1]` for the former
- `MX` supports slicing with `MX` now (symbolic indexing).
- `veccat` of an empty list now returns `0-by-1` instead of `0-by-0`.
- `jtimes` output dimensions have changed when any of the arguments is empty.
- `0-by-1` in case of missing parameters.
- `cosh` derivative
- `convexify` was added
- For `interpolant`, new constructors were added that take dimensions instead of concrete vectors
- (`inline` option to true).
- `a(:)=b` now behaves like Matlab builtin matrices when `a` is a CasADi matrix. Before, only the first column of `a` would be touched by this statement. (#2363)
- The `MX` constructor treated a numeric row vector as a column vector. Now `size(MX(ones(1,4)))` returns `(1,4)` as expected. (#2366)
- `DM`, `MX`, `SX`
- `Opti('conic')`:
```python
opti = Opti()
x = opti.variable()
y = opti.variable()
p = opti.parameter()
opti.minimize(y**2+sin(x-y-p)**2)
opti.subject_to(x+y>=1)
opti.solver(nlpsolver,nlpsolver_options)
F = opti.to_function("F",[x,p,opti.lam_g],[x,y])
r = F(0,0.1,0)
```
- (3.5.1) Improved support for vertcatted inputs to `to_function`
- Handling of `max_iter` is more natural now: use `solve_limited()` to avoid exceptions being raised when iterations or time run out. No need to try/catch.
- `external` now looks for a `.dylib` file, not `.so`, for Mac
- `void* mem` changed to `int mem`
- `alloc_mem`, `init_mem` and `free_mem` have been purged; `checkout` and `release` replace them:

```cpp
int mem = checkout();
eval(arg, res, iw, w, mem);
release(mem);
```

- `mem.h` regression
- `main` and mex-related functions are now `c89`-compliant
- breaking: NLP solvers - `bound_consistency`, an option to post-process the primal and dual solution by projecting it on the bounds, introduced in 3.4, was changed to default off
- Sundials was patched to support multi-threading
- WORHP was bumped to v1.13
- SNOPT was bumped to v7.7
- SuperSCS (conic solver) was added
- OSQP (QP solver) was added
- CBC (LP solver) was added
- (3.5.3) AMPL was fixed to allow other solvers than IPOPT
- breaking: SQP Method: a `regularize_margin` option was added; the `regularize` (bool) option was removed. To get the effect of `regularize=true`, specify `convexify_strategy='regularize'`. Other strategies include clipping eigenvalues.
- CPLEX and Gurobi got support for SOS constraints
- Conic/qpsol interface extended for semidefinite programming and SOCP: `Gurobi`, `SuperSCS`, `CPLEX`
- breaking: Newton Rootfinder now supports a `line_search` option (default true)
- Rootfinder now throws an exception by default (`error_on_fail` option true) when failing to converge
- (3.5.5) Implemented constraints in IDAS and step size limits in CVODES/IDAS integrators
- `print_in`/`print_out` print inputs/outputs when numerically evaluating a function
- `dump_in`/`dump_out` dump to the file system
- `dump` dumps the function itself (loadable with `Function.load`)
- `DM.from_file` and `DM.to_file` with `MatrixMarket` and `txt` support
- `main=true`: `Function.generate_in`/`Function.nz_from_in`/`Function.nz_to_in` to help creating input text files.
- `Function.convert_in`/`Function.convert_out` to switch between list and dictionary arguments/results

Versions used in binaries ( see FAQ ):
Published by jgillis about 4 years ago
Published by jgillis about 4 years ago
Published by jgillis over 4 years ago
Published by jgillis about 5 years ago
Grab a binary from the table (for MATLAB, use the newest compatible version below):
(*) Check your Python console if you need 32bit or 64bit - bitness should be printed at startup.
Unzip in your home directory and adapt the path:
Get started with the example pack.
f.save('f.casadi') % Dump any CasADi Function to a file
f = Function.load('f.casadi') % Loads back in
This enables easy sharing of models/solver isntances beteen Matlab/Python/C++ cross-platform, and enables a form of parallelization.
print_time
, default true for QP and NLP solvers). Use record_time
to make timings available through f.stats()
without printing them.map
with reduce arguments now has an efficient implementation (no copying/repmat)eval_buffer
FunctionInternal::finalize
no longer takes options dict.always_inline
and never_inline
were addedis_diff_in
and is_diff_out
were addedIM
type is removed from public API (was used to represent integer sparse matrices). Use DM
instead.linspace(0,1,3)
and linspace(0.0,1,3)
now both return [0 0.5 1]
instead of [0 0 1]
for the formerMX
supports slicing with MX
now (symbolic indexing).veccat
of an empty list now returns 0-by-1
instead of 0-by-0
.jtimes
output dimensions have changed when any of the arguments is empty.0-by-1
in case of missing parameters.interpolant
, new constructors where added that takes dimensions instead of concrete vectorsinline
option to true).a(:)=b
now behaves like Matlab builtin matrices when a
is a CasADi matrix. Before, only the first column of a
would be touched by this statement. (#2363)MX
constructor treated a numeric row vector as column vector. Now size(MX(ones(1,4)))
returns (1,4)
as expected. (#2366)DM
,MX
,SX
Opti('conic')
opti = Opti()
x = opti.variable()
y = opti.variable()
p = opti.parameter()
opti.minimize(y**2+sin(x-y-p)**2)
opti.subject_to(x+y>=1)
opti.solver(nlpsolver,nlpsolver_options)
F = opti.to_function("F",[x,p,opti.lam_g],[x,y])
r = F(0,0.1,0)
(3.5.1) Improved support for vertcatted inputs to to_function
max_iter
is more natural now: use solve_limited()
to avoid exceptions to be raised when iterations or time runs out. No need to try/catch.external
now looks for a .dylib
file, not .so
void* mem
changed to int mem
alloc_mem
, init_mem
, free_mem
have been purged. checkout
and release
replace them. int mem = checkout();
eval(arg, res, iw, w, mem);
release(mem);
- breaking: For NLP solvers, `bound_consistency`, an option to post-process the primal and dual solution by projecting it on the bounds (introduced in 3.4), was changed to default off.
- Sundials was patched to support multi-threading.
- WORHP was bumped to v1.13.
- SNOPT was bumped to v7.7.
- SuperSCS (conic solver) was added.
- OSQP (QP solver) was added.
- CBC (LP solver) was added.
- breaking: SQP Method: a `regularize_margin` option was added; the `regularize` (bool) option was removed. To get the effect of `regularize=true`, specify `convexify_strategy='regularize'`. Other strategies include clipping eigenvalues.
- CPLEX and Gurobi got support for SOS constraints.
- The Conic/qpsol interface was extended for semidefinite programming and SOCP (`Gurobi`, `SuperSCS`, `CPLEX`).
- breaking: Newton Rootfinder now supports a `line_search` option (default true).
- Rootfinder now throws an exception by default (`error_on_fail` option true) when failing to converge.
- `print_in`/`print_out` print inputs/outputs when numerically evaluating a function.
- `dump_in`/`dump_out` dump them to the file system.
- `dump` dumps the function itself (loadable with `Function.load`).
- `DM.from_file` and `DM.to_file`, with `MatrixMarket` and `txt` support.
- With `main=true`: `Function.generate_in`/`Function.nz_from_in`/`Function.nz_to_in` help creating input text files.
- `Function.convert_in`/`Function.convert_out` switch between list and dictionary arguments/results.

Published by jgillis about 5 years ago
Grab a binary from the table (for MATLAB, use the newest compatible version below):
(*) Check your Python console to see whether you need 32-bit or 64-bit; the bitness should be printed at startup.
Unzip in your home directory and adapt the path:
Get started with the example pack.
f.save('f.casadi') % Dump any CasADi Function to a file
f = Function.load('f.casadi') % Loads back in
Published by jgillis about 6 years ago
Getting the error "CasADi is not running from its package context." in Python? Check that you have `casadi-py27-v3.4.5/casadi/casadi.py`. If you have `casadi-py27-v3.4.5/casadi.py` instead, that's not good; add an extra `casadi` folder.
Got stuck while installing? You may also try out CasADi without installing, right in your browser (pick Python or Octave/Matlab).
CasADi 3.3 introduced support for two sparse direct linear solvers, based on sparse direct QR factorization and sparse direct LDL factorization, respectively. In the release notes and in the code, it was not made clear enough that parts of these routines could be considered derivative works of CSparse and LDL, respectively, both under copyright of Tim Davis. In the current release, routines derived from CSparse and LDL are clearly marked as such and are to be considered derivative work under LGPL. All these routines reside inside the `casadi::Sparsity` class.
Since CasADi, CSparse and LDL all have the same open-source license (LGPL), this will not introduce any additional restrictions for users.
Since C code generated from CasADi is not LGPL (allowing CasADi users to use the generated code freely), all CSparse and LDL derived routines have been removed or replaced in CasADi's C runtime. This means that code generation for CasADi's 'qr' and 'ldl' is now possible without any additional license restrictions. A number of bugs have also been resolved.
CasADi 3.4 introduces differentiability for NLP solver instances in CasADi. Derivatives can be calculated efficiently with either forward or reverse mode algorithmic differentiation. We will detail this functionality in future publications, but in the meantime, feel free to reach out to Joel if you have questions about it. The implementation is based on applying derivative propagation rules, via the implicit function theorem, to the nonlinear KKT system. It is part of the NLP solver base class and should in principle work with any NLP solver, although the factorization and solution of the KKT system (based on the sparse QR above) is likely to be a speed bottleneck in applications. The derivative calculations also depend on accurate Lagrange multipliers being available, in particular with the correct signs for all multipliers. Functions for calculating parametric sensitivities for a particular system can be C-code generated.
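A compact sketch of the underlying result, in standard implicit-function-theorem notation (the symbols here are illustrative, not taken from the CasADi API): writing the primal-dual point as $z(p) = (x(p), \lambda(p))$ and the nonlinear KKT conditions as $F(z, p) = 0$, differentiating with respect to the parameters $p$ gives

```latex
% KKT system: F(z(p), p) = 0, with z = (x, \lambda) and parameters p
\frac{\partial F}{\partial z}\,\frac{\mathrm{d}z}{\mathrm{d}p}
  + \frac{\partial F}{\partial p} = 0
\quad\Longrightarrow\quad
\frac{\mathrm{d}z}{\mathrm{d}p}
  = -\left(\frac{\partial F}{\partial z}\right)^{-1}
    \frac{\partial F}{\partial p}
```

Forward mode propagates parameter directions through this relation; reverse mode solves with the transposed KKT matrix. Either way, each sensitivity requires factorizing $\partial F/\partial z$, which is why the KKT factorization is the likely bottleneck mentioned above.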
The parametric sensitivity analysis for NLP solvers, detailed above, is only as good as the multipliers you provide to it. Multipliers from an interior point method such as IPOPT are usually not accurate enough to be used for the parametric sensitivity analysis, which in particular relies on knowledge of the active set. For this reason, we have started work on a primal-dual active set method for quadratic programming. The method relies on the same factorization of the linearized KKT system as the parametric sensitivity analysis and will support C code generation. The solver is available as the "activeset" plugin in CasADi. The method is still work-in-progress and in particular performs poorly if the Hessian matrix is not strictly positive definite.
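For context, the system such an active-set method repeatedly factorizes can be sketched in textbook form (notation illustrative, not from the release notes): with a working set $\mathcal{W}$ of constraints held active, each iteration solves the equality-constrained KKT system

```latex
% QP: minimize 1/2 x^T H x + g^T x, with the working-set constraints active
\begin{bmatrix} H & A_{\mathcal{W}}^{T} \\ A_{\mathcal{W}} & 0 \end{bmatrix}
\begin{bmatrix} x \\ \lambda_{\mathcal{W}} \end{bmatrix}
=
\begin{bmatrix} -g \\ b_{\mathcal{W}} \end{bmatrix}
```

When $H$ is not strictly positive definite, this matrix can become singular or indefinite for some working sets, which matches the caveat above about poor performance in that case.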
- The `describe` methods in Matlab now follow the index-1 based convention.
- Added `show_infeasibilities` to help debugging infeasible problems.
- Added `opti.lbg`, `opti.ubg`.
- Limits of `2^31-1` were lifted to `2^63-1` by changing CasADi integer types to `casadi_int` (`long long`).
- Added `for-loop equivalents` to the user's guide.
- `solver.stats()` for `nlpsol`/`conic`.
- Added an `evalf` function to numerically evaluate an SX/MX matrix that does not depend on any symbols.
- Added `diff` and `cumsum` (following the Matlab convention).
- Built in `Release` mode once again (as was always intended).
- Fixed builds with `-Werror` for `gcc-6` and `gcc-7`.
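To see why the `casadi_int` change above matters, a quick back-of-the-envelope check (the matrix size is an arbitrary illustration):

```python
# Entry counts of a dense n-by-n matrix quickly exceed what a 32-bit
# signed index can address; 64-bit (long long / casadi_int) is ample.
n = 50_000
nnz = n * n                     # 2.5 billion entries

INT32_MAX = 2**31 - 1           # old index limit
INT64_MAX = 2**63 - 1           # new index limit (casadi_int)

print(nnz > INT32_MAX)          # True  -> would overflow a 32-bit index
print(nnz <= INT64_MAX)         # True  -> fits comfortably in 64 bits
```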
Published by jgillis over 6 years ago
Published by jgillis over 6 years ago
Published by jgillis over 6 years ago
- Octave bumped to `4.2.2`.
- WORHP solver fixed.
- Docstrings are back for all CasADi methods.
Published by jgillis over 6 years ago