cotengra

Hyper optimized contraction trees for large tensor networks and einsums

Apache-2.0 License

Downloads: 42.4K · Stars: 180
cotengra - v0.6.2 (Latest Release)

Published by jcmgray 5 months ago

Bug fixes

  • Fix final (output) contractions being mistakenly marked as not tensordot-able.
  • When contracting with implementation="autoray", don't require a backend to have both einsum and tensordot, instead fall back to cotengra's own implementations.
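
As a rough illustration of the code path in question, here is a minimal sketch of contracting a tree with implementation="autoray"; the tiny equation, the array_contract_tree call and the shapes are illustrative rather than prescriptive:

```python
import numpy as np
import cotengra as ctg

x, y = np.random.rand(8, 8), np.random.rand(8, 8)

# build a contraction tree for a tiny matrix-multiply-like contraction
tree = ctg.array_contract_tree(
    inputs=[("a", "b"), ("b", "c")],
    output=("a", "c"),
    shapes=[x.shape, y.shape],
)

# contract via the autoray-dispatched implementation, which now falls
# back to cotengra's own routines if the backend lacks einsum or tensordot
out = tree.contract([x, y], implementation="autoray")
```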

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.6.1...v0.6.2

cotengra - v0.6.1

Published by jcmgray 5 months ago

What's Changed

Breaking changes

  • The number of workers initialized (for non-distributed pools) is now set, in order of preference, by: 1. the environment variable COTENGRA_NUM_WORKERS, 2. the environment variable OMP_NUM_THREADS, or 3. os.cpu_count().
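
For example, a minimal sketch of pinning the pool size via the new highest-priority option (setting it before importing cotengra is a cautious assumption about when the value is read):

```python
import os

# explicitly set the worker count, overriding both OMP_NUM_THREADS
# and the os.cpu_count() fallback
os.environ["COTENGRA_NUM_WORKERS"] = "4"

import cotengra as ctg  # import after setting the variable
```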

Enhancements

  • add RandomGreedyOptimizer, a lightweight and performant randomized greedy optimizer that eschews both hyper-parameter tuning and full contraction tree construction, making it suitable for very large contractions (10,000s of tensors+); see the sketch after this list.
  • add optimize_random_greedy_track_flops which runs N trials of (random) greedy path optimization, whilst computing the FLOP count simultaneously. This, or its accelerated Rust counterpart in cotengrust, is the driver for the above optimizer.
  • add parallel="threads" backend, and make it the default for RandomGreedyOptimizer when cotengrust is present, since its version of optimize_random_greedy_track_flops releases the GIL.
  • significantly improve both the speed and memory usage of SliceFinder
  • alias tree.total_cost() to tree.combo_cost()
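
A minimal sketch of driving a path search with the new optimizer; the rand_equation test helper and the keyword values shown are illustrative assumptions:

```python
import cotengra as ctg

# a small random test network, purely for illustration
inputs, output, shapes, size_dict = ctg.utils.rand_equation(n=100, reg=3, seed=42)

# lightweight randomized greedy search: no hyper-parameter tuning and
# no full tree construction during the search itself
opt = ctg.RandomGreedyOptimizer(max_repeats=64, parallel="threads")

# build the tree for the best path found and inspect its cost
tree = ctg.array_contract_tree(inputs, output, size_dict, optimize=opt)
print(tree.contraction_cost(), tree.contraction_width())
```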

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.6.0...v0.6.1

cotengra - v0.6.0

Published by jcmgray 7 months ago

Bug fixes

  • all input node legs and pre-processing steps are now calculated lazily, allowing slicing of indices including those 'simplified' away (#31).
  • make tree.peak_size more accurate, by taking the maximum assuming the left, right and parent intermediates are all present at the same time.

Enhancements

  • add simulated annealing tree refinement (in path_simulated_annealing.py), based on "Multi-Tensor Contraction for XEB Verification of Quantum Circuits" by Gleb Kalachev, Pavel Panteleev, Man-Hong Yung (arXiv:2108.05665), and the "treesa" implementation in OMEinsumContractionOrders.jl by Jin-Guo Liu and Pan Zhang. This can be accessed most easily by supplying opt = HyperOptimizer(simulated_annealing_opts={}); see the sketch after this list.
  • add ContractionTree.plot_flat: a new method for plotting the contraction tree as a flat diagram showing all indices on every intermediate (without requiring any graph layouts), which is useful for visualizing and understanding small contractions.
  • HyperGraph.plot: support showing hyper outer indices, multi-edges, and automatic unique coloring of nodes and indices (to match plot_flat).
  • add ContractionTree.plot_circuit for plotting the contraction tree as a circuit diagram, which is fast and useful for visualizing the traversal ordering for larger trees.
  • add ContractionTree.restore_ind for 'unslicing' or 'unprojecting' previously removed indices.
  • ContractionTree.from_path: add option complete to automatically complete the tree given an incomplete path (usually disconnected subgraphs - #29).
  • add ContractionTree.get_incomplete_nodes for finding all uncontracted childless-parentless node groups.
  • add ContractionTree.autocomplete for automatically completing a contraction tree, using above method.
  • tree.plot_flat: show any preprocessing steps and optionally list sliced indices
  • add get_rng as a single entry point for getting or propagating a random number generator, to help determinism.
  • set autojit="auto" for contractions, which by default turns on jit for backend="jax" only.
  • add tree.describe for various levels of information about a tree, e.g. tree.describe("full") and tree.describe("concise").
  • add ctg.GreedyOptimizer and ctg.OptimalOptimizer to the top namespace.
  • add ContractionTree.benchmark for automatically assessing hardware performance vs theoretical cost.
  • contraction trees now have a get_default_objective method that returns the objective function they were optimized with; this is now picked up automatically for further refinement or scoring.
  • change the default 'sub' optimizer on divisive partition building algorithms to be 'greedy' rather than 'auto'. This might make individual trials slightly worse but makes each cheaper, see discussion: #27.
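
A combined sketch of a couple of the features above (simulated annealing refinement and tree.describe); the rand_equation helper and the max_repeats value are illustrative assumptions:

```python
import cotengra as ctg

# random test network, purely for illustration
inputs, output, shapes, size_dict = ctg.utils.rand_equation(n=32, reg=3, seed=42)

# hyper-optimization with simulated annealing tree refinement enabled;
# an empty dict switches it on with default settings
opt = ctg.HyperOptimizer(simulated_annealing_opts={}, max_repeats=16)
tree = opt.search(inputs, output, size_dict)

# summarize the resulting tree at two levels of detail
print(tree.describe("concise"))
print(tree.describe("full"))
```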

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.5.6...v0.6.0

cotengra - v0.5.6

Published by jcmgray 11 months ago

Bug fixes

  • fix a rare but infuriating thread-safety bug where ReusableHyperOptimizer could return the wrong tree, seen especially on GitHub Actions

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.5.5...v0.5.6

cotengra - v0.5.5

Published by jcmgray 11 months ago

Enhancements

  • HyperOptimizer: by default simply warn if an individual trial fails, rather than raising an exception. This is to ensure rare failures do not spoil an entire optimization run. The behavior can be controlled with the on_trial_error argument.
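
For instance, a sketch of opting back into strict behaviour; the value "raise" is an assumption about the argument beyond the default warning mode described above:

```python
import cotengra as ctg

# fail fast on any trial error instead of just emitting a warning
# ("raise" is assumed here; warning is the documented default)
opt = ctg.HyperOptimizer(on_trial_error="raise")
```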

Bug fixes

  • fixed a bug in the greedy optimizer that produced negative and otherwise inaccurate scores.
  • fixed a bug for contractions with many inputs combined with preprocessing steps

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.5.4...v0.5.5

cotengra - v0.5.4

Published by jcmgray about 1 year ago

Bug fixes

  • the auto and auto-hq optimizers are now safe to run under multi-threading.
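
A sketch of the kind of usage this makes safe, contracting independent random networks from several threads with the auto-hq preset (the helper and sizes are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np
import cotengra as ctg

def contract_random_network(seed):
    # each thread builds and contracts its own small random network
    inputs, output, shapes, size_dict = ctg.utils.rand_equation(n=12, reg=3, seed=seed)
    arrays = [np.random.rand(*s) for s in shapes]
    return ctg.array_contract(arrays, inputs, output, optimize="auto-hq")

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(contract_random_network, range(8)))
```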

cotengra - v0.5.3

Published by jcmgray about 1 year ago

  • einsum, einsum_tree and einsum_expression: add support for all numpy input formats, including interleaved indices and ellipses; see the sketch after this list.
  • remove some hidden opt_einsum dependence (via a PathOptimizer method)
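
A quick sketch of the interleaved ('numpy style') input format now accepted alongside the standard equation form:

```python
import numpy as np
import cotengra as ctg

x = np.random.rand(3, 4)
y = np.random.rand(4, 5)

# standard equation form
a = ctg.einsum("ab,bc->ac", x, y)

# interleaved form: operands alternate with their index lists,
# with the output indices given last
b = ctg.einsum(x, [0, 1], y, [1, 2], [0, 2])

np.testing.assert_allclose(a, b)
```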

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.5.2...v0.5.3

cotengra - v0.5.2

Published by jcmgray about 1 year ago

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.5.1...v0.5.2

cotengra - v0.5.1

Published by jcmgray about 1 year ago

cotengra - v0.5.0

Published by jcmgray about 1 year ago

cotengra - v0.4.0

Published by jcmgray about 1 year ago

  • remove all hard dependencies
  • implement presets and cotengra versions of 'greedy', 'optimal', 'auto', 'auto-hq' (see the sketch after this list)
  • cotengrust integration for fast greedy/optimal subtree reconfiguration
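
A sketch comparing the new built-in presets on a small random equation; the rand_equation helper and the cost method used are illustrative assumptions:

```python
import cotengra as ctg

# small enough that even the 'optimal' preset is cheap to run
inputs, output, shapes, size_dict = ctg.utils.rand_equation(n=8, reg=3, seed=0)

# the built-in presets implemented in this release, none of which
# require optional dependencies such as kahypar or opt_einsum
for preset in ("greedy", "optimal", "auto", "auto-hq"):
    tree = ctg.array_contract_tree(inputs, output, size_dict, optimize=preset)
    print(preset, tree.contraction_cost())
```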

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.3.2...v0.4.0

cotengra - v0.3.2

Published by jcmgray about 1 year ago

  • fix a bug in optimize_greedy

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.3.1...v0.3.2

cotengra - v0.3.1

Published by jcmgray about 1 year ago

What's Changed

  • faster index computations, and candidate faster greedy and optimal implementations
  • allow single term pre-processing in order to support arbitrary einsums
  • change 'flops' to count scalar operations everywhere, rather than being specialised to real float dtypes (halving the reported cost in various places)
  • more preparation to fully decouple from opt_einsum
  • more robust path caching for many parallel processes
  • add utils.perverse_equation and utils.tree_equation
  • add CI testing
  • Adding fallback for kahypar version number check by @emprice in https://github.com/jcmgray/cotengra/pull/25

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.3.0...v0.3.1

cotengra - v0.3.0

Published by jcmgray about 1 year ago

  • add ContractionTree.slice_key
  • default to cotengra bmm implementation for numpy only
  • unify scoring objectives in scoring.py
  • update compressed contraction path finding
  • centralize and internalize necessary opt_einsum functionality to make it an optional dep
  • add ContractionTree.get_eq_sliced and friends
  • remove obsolete SlicedContractor
  • add hypergraph.py for hypergraph functionality
  • use quimb not hypernetx for rubberband plots
  • fix compressed contraction missing index bug
  • fix contract_expression for single terms
  • allow pairwise contractions with 52+ indices
  • initial support for cuquantum contraction
  • add lazy output chunked example
  • add approx contraction example using quimb
  • make ContractionTreeCompressed metrics default to compressed versions
  • suppress kahypar warning by removing dangling indices pre partitioning

Full Changelog: https://github.com/jcmgray/cotengra/compare/v0.2.0...v0.3.0

cotengra - v0.2.0

Published by jcmgray over 1 year ago

Initial release on PyPI.

Full Changelog: https://github.com/jcmgray/cotengra/commits/v0.2.0