TensorLy: Tensor Learning in Python.
We're excited to release version 0.8.2 of TensorLy. As always, a huge thank you to the core team and all the contributors!
This version adds many improvements to TensorLy 0.8, including:
We now provide an ALS-based method for tensor ring decomposition, as well as a randomized sampling-based version, thanks to @OsmanMalik in https://github.com/tensorly/tensorly/pull/501 and https://github.com/tensorly/tensorly/pull/511
We are now deprecating MXNet, and both the MXNet and TensorFlow backends will be removed in the near future.
We provide a neat, clean, simple-to-use interface for all the major variants of SVD, and it keeps improving!
Added pip caching to CI by @SauravMaheshkar in https://github.com/tensorly/tensorly/pull/514
Full Changelog: https://github.com/tensorly/tensorly/compare/0.8.1...0.8.2
Published by JeanKossaifi almost 2 years ago
We are releasing a new version of TensorLy, long in the making, with a host of major improvements, new features, better documentation, bug fixes and overall quality-of-life improvements!
There are two main ways to implement tensor algebraic methods: explicitly, through matrix operations on unfolded tensors, or by expressing every contraction through einsum.
We improved the tenalg backend: you can now transparently dispatch all tensor algebraic operations to the backend's einsum:
import tensorly as tl
# Tensor algebra
from tensorly import tenalg
# Dispatch all operations to einsum
tenalg.set_backend('einsum')
Now all tenalg functions will call einsum under the hood!
In addition, for each einsum call, you can now use opt-einsum to compute a (near) optimal contraction path and cache it with just one call!
# New opt-einsum plugin
from tensorly.plugins import use_opt_einsum
# Transparently compute and cache contraction path using opt-einsum
use_opt_einsum('optimal')
Switch back to the original backend's einsum:
# Revert to the backend's default einsum
from tensorly.plugins import use_default_einsum
use_default_einsum()
If you want to accelerate your computation, you probably want to use the GPU.
TensorLy has supported GPU execution transparently for a while, through its MXNet, CuPy, TensorFlow, PyTorch and, more recently, JAX backends.
Now you can also get efficient tensor contractions on GPU using NVIDIA's cuQuantum library!
Now any function in the `tenalg` module can run its contractions on GPU through cuQuantum:
# New cuQuantum plugin
from tensorly.plugins import use_cuquantum
# Transparently run tensor contractions on GPU via cuQuantum
use_cuquantum('optimal')
from tensorly.decomposition import parafac

# Create a new tensor on GPU
tensor = tl.randn((32, 256, 256, 3), device='cuda')
# Decompose it with CP, keeping 5% of the parameters
parafac(tensor, rank=0.05, init='random', n_iter_max=10)
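For intuition on the fractional rank above: a rank-R CP decomposition of a tensor of shape (I_1, …, I_N) stores roughly R · (I_1 + … + I_N) parameters, versus the product of the dimensions for the dense tensor, so keeping a fraction f of the parameters means R ≈ f · prod(shape) / sum(shape). Below is a minimal pure-Python sketch of that computation; the exact rounding TensorLy's validate_cp_rank uses may differ.

```python
from math import prod

def cp_rank_from_fraction(shape, fraction):
    """Integer CP rank whose factors hold ~`fraction` of the tensor's entries.

    A rank-R CP decomposition of a tensor with `shape` stores about
    R * sum(shape) parameters, versus prod(shape) for the dense tensor.
    """
    n_params_tensor = prod(shape)
    n_params_per_rank = sum(shape)
    return max(1, round(fraction * n_params_tensor / n_params_per_rank))

# e.g. the (32, 256, 256, 3) tensor above, keeping ~5% of the parameters
rank = cp_rank_from_fraction((32, 256, 256, 3), 0.05)
```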
We now provide CorrIndex, a correlation-invariant index for comparing decompositions.
This release brings a new multi-linear partial least squares regression, as first introduced by Rasmus Bro, exposed in a convenient scikit-learn-like class, CP_PLSR.
We have a new tensor_train_OI class for tensor-train decomposition via orthogonal iteration.
We now have a unified interface for Singular Value Decomposition: svd_interface.
It has support for resolving sign indeterminacy, returning a non-negative output, missing values (masked input), and various computation methods, all in one neat interface!
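On sign indeterminacy: flipping the sign of a left singular vector together with its matching right singular vector leaves the reconstruction U·diag(S)·Vᵀ unchanged, so the SVD is only defined up to these sign flips. One common convention is to flip each pair so the largest-magnitude entry of the left vector is positive. Below is a minimal pure-Python sketch of that idea (with V given row-wise, as Vᵀ); svd_interface's exact convention may differ.

```python
def resolve_sign(U, V):
    """Flip matching singular-vector pairs to a canonical sign.

    U is a matrix (list of rows) whose columns are left singular vectors;
    V holds the right singular vectors as rows (i.e. V-transpose).
    Flipping both sides of a pair keeps the reconstruction unchanged.
    """
    n_rows, n_comp = len(U), len(U[0])
    U = [row[:] for row in U]
    V = [row[:] for row in V]
    for r in range(n_comp):
        column = [U[i][r] for i in range(n_rows)]
        dominant = max(column, key=abs)
        if dominant < 0:
            for i in range(n_rows):
                U[i][r] = -U[i][r]
            V[r] = [-v for v in V[r]]
    return U, V
```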
TensorLy now includes real-world datasets well-suited for tensor analysis, which you can directly download and load in a ready-to-use form!
Systems serology is a new technology that examines the antibodies from a patient’s serum, aiming to comprehensively profile the interactions between antibodies and Fc receptors, alongside other immunological and demographic data. Here, we apply CP decomposition to a COVID-19 systems serology dataset, in which serum antibodies of 438 samples collected from COVID-19 patients were systematically profiled by their binding behavior to SARS-CoV-2 (the virus that causes COVID-19) antigens and Fc-receptor activities. The data is formatted as a three-mode tensor of samples, antigens, and receptors. Samples are labeled by the status of the patients.
IL-2 signals through the Jak/STAT pathway and transmits a signal into immune cells by phosphorylating STAT5 (pSTAT5). When phosphorylated, STAT5 causes various immune cell types to proliferate and, depending on whether regulatory cells (regulatory T cells, or Tregs) or effector cells (helper T cells, natural killer cells, and cytotoxic T cells, or Thelpers, NKs, and CD8+ cells) respond, IL-2 signaling can result in immunosuppression or immunostimulation, respectively. Thus, when designing a drug meant to repress the immune system, potentially for the treatment of autoimmune diseases, an IL-2 that primarily enacts a response in Tregs is desirable. Conversely, when designing a drug meant to stimulate the immune system, potentially for the treatment of cancer, an IL-2 that primarily enacts a response in effector cells is desirable. To achieve either signaling bias, IL-2 variants with altered affinity for its various receptors (IL2Rα or IL2Rβ) have been designed. Furthermore, IL-2 variants with multiple binding domains have been designed, as multivalent IL-2 may act as a more effective therapeutic.
The data contains the responses of 8 different cell types to 13 different IL-2 mutants, at 4 different timepoints and 12 standardized IL-2 concentrations. It is formatted as a 4th-order tensor of shape (13 x 4 x 12 x 8), with the dimensions representing IL-2 mutant, stimulation time, dose, and cell type, respectively.
A kinetic fluorescence dataset, well-suited for PARAFAC and multi-way partial least squares regression (N-PLS).
The data is represented as a four-way dataset with the modes: concentration, excitation wavelength, emission wavelength, and time.
Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral sensor data. It consists of 145 × 145 pixels and 220 spectral reflectance bands in the wavelength range 0.4–2.5 µm.
We now automatically check code formatting: the CI tests the code style against the Black style guide.
In addition to these big features, this release also comes with a whole lot of improvements, better documentation and bug fixes!
Non-exhaustive list of changes:
tl.shape now returns a tuple for the PyTorch backend, by @MarieRoald in https://github.com/tensorly/tensorly/pull/357
keepdims support in tl.sum with the PyTorch backend, by @MarieRoald in https://github.com/tensorly/tensorly/pull/356
tl.clip for the PyTorch and TensorFlow backends, by @MarieRoald in https://github.com/tensorly/tensorly/pull/355
This release is only possible thanks to a lot of voluntary work by the whole TensorLy team, who work hard to maintain and improve the library! Thanks in particular to the core devs!
Big thanks to all the new contributors and welcome to the TensorLy community!
Full Changelog: https://github.com/tensorly/tensorly/compare/0.7.0...0.8.0
Published by JeanKossaifi almost 3 years ago
In this new version of TensorLy, the whole team has been working hard to bring you lots of improvements, from new decompositions to new functions, faster code and better documentation.
We added some great new tensor decompositions, including
We added a brand-new tensordot that supports batching!
[ Adding a new Batched Tensor Dot + API simplification #309 ]
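To illustrate what batching means here: the batch mode is carried along unchanged while the contraction happens independently for each batch entry. Below is a minimal pure-Python sketch of the simplest batched case, a batched matrix product; TensorLy's tensordot generalizes this to arbitrary contracted and batched modes.

```python
def batched_tensordot(a, b):
    """Contract the shared mode K for every batch entry.

    a has shape (B, I, K) and b has shape (B, K, J), both as nested
    lists; the batch mode B is never contracted, so the result has
    shape (B, I, J) -- one independent matrix product per batch entry.
    """
    B, I, K = len(a), len(a[0]), len(a[0][0])
    J = len(b[0][0])
    out = [[[0.0] * J for _ in range(I)] for _ in range(B)]
    for n in range(B):
        for i in range(I):
            for j in range(J):
                out[n][i][j] = sum(a[n][i][k] * b[n][k][j] for k in range(K))
    return out
```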
Normalization for Tucker factors, #283 thanks to @caglayantuna and @cohenjer!
Added a convenient function to compute the gradient of the difference norm between a CP and dense tensor, #294, thanks to @aarmey
In an effort to make the TensorLy backend even more flexible and fast, we refactored the main backend as well as the tensor algebra backend, making lots of small quality-of-life improvements in the process! In particular, reconstructing a TT-matrix is now much more efficient.
[ Backend refactoring : use a BackendManager class and use it directly as tensorly.backend's Module class #330, @JeanKossaifi ]
Improvements to Parafac2 (convergence criteria, etc.) #267, thanks to @MarieRoald
HALS convergence fix, @MarieRoald and @IsabellLehmann, #271
Ensured consistency between the object-oriented API and the functional one, thanks to @yngvem, #268
Added lstsq to backend, #305, thanks to @merajhashemi
Fix documentation for case insensitive clashes between the function and class: https://github.com/tensorly/tensorly/issues/219
Added random-seed for TT-cross, #304 thanks to @yngvem
Fix svd sign indeterminacy #216, thanks to @merajhashemi
Rewrote vonneumann_entropy to handle multidimensional tensors. #270, thanks to @taylorpatti
Added a check for the case where all modes are fixed, in which case the initialization is returned directly, #325, thanks to @ParvaH
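For reference on the entropy mentioned a few items above: the von Neumann entropy of a density matrix with eigenvalues λ_i is −Σ λ_i ln λ_i. Given precomputed eigenvalues, a minimal pure-Python sketch looks as follows; TensorLy's vonneumann_entropy computes the eigenvalues from the tensor itself.

```python
import math

def von_neumann_entropy(eigenvalues):
    # -sum(p * ln p) over nonzero eigenvalues; 0 * ln 0 is taken as 0.
    return -sum(p * math.log(p) for p in eigenvalues if p > 0)

# A maximally mixed 2-state system has entropy ln(2)
entropy = von_neumann_entropy([0.5, 0.5])
```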
We now provide a prod function, tensorly.utils.prod, that works like math.prod for users on Python < 3.8.
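For illustration, such a fallback can be a one-liner over functools.reduce (this is only a sketch; tensorly.utils.prod is the actual utility):

```python
from functools import reduce
import operator

def prod(iterable, start=1):
    # Multiply all elements together; like math.prod, an empty
    # iterable returns the start value (1 by default).
    return reduce(operator.mul, iterable, start)
```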
All backends now support matmul and tensordot (#306), as well as sin, cos, flip, argsort, count_nonzero, cumsum, any, lstsq and trace.
Fixed NN-Tucker HALS sparsity coefficient issue, thanks to @caglayantuna #295
Fixed svd for PyTorch < 1.8 #312, thanks to @merajhashemi
Fixed dot and matmul in PyTorch and TensorFlow #313, thanks to @merajhashemi
Fixed tl.partial_unfold #315, thanks to @merajhashemi
Fixed behaviour of diag for the TensorFlow backend.
Fixed tl.partial_svd: it now explicitly checks for NaN values, #318, thanks to @merajhashemi
Fixed diag function for the TensorFlow and PyTorch backends #321, thanks to @caglayantuna
Fixed singular vectors to be orthonormal #320, thanks to @merajhashemi
Fixed active-set and HALS tests #323, thanks to @caglayantuna
Added a test for matmul #322, thanks to @merajhashemi
Sparse backend usage fix by @caglayantuna in #280
Published by JeanKossaifi over 3 years ago
CP: l2 reg
CP: sparsity
Added fixed_modes for CP and Tucker.
Masked Tucker
Sparse backend
And many small improvements and bug fixes!
Standardisation of the names:
Kruskal-tensors have been renamed cp_tensors
Matrix-product-state has now been renamed tensor-train
Rank selection: validate_cp_rank, with the option to set rank='same' or rank=float to automatically determine the rank.
Published by JeanKossaifi over 3 years ago
This version brings lots of new functionality and improvements, and fixes many small bugs and issues. We have a new theme for the TensorLy project's documentation, our new TensorLy Sphinx theme, which we've open-sourced and which you can easily use in your own projects! We've also switched testing from Travis to GitHub Actions, and coverage from Coveralls to CodeCov.
check_tucker_rank
tensordot in all backends
And many other small improvements!