🔮 A refreshing functional take on deep learning, compatible with your favorite libraries
MIT License
Published by svlandeg 4 months ago
- Restrict the `numpy` pin: NumPy v2.0 isn't binary compatible with v1 (understandably), and Thinc builds against NumPy.
- Update the `nbconvert` pin.
- Update the `typing_extensions` pin for Python 3.7.

@honnibal, @ines, @svlandeg
Published by danieldk 6 months ago
The main new feature of Thinc v9 is support for learning rate schedules that can take the training dynamics into account. For example, the new `plateau.v1` schedule scales the learning rate when no progress has been found after a given number of evaluation steps. Another visible change is that `AppleOps` is now part of Thinc, so it is no longer necessary to install `thinc-apple-ops` to use the AMX units on Apple Silicon.
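A minimal sketch of the new schedule, assuming the v9 `thinc.api` exports `plateau` and `constant` and that v9 schedules are callables fed the current step plus training dynamics such as the last evaluation score; the numbers and keyword details are illustrative:

```python
from thinc.api import constant, plateau

# Halve the learning rate (scale=0.5) each time the score has failed to
# improve for two consecutive evaluations (max_patience=2).
schedule = plateau(2, 0.5, constant(0.001))

# last_score is (step the score was measured at, score).
lr = schedule(step=10, last_score=(10, 0.85))
```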
- Add the `plateau.v1` schedule (#842). This schedule scales the learning rate if training was found to be stagnant for a given period.
- `thinc-apple-ops` is integrated into Thinc (#927). Starting with this version of Thinc, it is no longer necessary to install `thinc-apple-ops`.
- New `Schedule` class (#804).
- `thinc.backends.linalg` has been removed (#742). The same functionality is provided by implementations in BLAS that are better tested and more performant.
- `thinc.extra.search` has been removed (#743). The beam search functionality in this module was strongly coupled to the spaCy transition parser and has therefore moved to spaCy in v4.

@adrianeboyd, @danieldk, @honnibal, @ines, @kadarakos, @shadeMe, @svlandeg
Published by danieldk 8 months ago
- Fix the `cupy.cublas` import (#921).

@danieldk, @honnibal, @ines, @svlandeg
Published by danieldk 10 months ago
- Add the `ParametricAttention_v2` layer, which adds support for key transformations (#913).
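A minimal construction sketch, assuming the layer is exposed via `thinc.api` and that the key transform can be any `Floats2d -> Floats2d` layer; the `Gelu` transform and output width below are illustrative, not prescribed by the release:

```python
from thinc.api import Gelu, ParametricAttention_v2

# Parametric attention over a ragged batch of token vectors; the new
# key_transform argument transforms the keys before attention weights
# are computed.
attn = ParametricAttention_v2(key_transform=Gelu(), nO=64)
```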
@danieldk, @honnibal, @ines, @svlandeg
Published by adrianeboyd about 1 year ago
Updates and binary wheels for Python 3.12.
@adrianeboyd, @honnibal, @ines, @svlandeg
Published by adrianeboyd about 1 year ago
To improve loading times and reduce conflicts, MXNet and TensorFlow are no longer imported automatically (#890).
MXNet and TensorFlow support needs to be enabled explicitly. Previously, MXNet and TensorFlow were imported automatically if they were available in the current environment.
To enable MXNet:
from thinc.api import enable_mxnet
enable_mxnet()
To enable TensorFlow:

```python
from thinc.api import enable_tensorflow

enable_tensorflow()
```
With spaCy CLI commands you can provide this custom code using `-c code.py`. For training, use `spacy train -c code.py`, and to package your code with your pipeline, use `spacy package -c code.py`.
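For example, a minimal `code.py` (the filename and contents here are illustrative) might do nothing more than enable the wrapper before training or packaging:

```python
# code.py: custom code passed to spaCy via -c.
# Enabling TensorFlow here makes TensorFlow-backed Thinc layers
# available while the pipeline is trained or packaged.
from thinc.api import enable_tensorflow

enable_tensorflow()
```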
Future deprecation warning: built-in MXNet and TensorFlow support will be removed in Thinc v9. If you need MXNet or TensorFlow support in the future, you can transition to using a custom copy of the current `MXNetWrapper` or `TensorFlowWrapper` in your package or project.
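For reference, a sketch of the current wrapper usage that such a vendored copy would preserve (assumes TensorFlow is installed and enabled as above; the Keras model is illustrative):

```python
import tensorflow as tf
from thinc.api import TensorFlowWrapper

# Wrap a Keras model as a Thinc Model so it composes with Thinc layers.
tf_model = tf.keras.Sequential([tf.keras.layers.Dense(8)])
model = TensorFlowWrapper(tf_model)
```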
@adrianeboyd, @danieldk, @honnibal, @ines, @svlandeg
Published by adrianeboyd about 1 year ago
- `reduce_{max,mean,sum}` (#882).
- `NumpyOps`/`CupyOps.asarray` (#897).

@adrianeboyd, @danieldk, @honnibal, @ines, @svlandeg
Published by adrianeboyd about 1 year ago
- Migrate from `distutils` to `setuptools`/`sysconfig` (#888).

@adrianeboyd, @Ankush-Chander, @danieldk, @honnibal, @ines, @svlandeg
Published by adrianeboyd over 1 year ago
- Implement `pad` as a CUDA kernel (#860).
- `unflatten` (#861).
- `cupy` kernels (#870).

@adrianeboyd, @danieldk, @honnibal, @ines, @shadeMe, @svlandeg
Published by danieldk over 1 year ago
- `Model.begin_update` (#858).

@danieldk, @honnibal, @ines
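The specific change in #858 isn't recoverable from the note above; for context, `Model.begin_update` is Thinc's standard forward-pass entry point, returning the output together with a callback for the backward pass. A minimal sketch (layer choice and shapes illustrative):

```python
import numpy
from thinc.api import Linear

model = Linear(nO=2, nI=4)
model.initialize(X=numpy.zeros((1, 4), dtype="f"))

X = numpy.random.uniform(size=(8, 4)).astype("f")
Y, backprop = model.begin_update(X)   # forward pass
dX = backprop(numpy.ones_like(Y))     # backward pass, given dY
```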
Published by adrianeboyd over 1 year ago
- Add `premap_ids.v1` layer for mapping from ints to ints (#815); see the sketch below.
- `Dockerfile` updates (#843, #844, #845).

@adrianeboyd, @danieldk, @essenmitsosse, @honnibal, @ines, @kadarakos, @patjouk, @polm, @svlandeg
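A minimal sketch of the `premap_ids.v1` layer added above, assuming it is exposed via `thinc.api` and takes a dict-like mapping table plus a default for unseen IDs; treat the argument names and dtype as indicative rather than authoritative:

```python
import numpy
from thinc.api import premap_ids

# Remap raw IDs to a compact range; IDs missing from the table fall
# back to the default.
mapper = premap_ids({10: 0, 20: 1, 30: 2}, default=0)
remapped = mapper.predict(numpy.asarray([10, 30, 99], dtype="uint64"))
```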
Published by adrianeboyd almost 2 years ago
- Add `with_flatten.v2` layer with symmetric input/output types (#821).
- `typing_extensions` v4.4.x for Python 3.6 and 3.7 (#833).

@adrianeboyd, @albertvillanova, @danieldk, @essenmitsosse, @honnibal, @ines, @shadchin, @shadeMe, @svlandeg
Published by adrianeboyd almost 2 years ago
- Add `SparseLinear.v2`, to fix indexing issues (#754).
- Add `TorchScriptWrapper_v1` (#802).
- `PyTorchShim` (#796).
- `packaging` requirement (#799).
- `reduce_first/last` (#807).
- Change `CupyOps.asarray` to always copy cupy arrays to the current device (#812).
- `Ops.asarray*` (#819).

@adrianeboyd, @danieldk, @frobnitzem, @honnibal, @ines, @richardpaulhudson, @ryndaniels, @shadeMe, @svlandeg
Published by adrianeboyd almost 2 years ago
- Make `__all__` static to support type checking (#780).

@adrianeboyd, @honnibal, @ines, @rmitsch
Published by adrianeboyd almost 2 years ago
- Update `wrapt` to v1.14.1.

@adrianeboyd, @honnibal, @ines
Published by adrianeboyd about 2 years ago
- `Ops.alloc` (from #779).

@adrianeboyd, @honnibal, @ines, @svlandeg
Published by adrianeboyd about 2 years ago
- `fix_random_seed` entry point in `setup.cfg`.

@adrianeboyd, @honnibal, @ines, @pawamoy, @svlandeg
Published by adrianeboyd about 2 years ago
- New CUDA installation extras `cuda116`, `cuda117`, `cuda11x` and `cuda-autodetect`, which uses the new `cupy-wheel` package (#740); see the install note below.
- `fix_random_seed` (#748).
- Restrict `blis` versions to `~=0.7.8` to avoid bugs in BLIS 0.9.0.

@adrianeboyd, @honnibal, @ines, @rmitsch, @svlandeg, @willfrey
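The extras above are standard pip extras, so (assuming a pip-based install) the autodetect variant would be pulled in with, e.g., `pip install 'thinc[cuda-autodetect]'`.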
Published by danieldk about 2 years ago
- Add `with_signpost_interval` layer to support layer profiling with macOS Instruments (#711).
- Add `remap_ids.v2` layer, which allows more types of inputs (#726).
- `argmax` in `maxout` (#702).
- Replace `FloatsType` in `Ops` by a `TypeVar`.
- `Ops.asarrayDf` methods.

@adrianeboyd, @cclauss, @danieldk, @honnibal, @ines, @kadarakos, @polm, @rmitsch, @shadeMe