lightseq

LightSeq: A High Performance Library for Sequence Processing and Generation

lightseq - Support HIP

Published by neopro12 almost 2 years ago

In the hip_dev branch, LightSeq supports both a CUDA backend and a HIP backend (the HIP backend currently supports training only). Under the HIP backend, the LightSeq transformer achieves a speedup of about 7% compared with the Fairseq transformer. LightSeq HIP supports multiple NLP models, such as Transformer, BERT, GPT, etc. No modifications to Python training code are required. More information about LightSeq HIP can be found at https://github.com/bytedance/lightseq/blob/hip_dev/README_HIP.md

lightseq - Release 3.0.1

Published by godweiyang almost 2 years ago

What's Changed

Full Changelog: https://github.com/bytedance/lightseq/compare/v3.0.0...v3.0.1

lightseq - Release 3.0.0

Published by godweiyang almost 2 years ago

It's been a long time since our last release (v2.2.0). For the past year, we have focused on int8 quantization.

In this release, LightSeq supports int8 quantized training and inference. Compared with PyTorch QAT, LightSeq int8 training achieves a 3x speedup without any performance loss. Compared with the previous LightSeq fp16 inference, the int8 engine achieves a speedup of up to 1.7x.

The LightSeq int8 engine supports multiple models, such as Transformer, BERT, GPT, etc. For int8 training, users only need to enable quantization mode on the model via model.apply(enable_quant). For int8 inference, users only need to use QuantTransformer instead of the fp16 Transformer, as sketched below.
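
A minimal sketch of both paths, assuming a model built from LightSeq layers. The import path for enable_quant, the QuantTransformer constructor arguments, and the file name are assumptions; the repository's examples show the exact usage.

    # Sketch only: the import path for enable_quant is an assumption.
    from lightseq.training import enable_quant

    model = build_model()      # hypothetical helper returning a LightSeq-based model
    model.apply(enable_quant)  # switch LightSeq modules into int8 quantized training
    # ... run the usual training loop ...

    # int8 inference: swap QuantTransformer in for the fp16 Transformer.
    import lightseq.inference
    qmodel = lightseq.inference.QuantTransformer(
        "quant_model.hdf5",  # hypothetical path to an exported quantized model
        max_batch_size=8,
    )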

Other changes in this release include support for more models such as MoE, bug fixes, performance improvements, etc.

lightseq - Release 2.2.0

Published by Taka152 almost 3 years ago

Inference

Support more multi-language models #209

Fixes

Fix inference error on HDF5 #208
Fix training error when batch_size=1 #192
Other minor fixes: #205 #202 #193

lightseq - Release 2.1.3

Published by neopro12 about 3 years ago

This version contains several features and bug fixes.

Training

Relax the restriction on layer norm hidden size #137 #161
Support inference during training for Transformer #141 #146 #147

Inference

Add inference support and examples for BERT #145

Fixes

Fix save/load for training with PyTorch #139
Fix positional embedding index bug #144

lightseq - Release 2.1.0

Published by Taka152 over 3 years ago

This version contains several features and bug fixes.

Training

Support BertEncoder #116
Support torch amp and apex amp #100

Inference

Support big models like gpt2-large and bart-large #82

Fixes

Fix Adam bug when param size < 1024 #98
Fix training compilation failure on CUDA < 11 #80

lightseq - Release 2.0.2

Published by Taka152 over 3 years ago

[inference] Fix warp reduce bug. #74

lightseq - Release 2.0.1

Published by neopro12 over 3 years ago

Merge the training and inference code.
Reorganize the docs and README.

lightseq - Release 2.0.0

Published by neopro12 over 3 years ago

It's been a long time since our last release (v1.2.0). For the past six months, we have focused on training efficiency.

In this release, LightSeq supports fast training for models in the Transformer family!

We provide highly optimized custom operators for PyTorch and TensorFlow, which cover the entire training process for Transformer-based models. Users of LightSeq can use these operators to build their own models with efficient computation.
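
As a concrete sketch, building a single encoder layer from a LightSeq custom operator might look like the following. The class name LSTransformerEncoderLayer and its config fields follow the repository's published examples, but treat the exact signature as an assumption and verify it against the docs.

    # Sketch only: config fields may differ across LightSeq versions.
    from lightseq.training import LSTransformerEncoderLayer

    config = LSTransformerEncoderLayer.get_config(
        max_batch_tokens=4096,        # upper bound on tokens per batch
        max_seq_len=256,              # upper bound on sequence length
        hidden_size=1024,
        intermediate_size=4096,
        nhead=16,
        attn_prob_dropout_ratio=0.1,
        activation_dropout_ratio=0.1,
        hidden_dropout_ratio=0.1,
        pre_layer_norm=True,
        fp16=True,
        local_rank=0,
    )
    enc_layer = LSTransformerEncoderLayer(config)  # fused CUDA layer, drop-in for a PyTorch encoder layer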

In addition, we integrate our custom operators into popular training libraries like Fairseq, Hugging Face, and NeurST, enabling a 1.5x-3x end-to-end speedup compared to the native versions.

With only a small amount of code, you can enjoy the excellent performance provided by LightSeq. Try it now!

Training

  • Support lightseq-train to accelerate Fairseq training, including an optimized Transformer model, Adam, and label-smoothed loss
  • Hugging Face BERT training example
  • NeurST Transformer training example for TensorFlow users

Inference

  • Support GPT Python wrapper
  • Inference APIs moved to lightseq.inference

This release includes an API change for inference: all inference APIs have moved to lightseq.inference. For example, use import lightseq.inference and model = lightseq.inference.Transformer("$PB_PATH", max_batch_size).
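
An end-to-end sketch under the new API. The import and constructor are confirmed by this release note; the infer() call, its input format, and the file name are assumptions.

    # Sketch only: infer() and its input format are assumptions -- check the docs.
    import lightseq.inference

    model = lightseq.inference.Transformer("transformer.pb", max_batch_size=8)  # hypothetical .pb path
    input_ids = [[4, 15, 77, 2]]         # hypothetical batch of token ids
    output_ids = model.infer(input_ids)  # assumed decoding entry point
    print(output_ids)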

lightseq - Release 1.2.0

Published by neopro12 almost 4 years ago

Support Python API and multilingual NMT.

lightseq - Release 1.1.0

Published by neopro12 almost 4 years ago

Support sampling/diverse beam search and VAE.

lightseq - Release 1.0.0

Published by neopro12 almost 5 years ago

Byseqlib (the original name of LightSeq) is a high-performance inference library for SOTA NLU/NLG models.

lightseq - Initial Release

Published by neopro12 almost 5 years ago

Provide test model weights and inputs.
