Efficient Triton Kernels for LLM Training
BSD 2-Clause License
Installation | Getting Started | Examples | APIs | Structure | Contributing | Acknowledgement
Liger Kernel is a collection of Triton kernels designed specifically for LLM training. It can effectively increase multi-GPU training throughput by 20% and reduce memory usage by 60%. We have implemented Hugging Face-compatible RMSNorm, RoPE, SwiGLU, CrossEntropy, FusedLinearCrossEntropy, and more to come. The kernels work out of the box with Flash Attention, PyTorch FSDP, and Microsoft DeepSpeed. We welcome contributions from the community to gather the best kernels for LLM training.
With one line of code, Liger Kernel can increase throughput by more than 20% and reduce memory usage by 60%, thereby enabling longer context lengths, larger batch sizes, and massive vocabularies.
(Benchmark plots: training speedup and memory reduction relative to the Hugging Face baseline.)
Note:
- Benchmark conditions: LLaMA 3-8B, Batch Size = 8, Data Type = bf16, Optimizer = AdamW, Gradient Checkpointing = True, Distributed Strategy = FSDP1 on 8 A100s.
- Hugging Face models start to OOM at a 4K context length, whereas Hugging Face + Liger Kernel scales up to 16K.
Example | Description | Lightning Studio |
---|---|---|
Hugging Face Trainer | Train LLaMA 3-8B ~20% faster with over 40% memory reduction on the Alpaca dataset using 4 A100s with FSDP | TBA |
Lightning Trainer | Increase throughput by 15% and reduce memory usage by 40% with LLaMA 3-8B on the MMLU dataset using 8 A100s with DeepSpeed ZeRO3 | TBA |
Example | Description | Lightning Studio |
---|---|---|
Medusa Multi-head LLM (Retraining Phase) | Reduce memory usage by 80% with 5 LM heads and improve throughput by 40% using 8 A100s with FSDP | TBA |
- torch >= 2.1.2
- triton >= 2.3.0
- transformers >= 4.x: Required if you plan to use the transformers model patching APIs. The specific model you are working with will dictate the minimum version of transformers.

Note: Our kernels inherit the full spectrum of hardware compatibility offered by Triton.
To install the stable version:
$ pip install liger-kernel
To install the nightly version:
$ pip install liger-kernel-nightly
To install from source:
git clone https://github.com/linkedin/Liger-Kernel.git
cd Liger-Kernel
pip install -e .
# or if using transformers
pip install -e .[transformers]
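After installation, a quick import check confirms the package is visible to Python. This is a minimal sketch; it assumes the transformers extra is installed so that the patching APIs are importable.

```python
# Minimal post-install sanity check (assumes the [transformers] extra is installed).
from liger_kernel.transformers import AutoLigerKernelForCausalLM, apply_liger_kernel_to_llama

print("Liger Kernel transformers APIs imported successfully.")
```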
There are a couple of ways to apply Liger kernels, depending on the level of customization required.
Using the AutoLigerKernelForCausalLM is the simplest approach, as you don't have to import a model-specific patching API. If the model type is supported, the modeling code will be automatically patched using the default settings.
from liger_kernel.transformers import AutoLigerKernelForCausalLM
# This AutoModel wrapper class automatically monkey-patches the
# model with the optimized Liger kernels if the model is supported.
model = AutoLigerKernelForCausalLM.from_pretrained("path/to/some/model")
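The patched model behaves like any other Hugging Face causal LM. Below is a minimal forward-pass sketch; the model path is a placeholder, and the tokenizer and bfloat16 settings are illustrative assumptions, not requirements.

```python
import torch
from transformers import AutoTokenizer
from liger_kernel.transformers import AutoLigerKernelForCausalLM

# Placeholder path; any supported causal LM checkpoint is used the same way.
model = AutoLigerKernelForCausalLM.from_pretrained(
    "path/to/some/model", torch_dtype=torch.bfloat16
).cuda()
tokenizer = AutoTokenizer.from_pretrained("path/to/some/model")

inputs = tokenizer("Liger Kernel fuses Triton ops for LLM training.", return_tensors="pt").to("cuda")
# Standard causal-LM forward pass; passing labels triggers the (Liger-patched) loss computation.
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)
```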
Using the patching APIs, you can swap out the default Hugging Face layers for their optimized Liger kernel equivalents.
import transformers
from liger_kernel.transformers import apply_liger_kernel_to_llama
# 1a. Adding this line automatically monkey-patches the model with the optimized Liger kernels
apply_liger_kernel_to_llama()
# 1b. You could alternatively specify exactly which kernels are applied
apply_liger_kernel_to_llama(
rope=True,
swiglu=True,
cross_entropy=True,
fused_linear_cross_entropy=False,
rms_norm=False
)
# 2. Instantiate patched model
model = transformers.AutoModelForCausalLM.from_pretrained("path/to/llama/model")
You can also compose your own models from individual Liger kernels.
from liger_kernel.transformers import LigerFusedLinearCrossEntropyLoss
import torch.nn as nn
import torch
model = nn.Linear(128, 256).cuda()
# fuses linear + cross entropy layers together and performs chunk-by-chunk computation to reduce memory
loss_fn = LigerFusedLinearCrossEntropyLoss()
input = torch.randn(4, 128, requires_grad=True, device="cuda")
target = torch.randint(256, (4, ), device="cuda")
loss = loss_fn(model.weight, input, target)
loss.backward()
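Other kernels can be dropped in the same way. The sketch below uses LigerRMSNorm; it assumes the module is constructed from the hidden size and an epsilon, mirroring the Hugging Face RMSNorm layers it replaces.

```python
import torch
from liger_kernel.transformers import LigerRMSNorm

# Assumed constructor: hidden size plus epsilon (mirrors Hugging Face RMSNorm layers).
norm = LigerRMSNorm(hidden_size=128, eps=1e-6).cuda()

hidden_states = torch.randn(4, 16, 128, device="cuda", requires_grad=True)
output = norm(hidden_states)  # same shape as the input
output.sum().backward()
```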
- ops/: Core Triton operations.
- transformers/: PyTorch nn.Module implementations built on Triton operations, compliant with the transformers API.
- transformers/ (test suite): Correctness tests for the Triton-based layers.
- convergence/ (test suite): Patches Hugging Face models with all kernels, runs multiple iterations, and compares weights, logits, and loss layer-by-layer.
- benchmark/: Execution time and memory benchmarks compared to Hugging Face layers.

AutoModel Variant | API |
---|---|
AutoModelForCausalLM | liger_kernel.transformers.AutoLigerKernelForCausalLM |
Model | API | Supported Operations |
---|---|---|
LLaMA 2 & 3 | liger_kernel.transformers.apply_liger_kernel_to_llama | RoPE, RMSNorm, SwiGLU, CrossEntropyLoss, FusedLinearCrossEntropy |
Mistral | liger_kernel.transformers.apply_liger_kernel_to_mistral | RoPE, RMSNorm, SwiGLU, CrossEntropyLoss, FusedLinearCrossEntropy |
Mixtral | liger_kernel.transformers.apply_liger_kernel_to_mixtral | RoPE, RMSNorm, SwiGLU, CrossEntropyLoss, FusedLinearCrossEntropy |
Gemma1 | liger_kernel.transformers.apply_liger_kernel_to_gemma | RoPE, RMSNorm, GeGLU, CrossEntropyLoss, FusedLinearCrossEntropy |
Gemma2 | liger_kernel.transformers.apply_liger_kernel_to_gemma2 | RoPE, RMSNorm, GeGLU, CrossEntropyLoss |
Qwen2 & Qwen2.5 | liger_kernel.transformers.apply_liger_kernel_to_qwen2 | RoPE, RMSNorm, SwiGLU, CrossEntropyLoss, FusedLinearCrossEntropy |
Qwen2-VL | liger_kernel.transformers.apply_liger_kernel_to_qwen2_vl | RMSNorm, LayerNorm, SwiGLU, CrossEntropyLoss, FusedLinearCrossEntropy |
Phi3 & Phi3.5 | liger_kernel.transformers.apply_liger_kernel_to_phi3 | RoPE, RMSNorm, SwiGLU, CrossEntropyLoss, FusedLinearCrossEntropy |
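Every entry in the table is used the same way as the LLaMA example above: import the model-specific function, call it before instantiating the model, and optionally toggle individual kernels. A minimal sketch for Mistral (placeholder model path):

```python
import transformers
from liger_kernel.transformers import apply_liger_kernel_to_mistral

# Patch the Mistral modeling code before the model is instantiated.
apply_liger_kernel_to_mistral()
model = transformers.AutoModelForCausalLM.from_pretrained("path/to/mistral/model")
```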
Kernel | API |
---|---|
RMSNorm | liger_kernel.transformers.LigerRMSNorm |
LayerNorm | liger_kernel.transformers.LigerLayerNorm |
RoPE | liger_kernel.transformers.liger_rotary_pos_emb |
SwiGLU | liger_kernel.transformers.LigerSwiGLUMLP |
GeGLU | liger_kernel.transformers.LigerGEGLUMLP |
CrossEntropy | liger_kernel.transformers.LigerCrossEntropyLoss |
FusedLinearCrossEntropy | liger_kernel.transformers.LigerFusedLinearCrossEntropyLoss |
KLDivergence | liger_kernel.transformers.LigerKLDIVLoss |
JSD | liger_kernel.transformers.LigerJSD |
Kernel | API |
---|---|
Embedding | liger_kernel.transformers.experimental.LigerEmbedding |
Matmul int2xint8 | liger_kernel.transformers.experimental.matmul |
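As with the module-level example above, the loss kernels are intended as drop-in replacements for their PyTorch counterparts. Below is a minimal sketch using LigerCrossEntropyLoss, assuming it follows the torch.nn.CrossEntropyLoss calling convention of (logits, targets).

```python
import torch
from liger_kernel.transformers import LigerCrossEntropyLoss

# Assumed to follow the torch.nn.CrossEntropyLoss convention:
# (N, vocab_size) logits and (N,) integer targets.
loss_fn = LigerCrossEntropyLoss()

logits = torch.randn(4, 32000, device="cuda", requires_grad=True)
targets = torch.randint(32000, (4,), device="cuda")
loss = loss_fn(logits, targets)
loss.backward()
```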
Note: Reported speedups and memory reductions are with respect to the LLaMA 3-8B Hugging Face layer implementations. All models use 4K hidden size and 4K sequence length and are evaluated based on memory usage and wall time for the forward+backward pass on a single NVIDIA A100 80G GPU using small batch sizes. Liger kernels exhibit more efficient scaling to larger batch sizes, detailed further in the Benchmark folder.
Since Liger Kernel is 100% Triton-based, it works seamlessly with torch.compile. In the following example, Liger Kernel further optimizes the model on top of torch.compile, reducing memory usage by more than half.
Configuration | Throughput (tokens/sec) | Memory Reserved (GB) |
---|---|---|
Torch Compile | 3780 | 66.4 |
Torch Compile + Liger Kernel | 3702 | 31.0 |
Note:
- Benchmark conditions: LLaMA 3-8B, Batch Size = 8, Seq Len = 4096, Data Type = bf16, Optimizer = AdamW, Gradient Checkpointing = True, Distributed Strategy = FSDP1 on 8 A100s.
- Tested on torch 2.5.0.dev20240731+cu118.
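A minimal sketch of combining the two (placeholder model path; the dtype is an illustrative assumption): apply the Liger patches first, then wrap the patched model with torch.compile as usual.

```python
import torch
import transformers
from liger_kernel.transformers import apply_liger_kernel_to_llama

# Patch the LLaMA modeling code first, then compile the patched model.
apply_liger_kernel_to_llama()
model = transformers.AutoModelForCausalLM.from_pretrained(
    "path/to/llama/model", torch_dtype=torch.bfloat16
).cuda()
model = torch.compile(model)
```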
We referenced or used the following projects:
# | Project | Description | Location | License |
---|---|---|---|---|
1 | Unsloth | calculate_settings to determine block size and warp; we reuse it for Norm and MLP | Liger Kernel Utils | Apache |
2 | Unsloth | We modified and added dW calculation on top of Unsloth implementation | Liger Kernel RMS Norm | Apache |
3 | Triton tutorial | We modified on top of triton tutorials | Liger Kernel RMS Norm | MIT |
4 | tiny shakespeare dataset | We use tiny shakespeare dataset to conduct convergence test on mini model | Liger Kernel Convergence | N/A |
5 | Efficient Cross Entropy | We use the idea of gradient-in-forward and chunking | Liger Kernel Linear Cross Entropy | MIT |
6 | Flash attn | We take many optimization ideas from the work, such as tiling and recomputation | | BSD |
7 | AutoAWQ | We reference the design of automodel | Liger Kernel Auto Model | MIT |
8 | llm.c | We reference the design of end-to-end testing | Liger Kernel Convergence Tests | MIT |
Many thanks to the contributors to these projects for their invaluable work that helped make Liger possible.
This project is licensed under the BSD 2-Clause License (see LICENSE for details).
It also includes components from projects licensed under:
- Apache License 2.0 (see LICENSE-APACHE-2.0 for details).
- MIT License (see LICENSE-MIT-AutoAWQ for details).
- MIT License (see LICENSE-MIT-Efficient Cross Entropy for details).
- MIT License (see LICENSE-MIT-llmc for details).
- MIT License (see LICENSE-MIT-triton for details).

BibLaTeX entry:
@software{liger2024,
title = {Liger-Kernel: Efficient Triton Kernels for LLM Training},
author = {Hsu, Pin-Lun and Dai, Yun and Kothapalli, Vignesh and Song, Qingquan and Tang, Shao and Zhu, Siyu},
url = {https://github.com/linkedin/Liger-Kernel},
year = {2024}
}