🚀 Easy, open-source LLM finetuning with one-line commands, seamless cloud integration, and popular optimization frameworks. ✨
GPL-3.0 License
Try here
Join our Discord
Simplifine streamlines LLM finetuning on any dataset or model with one simple command, handling all infrastructure, job management, cloud storage, and inference.
🚀 Easy Cloud-Based LLM Finetuning: Fine-tune any LLM with just one command.
☁️ Seamless Cloud Integration: Automatically manage downloading, storing, and running models directly from the cloud.
🤖 Built-in AI Assistance: Get help with hyperparameter selection, synthetic dataset generation, and data quality checks.
🔄 On-Device to Cloud Switching: Add a simple decorator to transition from local to cloud-based training (see the toy sketch after this list).
⚡ Auto-Optimization: Automatically optimizes model and data parallelization through DeepSpeed and FSDP.
📊 Custom Evaluation Support: Use the built-in LLM evaluation functions or import your own custom evaluation metrics.
💬 Community Support: Ask any support questions on the Simplifine community Discord.
🏆 Trusted by Leading Institutions: Research labs at the University of Oxford rely on Simplifine for their LLM finetuning needs.
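The on-device-to-cloud decorator works roughly like the toy sketch below. Note that `cloud_train` and its `gpu_type` argument are illustrative stand-ins, not Simplifine's actual API; see docs.simplifine.com for the real decorator and its arguments.

```python
# Toy illustration of the on-device-to-cloud pattern described above.
# `cloud_train` is a stand-in, NOT Simplifine's actual decorator.
import functools

def cloud_train(gpu_type="a100"):
    """Toy stand-in: the real decorator would ship the wrapped training
    function to a cloud worker instead of running it locally."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            # In the real library, this is where the cloud job would be submitted.
            print(f"[cloud] would run {fn.__name__} on a {gpu_type} worker")
            return fn(*args, **kwargs)
        return inner
    return wrap

@cloud_train(gpu_type="a100")  # remove the decorator to train locally
def finetune():
    print("training loop runs here")

finetune()
```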
Get started here >
Find our full documentation at docs.simplifine.com.
Installing from PyPI
pip install simplifine-alpha
You can also install directly from GitHub using the following command:
pip install git+https://github.com/simplifine-llm/Simplifine.git
We are looking for contributors! Join the contributors thread on our Discord.
Simplifine is licensed under the GNU General Public License Version 3. See the LICENSE file for more details.
For all feature requests, bugs, and support, join our Discord!
If you have any suggestions for new features you'd like to see implemented, please raise an issue; we will work hard to make it happen ASAP!
For any other questions, feel free to contact us at [email protected].
We currently support both DistributedDataParallel (DDP) and ZeRO from DeepSpeed.
TL;DR: Use DDP when the full model (plus gradients and optimizer states) fits on a single GPU; use ZeRO when it does not.
Longer Version:
DDP: Distributed Data Parallel (DDP) creates a replica of the model on each processor (GPU). For example, imagine 8 GPUs, each fed a single data point; this makes an effective batch size of 8. Each replica computes gradients on its own data, and the gradients are averaged across devices so all replicas stay in sync. DDP speeds up training by parallelizing the data-feeding process. However, DDP fails if a replica cannot fit in GPU memory. Remember, the memory hosts not only parameters but also gradients and optimizer states.
ZeRO: ZeRO is a powerful optimization developed by DeepSpeed and comes in different stages (1, 2, and 3). Each stage shards a different part of the training state (optimizer states, then gradients, then parameters). This is really useful if a model cannot fit in GPU memory. ZeRO also supports offloading to the CPU, making even more room for training larger models.
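To make the difference concrete, here is a minimal sketch of how each strategy is typically set up with plain PyTorch and DeepSpeed APIs. This is illustrative only, not Simplifine's internal code; the toy model size, batch size, and learning rate are placeholder assumptions.

```python
# Minimal sketches of DDP vs. ZeRO setup (illustrative only --
# Simplifine wires this up for you).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
import deepspeed

def ddp_setup(model: torch.nn.Module) -> DDP:
    """DDP: one full model replica per GPU. Launch with
    `torchrun --nproc_per_node=8 train.py` for the 8-GPU example above."""
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    return DDP(model.cuda(local_rank), device_ids=[local_rank])

def zero_setup(model: torch.nn.Module):
    """ZeRO stage 3 with CPU offload: optimizer states, gradients, and
    parameters are sharded across GPUs. Launch with `deepspeed train.py`."""
    ds_config = {
        "train_micro_batch_size_per_gpu": 1,
        "zero_optimization": {
            "stage": 3,
            "offload_optimizer": {"device": "cpu"},  # uses the cpu_adam op (see the issue below)
            "offload_param": {"device": "cpu"},
        },
        "optimizer": {"type": "Adam", "params": {"lr": 1e-5}},
    }
    engine, optimizer, _, _ = deepspeed.initialize(
        model=model, model_parameters=model.parameters(), config=ds_config
    )
    return engine, optimizer
```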
Issue: RuntimeError: Error building extension 'cpu_adam'
This error occurs when python-dev is not installed and ZeRO is using offload. To resolve it, try:
# Try sudo apt-get install python3-dev if the following fails.
apt-get install python-dev # for Python 2.x installs
apt-get install python3-dev # for Python 3.x installs
See this link for more details.
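After installing the headers, you can check which DeepSpeed extensions (including cpu_adam) can compile in your environment with DeepSpeed's bundled report utility:

ds_report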