ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
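As a quick illustration of the HuggingFace integration mentioned above, the sketch below uses ipex-llm's documented drop-in `AutoModelForCausalLM` wrapper with `load_in_4bit=True` to apply INT4 weight quantization at load time and run generation on an Intel CPU. The checkpoint id and prompt are placeholder choices, not taken from this page.

```python
# Minimal sketch: load a HuggingFace checkpoint through ipex-llm's drop-in
# AutoModel wrapper with 4-bit weight quantization, then generate on CPU.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # drop-in replacement

model_id = "Qwen/Qwen2-1.5B-Instruct"  # example checkpoint (assumption)

# load_in_4bit=True applies ipex-llm's INT4 weight quantization at load time
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# On an Intel GPU (iGPU, Arc, Flex, Max), move the model to the XPU device:
# model = model.to("xpu")

prompt = "What is IPEX-LLM?"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```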

Apache-2.0 License

Downloads: 61.8K
Stars: 6.4K
Committers: 118

Commit Statistics            Past Year    All Time
Total Commits                2,060        3,337
Total Committers             63           130
Avg. Commits Per Committer   32.70        25.67
Bot Commits                  0            0

Issue Statistics         Past Year    All Time
Total Pull Requests      746          827
Merged Pull Requests     642          710
Total Issues             299          319
Time to Close Issues     17 days      25 days
Package Rankings
Top 33.85% on PyPI.org
Top 6.61% on Proxy.golang.org