A simple example of using MLX for a RAG application running locally on your Apple Silicon device.
An all-in-one LLMs Chat UI for Apple Silicon Mac using MLX Framework.
Easily use and train state-of-the-art late-interaction retrieval methods (ColBERT) in any RAG pipeline.
MLX-Embeddings is a package for running vision and language embedding models locally on your Mac using MLX.
A barebones RAG implementation for Kubernetes, including a local LLM deployment and a vector database.
RAG for local LLMs: chat with PDF/doc/txt files (ChatPDF). A pure native RAG implementation based on a local LLM, an embedding model, and a reranker model.
FastMLX is a high-performance, production-ready API for hosting MLX models.
MLX-VLM is a package for running Vision LLMs locally on your Mac using MLX.
Run Llama 2 using MLX on macOS