An all-in-one LLMs Chat UI for Apple Silicon Mac using MLX Framework.
MIT License
Install from PyPI:

pip install chat-with-mlx
Or install from source with a virtual environment:

git clone https://github.com/qnguyen3/chat-with-mlx.git
cd chat-with-mlx
python -m venv .venv
source .venv/bin/activate
pip install -e .
Or install from source with conda:

git clone https://github.com/qnguyen3/chat-with-mlx.git
cd chat-with-mlx
conda create -n mlx-chat python=3.11
conda activate mlx-chat
pip install -e .
Then start the app:

chat-with-mlx
Please check out the guide HERE.
To stop the app, press Control + C in your Terminal.

MLX is an array framework for machine learning research on Apple silicon, brought to you by Apple machine learning research.
Some key features of MLX include:
Familiar APIs: MLX has a Python API that closely follows NumPy. MLX also has fully featured C++, C, and Swift APIs, which closely mirror the Python API. MLX has higher-level packages like mlx.nn and mlx.optimizers with APIs that closely follow PyTorch to simplify building more complex models.
Composable function transformations: MLX supports composable function transformations for automatic differentiation, automatic vectorization, and computation graph optimization.
Lazy computation: Computations in MLX are lazy. Arrays are only materialized when needed.
Dynamic graph construction: Computation graphs in MLX are constructed dynamically. Changing the shapes of function arguments does not trigger slow compilations, and debugging is simple and intuitive.
Multi-device: Operations can run on any of the supported devices (currently the CPU and the GPU).
Unified memory: A notable difference between MLX and other frameworks is the unified memory model. Arrays in MLX live in shared memory. Operations on MLX arrays can be performed on any of the supported device types without transferring data.
I would like to send many thanks to: