llm-mlx-llama

Run Llama 2 using MLX on macOS

MIT License

Using MLX on macOS to run Llama 2. Highly experimental.

Installation

Install this plugin in the same environment as LLM.

llm install https://github.com/simonw/llm-mlx-llama/archive/refs/heads/main.zip
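To confirm the plugin was picked up, you can list the plugins LLM knows about:

```shell
# List installed plugins; llm-mlx-llama should appear in the output
llm plugins
```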

Usage

Download Llama-2-7b-chat.npz and tokenizer.model from the mlx-llama/Llama-2-7b-chat-mlx repository on Hugging Face.
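One way to fetch both files is with the huggingface-cli tool; this is a sketch that assumes you have huggingface_hub installed (pip install huggingface_hub):

```shell
# Download the weights and tokenizer into the current directory
# (assumes huggingface_hub is installed: pip install huggingface_hub)
huggingface-cli download mlx-llama/Llama-2-7b-chat-mlx \
  Llama-2-7b-chat.npz tokenizer.model \
  --local-dir .
```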

Pass paths to those files as options when you run a prompt:

llm -m mlx-llama \
  'five great reasons to get a pet pelican:' \
  -o model Llama-2-7b-chat.npz \
  -o tokenizer tokenizer.model

Chat mode and continuing a conversation are not yet supported.

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-mlx-llama
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

llm install -e '.[test]'

To run the tests:

pytest