llm-llamafile

Access llamafile localhost models via LLM

Installation

Install this plugin in the same environment as LLM.

llm install llm-llamafile
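
You can confirm the installation worked by listing your installed plugins:

llm plugins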

Usage

Make sure you have a llamafile running on localhost, serving an OpenAI-compatible API endpoint on port 8080.
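
If you don't have one running yet, the sketch below shows a typical way to launch one. The filename is a placeholder for whichever llamafile you have downloaded, and the exact flags can vary between llamafile releases:

chmod +x your-model.llamafile          # "your-model.llamafile" is a placeholder
./your-model.llamafile --server --nobrowser --port 8080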

You can then use llm to interact with that model like so:

llm -m llamafile "3 neat characteristics of a pelican"
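
The plugin registers a model named llamafile, so other LLM features work with it too. For example, to start an interactive chat session:

llm chat -m llamafile

If you want to skip the -m option on every call, you can set it as the default model with llm models default llamafile.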

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-llamafile
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

llm install -e '.[test]'

To run the tests:

pytest
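
If a test fails, pytest's standard flags can help narrow it down, for example stopping at the first failure with verbose output:

pytest -x -vv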