# llm-llamafile

Apache 2.0 licensed.

Access llamafile localhost models via LLM
## Installation

Install this plugin in the same environment as LLM.

```bash
llm install llm-llamafile
```
## Usage

Make sure you have a llamafile running on localhost, serving an OpenAI-compatible API endpoint on port 8080.
You can then use `llm` to interact with that model like so:

```bash
llm -m llamafile "3 neat characteristics of a pelican"
```
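Under the hood, the model is served over llamafile's OpenAI-compatible API, so you can also talk to it directly without the plugin. This is a minimal sketch, not part of the plugin: it assumes the default port 8080 and the standard `/v1/chat/completions` path, and the `build_chat_request` and `chat` helpers are invented here for illustration:

```python
import json
import urllib.request

# llamafile serves an OpenAI-compatible API on port 8080 by default.
BASE_URL = "http://localhost:8080/v1"


def build_chat_request(prompt, model="llamafile"):
    """Build the JSON body for an OpenAI-style chat completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt):
    """POST the prompt to the local llamafile server, return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]


# chat("3 neat characteristics of a pelican") would return the model's
# reply, but only with a llamafile actually running on localhost:8080.
```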
## Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

```bash
cd llm-llamafile
python3 -m venv venv
source venv/bin/activate
```
Now install the dependencies and test dependencies:

```bash
llm install -e '.[test]'
```
To run the tests:

```bash
pytest
```
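As an illustration of the kind of unit test pytest will collect, here is a hypothetical, self-contained example; the `parse_chat_response` helper is invented for this sketch and is not part of the plugin's actual test suite:

```python
import json


def parse_chat_response(raw):
    """Extract the assistant reply from an OpenAI-style response body."""
    return json.loads(raw)["choices"][0]["message"]["content"]


def test_parse_chat_response():
    # A minimal OpenAI-style chat completion response.
    raw = json.dumps(
        {"choices": [{"message": {"role": "assistant", "content": "ok"}}]}
    )
    assert parse_chat_response(raw) == "ok"
```

Any function whose name starts with `test_` in a file matching `test_*.py` is discovered and run automatically when you invoke `pytest`.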