LLM plugin for models hosted on Replicate
Apache-2.0 License
New `llm replicate fetch-predictions` command, which fetches all predictions that have been run through Replicate (including for models other than the language models queried using this tool) and stores them in a `replicate_predictions` table in the `logs.db` SQLite database. Documentation here. #11
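Once fetched, the stored predictions can be inspected with any SQLite client. Here is a minimal sketch against a throwaway database: the `replicate_predictions` table name comes from this release, while the `demo.db` filename and `id` column are illustrative assumptions (the real table lives in `logs.db`).

```shell
# Illustrative only: create a stand-in replicate_predictions table,
# then query it the same way you would query the real logs.db.
sqlite3 demo.db "CREATE TABLE IF NOT EXISTS replicate_predictions (id TEXT);"
sqlite3 demo.db "INSERT INTO replicate_predictions (id) VALUES ('p1');"
sqlite3 demo.db "SELECT count(*) FROM replicate_predictions;"
```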
The `replicate-python` library is no longer bundled with this package; it is installed as a dependency instead. #10
Published by simonw about 1 year ago
Support for adding chat models using `llm replicate add ... --chat`. These models will then use the `User: ...\nAssistant:` prompt format and can be used for continued conversations.
This means the new Llama 2 model from Meta can be added like this:
```shell
llm replicate add a16z-infra/llama13b-v2-chat \
  --chat --alias llama2
```
Then:
```shell
llm -m llama2 "Ten great names for a pet pelican"
# output here, then to continue the conversation:
llm -c "Five more and make them more nautical"
```
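A rough sketch of how a continued conversation like the one above is flattened into the `User: ...\nAssistant:` format: the exact template and whitespace the plugin sends to Replicate are assumptions here, and the previous response is stood in by a placeholder.

```shell
# Build the User:/Assistant: prompt format described above.
# Placeholder response and whitespace details are assumptions.
printf 'User: %s\nAssistant: %s\nUser: %s\nAssistant:' \
  "Ten great names for a pet pelican" \
  "(previous response here)" \
  "Five more and make them more nautical"
```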
Published by simonw over 1 year ago
Run `llm replicate fetch-models` to fetch the available language models, then run prompts against them. #1
Use `llm replicate add joehoover/falcon-40b-instruct --alias falcon` to add support for additional models, optionally with aliases. #2