Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model's output at the generation level.
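A minimal sketch of the generation-level JSON schema enforcement described above, based on the `LlamaModel`, `LlamaChatSession`, and grammar classes this project exposes. The model path and the schema shape are illustrative assumptions, not values from this page; it requires a local `.gguf` model file to actually run.

```typescript
import {
    LlamaModel, LlamaContext, LlamaChatSession, LlamaJsonSchemaGrammar
} from "node-llama-cpp";

// Assumed local model file; any .gguf model would work here.
const model = new LlamaModel({modelPath: "models/model.gguf"});

// The grammar constrains token sampling itself, so the output always
// conforms to this schema instead of being validated after the fact.
const grammar = new LlamaJsonSchemaGrammar({
    type: "object",
    properties: {
        answer: {type: "string"},
        confidence: {type: "number"}
    }
} as const);

const context = new LlamaContext({model});
const session = new LlamaChatSession({context});

const text = await session.prompt("What is the capital of France?", {grammar});
const parsed = grammar.parse(text); // typed object matching the schema
console.log(parsed.answer);
```

Because the schema is enforced during sampling, `grammar.parse` never has to recover from malformed JSON; it only deserializes output the grammar already guaranteed.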
MIT License
Published by github-actions[bot] about 1 year ago
Identifiers referenced in this release: `llama.cpp`, `FalconChatPromptWrapper`, `LlamaChatSession`, `cmake`, `node-gyp`, `default`, `package.json`, `threads`, `build`, `.gguf`, `LlamaModel`, `spawnCommand`, `spawn`
Node.js bindings for llama.cpp
Run an AI ✨ assistant locally, with a simple API for Node.js 🚀
Node Llama Cpp wrapper for Node.js
llama.cpp 🦙 LLM inference in TypeScript