Run AI models locally on your machine with Node.js bindings for `llama.cpp`. Enforce a JSON schema on the model output at the generation level.
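The JSON-schema enforcement mentioned above can be sketched roughly like this. This is a minimal sketch based on the 3.0 beta API: the `createGrammarForJsonSchema` call, the model path, and the prompt are assumptions and may differ between beta releases; running it requires the package installed and a local GGUF model file.

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
// Assumed: a GGUF model downloaded locally at this path.
const model = await llama.loadModel({modelPath: "./model.gguf"});
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// Build a grammar from a JSON schema; generation is then constrained
// token-by-token so the output always matches the schema.
const grammar = await llama.createGrammarForJsonSchema({
    type: "object",
    properties: {
        sentiment: {enum: ["positive", "negative", "neutral"]},
        confidence: {type: "number"}
    }
});

const answer = await session.prompt(
    "Classify the sentiment of: 'I love this library!'",
    {grammar}
);
// The grammar object can parse the guaranteed-valid JSON output.
const parsed = grammar.parse(answer);
console.log(parsed.sentiment);
```

Because the schema is enforced during sampling rather than validated after the fact, the model cannot emit malformed JSON in the first place.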
MIT License
Published by github-actions[bot] 5 months ago
Shipped with `llama.cpp` release `b2861`.

To use the latest `llama.cpp` release available, run `npx --no node-llama-cpp download --release latest`. (learn more)
Published by github-actions[bot] 5 months ago
- `pull` command (#214) (453c162)
- `stopOnAbortSignal` and `customStopTriggers` on `LlamaChat` and `LlamaChatSession` (#214) (453c162)
- `checkTensors` parameter on `loadModel` (#214) (453c162)

Shipped with `llama.cpp` release `b2834`.
Published by github-actions[bot] 6 months ago
- `FunctionaryChatWrapper` bugs (#205) (ef501f9)
- GPU layers in the Model line in CLI commands (#205) (ef501f9)
- `LlamaChatWrapper` renamed to `Llama2ChatWrapper` (#205) (ef501f9)
- `--gpu` flag in generation CLI commands (#205) (ef501f9)
- `specialTokens` parameter on `model.detokenize` (#205) (ef501f9)

Shipped with `llama.cpp` release `b2717`.
Published by github-actions[bot] 6 months ago
- `inspect gpu` command: print device names (#198) (5ca33c7)
- `inspect gpu` command: print env info (#202) (d332b77)

Shipped with `llama.cpp` release `b2665`.
Published by github-actions[bot] 7 months ago
- `llama.cpp` CUDA flag (#182) (35e6f50)
- `llama.cpp` changes (#183) (6b012a6)
- `inspect gguf` command (#182) (35e6f50)
- `inspect measure` command (#182) (35e6f50)
- `readGgufFileInfo` function (#182) (35e6f50)
- `LlamaModel` (#182) (35e6f50)
- `JinjaTemplateChatWrapper` (#182) (35e6f50)
- `tokenizer.chat_template` header from the gguf file when available - use it to find a better specialized chat wrapper or use `JinjaTemplateChatWrapper` with it as a fallback (#182) (35e6f50)
- `chat`, `complete`, `infill` (#182) (35e6f50)

Shipped with `llama.cpp` release `b2608`.
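The chat-template detection described in the release above can be sketched as follows. This is a sketch under stated assumptions: `readGgufFileInfo` is named in the release notes, but the exact shape of its return value (metadata nested under `tokenizer.chat_template`) is an assumption here, as is the model path.

```typescript
import {readGgufFileInfo} from "node-llama-cpp";

// Read gguf file metadata without loading the model weights.
// Assumed: a local GGUF model file at this path.
const ggufInfo = await readGgufFileInfo("./model.gguf");

// The release notes say the `tokenizer.chat_template` header is read
// when available; the property path below is an assumed shape.
const chatTemplate = ggufInfo.metadata?.tokenizer?.chat_template;

if (chatTemplate != null)
    console.log("Model ships a Jinja chat template:", chatTemplate);
else
    console.log("No chat template header; a generic wrapper will be used");
```

The fallback logic the notes describe would then prefer a specialized chat wrapper matched to the template, and otherwise feed the raw template to `JinjaTemplateChatWrapper`.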
Published by github-actions[bot] 7 months ago
- `DisposedError` was thrown when calling `.dispose()` (#178) (315a3eb)
- `llama.cpp` changes (#178) (315a3eb)
- `Failed to detect a default CUDA architecture` CUDA compilation error (#178) (315a3eb)
- detect `cmake` binary issues and suggest fixes (#178) (315a3eb)

Shipped with `llama.cpp` release `b2440`.
Published by github-actions[bot] 8 months ago
- `llama.cpp` breaking change (#175) (5a70576)
- `inspect` command (#175) (5a70576)
- `GemmaChatWrapper` (#175) (5a70576)
- `TemplateChatWrapper` (#175) (5a70576)

Shipped with `llama.cpp` release `b2329`.
Published by github-actions[bot] 8 months ago
Shipped with `llama.cpp` release `b2254`.
Published by github-actions[bot] 8 months ago
- `getLlama` when using `"lastBuild"` (#164) (ede69c1)
- `resolveChatWrapperBasedOnWrapperTypeName` (#165) (624fa30)

Shipped with `llama.cpp` release `b2174`.
Published by github-actions[bot] 8 months ago
- `chatWrapper` getter on a `LlamaChatSession` (#161) (46235a2)

Shipped with `llama.cpp` release `b2127`.
Published by github-actions[bot] 9 months ago
- `logLevel` and `logger` params when using `"lastBuild"` (#157) (74fb35c)

Shipped with `llama.cpp` release `b2074`.
Published by github-actions[bot] 9 months ago
Shipped with `llama.cpp` release `b2060`.
Published by github-actions[bot] 9 months ago
Shipped with `llama.cpp` release `b2060`.
Published by github-actions[bot] 9 months ago
Shipped with `llama.cpp` release `b2060`.
Published by github-actions[bot] 9 months ago
Shipped with `llama.cpp` release `b1961`.