node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
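For context, a minimal sketch of what this looks like with the 3.0.0 beta API (method names may have shifted between beta releases, and the model path below is a placeholder, not a real file):

```typescript
import path from "path";
import {fileURLToPath} from "url";
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

// Load a local GGUF model (placeholder path for illustration)
const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: path.join(__dirname, "models", "my-model.gguf")
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// Constrain generation with a JSON schema so the output is guaranteed to parse
const grammar = await llama.createGrammarForJsonSchema({
    type: "object",
    properties: {
        answer: {type: "string"},
        confidence: {type: "number"}
    }
});

const response = await session.prompt("What is the capital of France?", {grammar});
const parsed = grammar.parse(response); // object matching the schema above
console.log(parsed.answer);
```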

MIT License

Downloads: 37.6K
Stars: 905
Committers: 6


node-llama-cpp - v3.0.0-beta.37

Published by github-actions[bot] 4 months ago

3.0.0-beta.37 (2024-07-05)

Features


Shipped with llama.cpp release b3322

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.36

Published by github-actions[bot] 4 months ago

3.0.0-beta.36 (2024-06-30)

Bug Fixes


Shipped with llama.cpp release b3267

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.35

Published by github-actions[bot] 4 months ago

3.0.0-beta.35 (2024-06-30)

Bug Fixes


Shipped with llama.cpp release b3266

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.34

Published by github-actions[bot] 4 months ago

3.0.0-beta.34 (2024-06-30)

Bug Fixes


Shipped with llama.cpp release b3265

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.33

Published by github-actions[bot] 4 months ago

3.0.0-beta.33 (2024-06-29)

Bug Fixes

Features

  • move CUDA prebuilt binaries to dependency modules (#250) (8a92e31)

Shipped with llama.cpp release b3265

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v2.8.12

Published by github-actions[bot] 4 months ago

2.8.12 (2024-06-21)

Bug Fixes

  • bump llama.cpp release used in prebuilt binaries (#247) (2137c46)

node-llama-cpp - v3.0.0-beta.32

Published by github-actions[bot] 4 months ago

3.0.0-beta.32 (2024-06-18)

Bug Fixes


Shipped with llama.cpp release b3166

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.31

Published by github-actions[bot] 4 months ago

3.0.0-beta.31 (2024-06-17)

Bug Fixes

  • remove CUDA binary compression for Windows (#243) (0b85800)
  • improve inspect gpu command output (#243) (0b85800)

Shipped with llama.cpp release b3166

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.30

Published by github-actions[bot] 4 months ago

3.0.0-beta.30 (2024-06-17)

Bug Fixes


Shipped with llama.cpp release b3166

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.29

Published by github-actions[bot] 4 months ago

3.0.0-beta.29 (2024-06-16)

Bug Fixes

  • remove CUDA binary compression for now (#238) (0d40ffc)

Shipped with llama.cpp release b3153

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.28

Published by github-actions[bot] 4 months ago

3.0.0-beta.28 (2024-06-15)

Features

  • compress CUDA prebuilt binaries (#236) (b89ad2d)
  • automatically solve more CUDA compilation errors (#236) (b89ad2d)

Shipped with llama.cpp release b3153

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.27

Published by github-actions[bot] 4 months ago

3.0.0-beta.27 (2024-06-12)

Features

  • render markdown in the Electron example (#234) (23012d1)

Shipped with llama.cpp release b3135

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.26

Published by github-actions[bot] 4 months ago

3.0.0-beta.26 (2024-06-11)

Bug Fixes


Shipped with llama.cpp release b3135

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.25

Published by github-actions[bot] 4 months ago

3.0.0-beta.25 (2024-06-10)

Bug Fixes


Shipped with llama.cpp release b3091

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.24

Published by github-actions[bot] 4 months ago

3.0.0-beta.24 (2024-06-09)

Bug Fixes


Shipped with llama.cpp release b3091

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.23

Published by github-actions[bot] 4 months ago

3.0.0-beta.23 (2024-06-09)

Bug Fixes

Features


Shipped with llama.cpp release b3091

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v2.8.11

Published by github-actions[bot] 5 months ago

2.8.11 (2024-05-24)

Bug Fixes

  • bump llama.cpp release used in prebuilt binaries (#223) (81a203e)

node-llama-cpp - v3.0.0-beta.22

Published by github-actions[bot] 5 months ago

3.0.0-beta.22 (2024-05-19)

Bug Fixes


Shipped with llama.cpp release b2929

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.21

Published by github-actions[bot] 5 months ago

3.0.0-beta.21 (2024-05-19)

Bug Fixes


Shipped with llama.cpp release b2929

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

node-llama-cpp - v3.0.0-beta.20

Published by github-actions[bot] 5 months ago

3.0.0-beta.20 (2024-05-19)

Bug Fixes

  • improve binary compatibility detection on Linux (#217) (d6a0f43)

Features

  • init command to scaffold a new project from a template (with node-typescript and electron-typescript-react templates) (#217) (d6a0f43)
  • debug mode (#217) (d6a0f43)
  • load LoRA adapters (#217) (d6a0f43)
  • improve Electron support (#217) (d6a0f43)

Shipped with llama.cpp release b2928

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)