WebAssembly binding for llama.cpp - Enabling in-browser LLM inference
llama.cpp 🦙 LLM inference in TypeScript
Run any Large Language Model behind a unified API
A simple "Be My Eyes" web app with a llama.cpp/llava backend
A C#/.NET library to run LLMs (🦙LLaMA/LLaVA) efficiently on your local device.
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema ...
Finetune llama2-70b and codellama on MacBook Air without quantization
Yet another `llama.cpp` Rust wrapper
♾️ toolkit for air-gapped LLMs on consumer-grade hardware
LLM inference in Fortran
This repository contains a web application designed to execute relatively compact, locally-operat...
llama.go is like llama.cpp in pure Golang!
Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use `llam...
Node.js binding of Llama.cpp
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable fo...
Node Llama Cpp wrapper for Node.js