godot-llama-cpp

Run large language models in Godot. Powered by llama.cpp.

Overview

This library aims to provide a high-level interface to run large language models in Godot, following Godot's node-based design principles.

@onready var llama_context = %LlamaContext

var messages = [
  { "sender": "system", "text": "You are a pirate chatbot who always responds in pirate speak!" },
  { "sender": "user", "text": "Who are you?" }
]
var prompt = ChatFormatter.apply("llama3", messages)
var completion_id = llama_context.request_completion(prompt)

while true:
  var response = await llama_context.completion_generated
  print(response["text"])

  if response["done"]: break
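The `request_completion` call above returns an id, but the loop never uses it. When more than one completion may be in flight, responses can be matched against that id before printing. A minimal sketch, assuming the emitted response dictionary carries an `id` field (the payload shape is an assumption, not confirmed by the snippet above):

```gdscript
# Sketch: filter completion_generated responses by request id.
# Assumes the response dictionary includes an "id" field matching the
# value returned by request_completion() - verify against the library's
# actual signal payload before relying on this.
var completion_id = llama_context.request_completion(prompt)

while true:
  var response = await llama_context.completion_generated
  if response.get("id") != completion_id:
    continue  # response belongs to a different request
  print(response["text"])
  if response["done"]:
    break
```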

Features

  • Platform and compute backend support: CPU, Metal, Vulkan, and CUDA across macOS, Linux, and Windows
  • Asynchronous completion generation
  • Supports any language model that llama.cpp supports, in GGUF format
  • GGUF files are Godot resources
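Because GGUF files are imported as Godot resources, a model can be loaded and wired up like any other resource. A minimal sketch, assuming the LlamaContext node exposes a `model` property that accepts the loaded GGUF resource (the property name and model path are illustrative assumptions, not confirmed above):

```gdscript
# Sketch: load a GGUF model file as a Godot resource and hand it to a
# LlamaContext node. The "model" property name and file path below are
# assumptions for illustration - check the addon's inspector properties.
@onready var llama_context = %LlamaContext

func _ready():
  var model = load("res://models/my-model.gguf")
  llama_context.model = model
```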

Roadmap

  • Chat completion support via a dedicated Jinja2-templating library written in Zig
  • Grammar support
  • Multimodal model support
  • Embeddings
  • Vector database using LibSQL

Building & Installation

  1. Download Zig v0.13.0 from https://ziglang.org/download/
  2. Clone the repository:
    git clone --recurse-submodules https://github.com/hazelnutcloud/godot-llama-cpp.git
    
  3. Copy the godot-llama-cpp addon folder from godot/addons into your Godot project's addons folder:
     cp -r godot-llama-cpp/godot/addons/godot-llama-cpp <your_project>/addons
    
  4. Build the extension and install it in your Godot project:
    cd godot-llama-cpp
    zig build --prefix <your_project>/addons/godot-llama-cpp
    
  5. Enable the plugin in your Godot project settings.
  6. Add the LlamaContext node to your scene.
  7. Run your Godot project.
  8. Enjoy!

License

This project is licensed under the MIT License - see the LICENSE file for details.