A Fast and Scalable Chatbot Built with Leptos and Ollama-rs
RusticAI is an advanced chatbot developed using the Leptos web framework and Ollama-rs for maintaining chat conversations. Designed for performance and scalability, this chatbot leverages the power of Rust's concurrency and memory safety to deliver fast and intelligent conversations.
Perfect for developers looking to build reliable and scalable AI-driven chat applications!
Roadmap:
- Implement user authentication
- Integrate database storage for conversations
By default, the project is configured for Nvidia GPU acceleration using CUDA. If you're running the chatbot on a macOS system with Apple Silicon, you can test it with Metal acceleration by enabling the metal feature in Cargo.toml. If you encounter issues or successfully configure it for other platforms, feel free to submit a PR to update the README.md.
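As a rough sketch, the feature toggle described above might look like the following in Cargo.toml. The feature names and the way they forward to dependencies are assumptions here; check the project's actual Cargo.toml for the real definitions:

```toml
[features]
# Nvidia GPU acceleration via CUDA (assumed to be the default)
default = ["cuda"]
# Hypothetical feature names; swap "cuda" for "metal" on Apple Silicon
cuda = []
metal = []
```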
You'll need to use the nightly Rust toolchain, and install the wasm32-unknown-unknown target as well as the Trunk and cargo-leptos tools:
rustup toolchain install nightly
rustup target add wasm32-unknown-unknown
cargo install trunk wasm-bindgen-cli cargo-leptos
You'll also need to install Ollama and download a model (e.g. Llama 3.1), or any other model of your choice.
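Assuming a standard Ollama install, pulling a model and checking that it is available looks roughly like this (the model name is only an example; substitute whichever model you configure):

```shell
# Pull the model you plan to reference in .env (example name)
ollama pull llama3.1

# Verify the model is available locally
ollama list

# Start the Ollama server manually if it isn't already running
ollama serve
```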
In the root of the project directory, you'll find a .env file where two environment variables, OLLAMA_SYSTEM_PROMPT and OLLAMA_MODEL_NAME, are defined. Replace these values with the desired model and prompt.
OLLAMA_SYSTEM_PROMPT=
OLLAMA_MODEL_NAME=
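For example, a filled-in .env might look like the following (both values are illustrative; use whatever prompt and model name match your setup):

```
OLLAMA_SYSTEM_PROMPT="You are a helpful assistant. Answer concisely."
OLLAMA_MODEL_NAME="llama3.1"
```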
Install Sass and TailwindCSS with npm install -D sass tailwindcss
To run the project locally, run npx sass style/main.scss | npx tailwindcss -i - -o style/main.css
in a terminal - this will build style/main.css
and automatically rebuild when a change is detected in style/main.scss.
Then run cargo leptos watch
in the project directory.
In your browser, navigate to http://localhost:3000/.
The following models integrated seamlessly, and I didn't have any problems working with them.
This template itself is released under the Unlicense. You should replace the LICENSE for your own application with an appropriate license if you plan to release it publicly.