A fullstack Rust + React chat app using open-source Llama language models
MIT License
$ make install-huggingface-cli
Create a Hugging Face token at https://huggingface.co/settings/tokens, then set it as an environment variable on your machine:
$ export HF_TOKEN=<your-token-here>
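Before running the model download, it can save a failed run to confirm the variable is actually exported in the current shell. A minimal sketch (the `example-token` value is a placeholder, not a real token):

```shell
# Placeholder for illustration only -- substitute your real Hugging Face token.
export HF_TOKEN=example-token

# Fail fast if the variable is empty or unset.
if [ -n "$HF_TOKEN" ]; then
  echo "HF_TOKEN is set"
else
  echo "HF_TOKEN is missing" >&2
  exit 1
fi
```

If the huggingface-cli install step above succeeded, running `huggingface-cli whoami` should also report your account once a valid token is in place.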
$ make download-model
$ make chatty-llama
PS! If you're having issues connecting to the backend, try running make chatty-llama-host instead.
In your browser, open http://localhost:80
Enjoy!