Only Windows is currently supported for now. The new llama.cpp binaries that support GGUF have not been built for other platforms yet.
If you would like to help, please make a pull request and update the binaries in ./bin
Note Download links will not be provided in this repository.
Download the latest installer from the releases page.
Open the installer and wait for it to install.
Once done installing, it'll ask for a valid path to a model. Go to where you placed the model, hold Shift, right-click on the file, and then click "Copy as Path". Then paste this into that dialog box and click Confirm.
The program will automatically restart. Now you can begin chatting!
Note The program will also accept any other 4-bit quantized .bin model files. If you can find other .bin Alpaca model files, you can use them instead of the one recommended in the Quick Start Guide to experiment with different models. As always, be careful about what you download from the internet.
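Before pasting a path into the model dialog, it can be worth sanity-checking it. The helper below is a hypothetical sketch, not part of Alpaca Electron; only the .bin extension requirement comes from the note above.

```shell
# Hypothetical helper: sanity-check a model path before pasting it into
# the model dialog. Only the .bin extension is implied by the note above.
check_model() {
  if [ ! -f "$1" ]; then
    echo "missing: $1"
  elif [ "${1##*.}" != "bin" ]; then
    echo "not a .bin file: $1"
  else
    echo "ok: $1"
  fi
}
```

Call it with the same path you intend to paste, e.g. `check_model /path/to/ggml-model-q4.bin`.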
xattr -cr /Applications/Alpaca\ Electron.app/
You can either download the prebuilt app (packaged as a tar.gz) from the releases page, extract it, and run it with ./"alpaca electron",
or build the application yourself.
If you want to build the application yourself:
Clone the repository:
git clone https://github.com/ItsPi3141/alpaca-electron.git
Change your current directory to alpaca-electron:
cd alpaca-electron
Install application specific dependencies:
npm install --save-dev
Build the application:
npm run linux-x64
Change your current directory to the build target:
cd release-builds/'Alpaca Electron-linux-x64'
Run the application with
./'Alpaca Electron'
Clone the repository:
git clone https://github.com/ItsPi3141/alpaca-electron.git
Change your current directory to alpaca-electron:
cd alpaca-electron
Build the container image:
docker compose build
Run the application container:
docker compose up -d
docker compose up
(without the -d). If you get an error like Authorization required, but no authorization protocol specified, run
xhost local:root
on your Docker host.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build
cd build
cmake ..
cmake --build . --config Release
On Linux and MacOS:
make
git clone https://github.com/ItsPi3141/alpaca-electron
cd alpaca-electron
npm install
npm run rebuild
Info If you are on Linux, replace
npm run rebuild
with npm run rebuild-linux
Warning This step is not required. Only do it if you built llama.cpp yourself and want to use that build. Otherwise, skip to step 4. If you built llama.cpp in the previous section, copy the
main
executable file into the bin
folder inside the alpaca-electron folder.
Make sure the file replaces the correct file. E.g. if you're on Windows, replace chat.exe with your file. If you're on arm64 MacOS, replace chat_mac_arm64. Etc.
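The platform-to-filename mapping can be sketched as a small shell function. Only chat.exe and chat_mac_arm64 are named above; the other names are assumptions following the same pattern, so check the contents of ./bin for the authoritative list.

```shell
# Sketch of the platform -> bundled binary mapping. Only chat.exe and
# chat_mac_arm64 are confirmed by the text above; the others are assumed
# and should be verified against the files actually present in ./bin.
chat_binary_name() {
  case "$1-$2" in
    Windows-*)     echo "chat.exe" ;;
    Darwin-arm64)  echo "chat_mac_arm64" ;;
    Darwin-x86_64) echo "chat_mac" ;;   # assumed name
    Linux-*)       echo "chat" ;;       # assumed name
    *)             echo "unknown" ;;
  esac
}
# e.g. cp llama.cpp/build/bin/main "bin/$(chat_binary_name "$(uname -s)" "$(uname -m)")"
```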
npm start
Run one of the following commands:
npm run win
npm run mac-x64
npm run mac-arm64
npm run linux-x64
You can only build for the OS you are running the build on. E.g. if you are on Windows, you can build for Windows, but not for MacOS and Linux.
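Since you can only build for the host OS, the right script can be picked automatically. A minimal sketch, using the script names from the list above and standard uname output values:

```shell
# Pick the npm build script matching the current host, since builds are
# host-OS-only. Script names are taken from the list above.
pick_build_script() {
  case "$1-$2" in
    Linux-x86_64)          echo "npm run linux-x64" ;;
    Darwin-x86_64)         echo "npm run mac-x64" ;;
    Darwin-arm64)          echo "npm run mac-arm64" ;;
    MINGW*|MSYS*|Windows*) echo "npm run win" ;;
    *)                     echo "unsupported: $1-$2" ;;
  esac
}
pick_build_script "$(uname -s)" "$(uname -m)"
```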
Credits go to @antimatter15 for creating alpaca.cpp and to @ggerganov for creating llama.cpp, the backbones behind alpaca.cpp. Finally, credits go to Meta and Stanford for creating the LLaMA and Alpaca models, respectively.
Special thanks to @keldenl for providing arm64 builds for MacOS and @W48B1T for providing Linux builds.