An AI assistant for jMonkeyEngine and related projects.
It knows:
For more details, check the knowledge base section.
To provide the functionality of the bot, the following libraries are used:
The bot extends the knowledge of GPT-3 by embedding pieces of information from the following sources:
Static embeddings are updated periodically and stored in the embeddings/ folder in this repo.
Dynamic embeddings are generated on the fly for the requested information.
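At query time, the bot can then pick the stored chunks whose embedding vectors are closest to the question's embedding. A minimal, self-contained sketch of that lookup (the chunk texts and 3-dimensional vectors below are toy stand-ins for real model output, not the actual index format in embeddings/):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def most_similar(query_vec, indexed):
    # indexed: list of (chunk_text, embedding) pairs, e.g. loaded from the index
    return max(indexed, key=lambda item: cosine(query_vec, item[1]))[0]

# Toy "embeddings" standing in for real model output.
index = [
    ("jME docs page", [1.0, 0.1, 0.0]),
    ("forum thread", [0.0, 1.0, 0.2]),
]
print(most_similar([0.9, 0.2, 0.0], index))  # jME docs page
```

The retrieved chunks would then be injected into the prompt to extend the model's knowledge.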
conda env create -f environment.yml
conda activate jmebot
pip install -r requirements.txt
5b. If you want to run it on a GPU and you have a CUDA-compatible GPU, make sure a recent version of CUDA is installed, then install the required dependencies
# For Ubuntu
sudo apt-get install build-essential cmake swig libopenblas-dev libpcre3-dev
bash installForCuda.sh # Install the required dependencies (note: this builds faiss-gpu from source, so it will take a while)
export OPENAI_API_KEY="XXXXX"
bash bot.sh
or regenerate the embeddings
bash bot.sh ingest
In a docker host
The snippets below show how to use the prebuilt images from the GitHub registry. If you want to build your own image:
# For cpu
docker build -t chat-jme .
# For cuda
docker build -t chat-jme:cuda . -f Dockerfile.cuda
mkdir -p /srv/chat-jme/cache
chown -Rf 1000:1000 /srv/chat-jme/cache
# For CPU
docker run -d --restart=always \
-eOPENAI_API_KEY="XXXXXXXX" \
-v/srv/chat-jme/cache:/home/nonroot/.cache \
-p8080:8080 \
--name="chat-jme" \
ghcr.io/riccardobl/chat-jme/chat-jme:snapshot bot
# For Cuda (recommended)
GPUID="device=GPU-XXXXX"
docker run -d --restart=always \
-eOPENAI_API_KEY="XXXXXXXX" \
-v/srv/chat-jme/cache:/home/nonroot/.cache \
-p8080:8080 \
--gpus $GPUID \
--name="chat-jme" \
ghcr.io/riccardobl/chat-jme/chat-jme:cuda-snapshot bot
NOTE: To use custom static embeddings, specify the INDEX_PATH environment variable
NOTE2: The first run might take some time since the models have to be downloaded.
NOTE3: If you run on the CPU and performance is poor, you might need to add --security-opt seccomp=unconfined to the docker command (note that this is not recommended)
mkdir -p /srv/chat-jme/cache
chown -Rf 1000:1000 /srv/chat-jme/cache
mkdir -p /srv/chat-jme/embeddings
chown -Rf 1000:1000 /srv/chat-jme/embeddings
docker run -d --restart=always \
-eOPENAI_API_KEY="XXXXXXXX" \
-eINDEX_PATH="/embeddings" \
-v/srv/chat-jme/cache:/home/nonroot/.cache \
-v/srv/chat-jme/embeddings:/embeddings \
--name="chat-jme" \
ghcr.io/riccardobl/chat-jme/chat-jme:snapshot ingest
POST /session
REQUEST
{
"sessionSecret":"", // sessionSecret of the session to maintain or nothing to create a new one
"lang":"en" // || "it" || etc... || "auto",
}
RESPONSE
{
"sessionSecret":"XYZ", // sessionSecret of the session
"helloText":"???", // Text that can be used to initiate a conversation with the bot (in the chosen language)
"welcomeText": "..." // Hardcoded welcome text in the specified language
}
POST /query
REQUEST
{
"sessionSecret":"",
"lang":"en",// || "it" || etc... || "auto",
"question":"Your question"
}
RESPONSE
{
"output_text":"???" // Answer to the question
}
GET /lang
RESPONSE
[
{
"name":"English",
"code":"en"
},
{
"name":"Italian",
"code":"it"
},
...
]
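A client can use this endpoint to populate a language picker; a minimal sketch (same localhost:8080 assumption, request only):

```python
import urllib.request

# GET /lang returns the list of supported languages as JSON.
req = urllib.request.Request("http://localhost:8080/lang")
# With the bot running:
#   langs = json.loads(urllib.request.urlopen(req).read())
#   codes = [entry["code"] for entry in langs]  # e.g. ["en", "it", ...]
print(req.get_method())  # GET
```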
The frontend is served on port 8080 by default.
It supports some configuration parameters that can be passed as document hash parameters.
Multiple parameters can be concatenated with the & character.
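The hash fragment thus follows the usual query-string syntax. A small sketch of how such a fragment decodes (the parameter names here are hypothetical, not the frontend's actual ones):

```python
from urllib.parse import parse_qs

# e.g. http://localhost:8080/#a=1&b=2 carries the fragment "a=1&b=2"
fragment = "a=1&b=2"  # hypothetical parameter names
params = parse_qs(fragment)
print(params)  # {'a': ['1'], 'b': ['2']}
```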