2D pixel-art RPG with AI-powered NPCs. Learn about Movementlabs through gameplay!
MIT License
> [!NOTE]
> This project was created as part of the LLM Zoomcamp course.
Listen to an overview created by NotebookLM
https://github.com/user-attachments/assets/096f2b58-660e-4864-88a0-cedd308727e7
Parthenon is an immersive 2D top-down pixel-art game that incorporates a retrieval-augmented generation (RAG) system, allowing players to have dynamic, knowledge-based interactions while exploring the virtual world. Players learn about Movementlabs, a Move-based blockchain network, and its ecosystem through engaging NPC interactions.
This project integrates several technologies to create a robust system for querying a knowledge base, building prompts, and interacting with an LLM. It features:
The core challenge is creating an end-to-end RAG application that seamlessly integrates an AI-driven assistant into an interactive gamified environment. It aims to:
Parthenon is built on a modern, scalable architecture designed to deliver an engaging learning experience:
🛠️ Backend
🎮 Frontend
📊 Monitoring & DevOps
The dataset, containing information about Movementlabs and its ecosystem, is located in `data/json`.
The backend of this application is structured to handle various aspects of the RAG flow, including data ingestion, retrieval, and interaction with the LLM. Below is an overview of the backend files:
- `app.py`: The main entry point of the FastAPI application. It defines the API endpoints for querying the knowledge base and submitting feedback, and includes CORS middleware configuration to allow cross-origin requests (so that the frontend can fetch data from the backend). API endpoints:
    - `/faq`: Retrieves FAQ questions from the ground truth file.
    - `/question`: Handles RAG queries and returns AI-generated responses.
    - `/feedback`: Receives and stores user feedback on conversations.
- `rag.py`: Contains the core logic for the RAG process. It handles querying Elasticsearch for relevant documents, building prompts for the LLM, and evaluating the relevance of the generated answers.
- `db.py`: Manages database interactions using PostgreSQL. It includes functions to initialize the database schema and save conversation and feedback data.
- `prep.py`: Prepares the Elasticsearch index and initializes the database. It ingests documents into Elasticsearch and sets up the necessary index mappings.
- `ingest.py`: Responsible for loading and processing documents from the data directory. It cleans and chunks the text data before indexing it into Elasticsearch.
- `init.py`: Initializes Grafana by creating API keys, setting up data sources, and configuring dashboards.
```bash
git clone https://github.com/dimzachar/Parthenon-RAG-Game.git
cd Parthenon-RAG-Game
```
```bash
cp .env.example .env
```

or rename `.env.example` in the root directory to `.env`. Key environment variables:
- `ELASTIC_URL`: Elasticsearch connection URL
- `POSTGRES_DB`, `POSTGRES_USER`, `POSTGRES_PASSWORD`: PostgreSQL connection details
- `OPENAI_API_KEY`: Your OpenAI API key for LLM interactions
- `INDEX_NAME`: Name of the Elasticsearch index for the knowledge base

Replace `YOUR_KEY` with your OpenAI API key.
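For reference, a sketch of how these variables might be read on the Python side (the default values shown are illustrative assumptions, not the repository's actual fallbacks):

```python
import os

# Illustrative defaults only; the repo's actual fallback values may differ.
ELASTIC_URL = os.getenv("ELASTIC_URL", "http://localhost:9200")
POSTGRES_DB = os.getenv("POSTGRES_DB", "parthenon")
POSTGRES_USER = os.getenv("POSTGRES_USER", "postgres")
POSTGRES_PASSWORD = os.getenv("POSTGRES_PASSWORD", "")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")  # required; there is no safe default
INDEX_NAME = os.getenv("INDEX_NAME", "knowledge-base")

if not OPENAI_API_KEY:
    print("Warning: OPENAI_API_KEY is not set; LLM calls will fail.")
```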
Install the project dependencies using `pipenv`:

```bash
pip install pipenv
pipenv install --dev
```
For experiments, we use Jupyter notebooks located in the `notebooks` folder. Check `notebook.md` for more details.
Note: If a command below fails (services can take some time to fully start), wait a few seconds and try running it again.
```bash
docker-compose up -d postgres elasticsearch
pipenv shell
cd backend/app
export POSTGRES_HOST=localhost
export ELASTIC_URL=http://localhost:9200
python prep.py
```
Upon success, you should see the following message:
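Per the note above, Postgres and Elasticsearch can take a few seconds to accept connections. A small readiness poll (a sketch, not part of the repository) can automate the wait before running `prep.py`:

```python
import time
import urllib.error
import urllib.request

def wait_for(url: str, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Poll `url` until it answers, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5):
                return True  # service responded with a 2xx
        except urllib.error.HTTPError:
            return True  # service responded, just not 2xx -- still "up"
        except (urllib.error.URLError, OSError):
            time.sleep(interval)  # not up yet; retry
    return False

# Example: block until Elasticsearch is reachable before running prep.py.
# wait_for("http://localhost:9200")
```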
You have two options for running the application:
Option 1: Full Docker Compose Setup (Recommended)
If you have any services running from the database initialization step, stop them with Ctrl+C or run:

```bash
docker-compose down
```
Start all services:

```bash
docker-compose up
```
Option 2: Running Components Separately
If you have any services running, stop them:

```bash
docker-compose down
```

Start only the necessary services:

```bash
docker-compose up -d postgres grafana elasticsearch
```
Run the backend locally:

```bash
pipenv shell
cd backend/app
export POSTGRES_HOST=localhost
export ELASTIC_URL=http://localhost:9200
python app.py
```
You will see the following message:
Use the provided `test.py` script or `curl` commands to interact with the API:
`requests`

When the application is running, you can use the provided `test.py` script, which sends questions to the app via the `requests` library.
In a new terminal, from the root directory, interact with the application:

```bash
pipenv run python test.py
```

It sends a random question from the ground truth dataset to the app and outputs an API response that contains several fields, including a `conversation_id` and other relevant details.
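The gist of what `test.py` does can be sketched with only the standard library (the actual script uses `requests`, and its ground-truth loading step is elided here):

```python
import json
import urllib.request

def ask(question: str, url: str = "http://localhost:5000/question",
        model: str = "gpt-4o-mini") -> dict:
    """POST a question to the /question endpoint and return the parsed JSON."""
    payload = json.dumps({"question": question, "selected_model": model}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (app must be running):
# answer = ask("Which platforms are referenced for deploying EVM contracts using Hardhat?")
# print(answer["conversation_id"])
```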
CURL

Use `curl` to interact with the API. To get a similarly formatted output, install `jq`. On Windows (using Chocolatey):

```bash
choco install jq
```

Then run the following command:
```bash
curl -X POST http://localhost:5000/question \
  -H "Content-Type: application/json" \
  -d '{"question": "Which platforms are referenced for deploying EVM contracts using Hardhat?", "selected_model": "gpt-4o-mini"}' | jq '{
  conversation_id,
  question: .query,
  answer,
  response_time,
  relevance,
  model_used,
  token_usage: {prompt_tokens, completion_tokens, total_tokens, eval_prompt_tokens, eval_completion_tokens, eval_total_tokens},
  openai_cost
}'
```
The output will look like this:
Sending feedback:
After receiving an API response, you can send feedback on the conversation. Copy and paste the following (replacing `ID` with the `conversation_id` from your response) and hit Enter:

```bash
ID="d4b27893-c978-4bb5-b51a-bc6c75c9f629"
URL=http://localhost:5000
FEEDBACK_DATA='{
    "conversation_id": "'${ID}'",
    "feedback": 1
}'

curl -X POST \
  -H "Content-Type: application/json" \
  -d "${FEEDBACK_DATA}" \
  ${URL}/feedback
```
Upon successful submission, you'll receive an acknowledgment message similar to this:
```json
{
  "message": "Feedback received for conversation d4b27893-c978-4bb5-b51a-bc6c75c9f629: 1"
}
```
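The same feedback call can be made from Python. This sketch mirrors the curl example above (the endpoint shape is taken from that example, not from the repository's client code):

```python
import json
import urllib.request

def send_feedback(conversation_id: str, feedback: int,
                  base_url: str = "http://localhost:5000") -> dict:
    """POST a +1/-1 vote for a conversation and return the acknowledgment."""
    payload = json.dumps(
        {"conversation_id": conversation_id, "feedback": feedback}
    ).encode()
    req = urllib.request.Request(
        f"{base_url}/feedback", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (app must be running):
# send_feedback("d4b27893-c978-4bb5-b51a-bc6c75c9f629", 1)
```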
To initialize the dashboard, first ensure Grafana is running (it starts automatically when you run `docker-compose up`).
```bash
pipenv shell
cd grafana
env | grep POSTGRES_HOST
python init.py
```
You'll receive a message confirming successful initialization:

```
Dashboard created successfully
Initialization complete. Datasource and dashboard created successfully.
```
Access Grafana at localhost:3000 with the default credentials (admin/admin).
The Grafana dashboard provides visualizations for:
This allows you to monitor the performance and usage of the RAG system in real-time.
For more detailed information about the game, please see the Frontend documentation.
Make sure the backend services are up:
```bash
docker-compose up
```

Then, in another terminal, start the frontend:

```bash
cd frontend
npm install
npm run dev
```
Open your browser and navigate to http://localhost:3001 to load the landing page. Click the **PLAY NOW** or **Launch Game** button to start. The game will open at http://localhost:3001/game.
Use the arrow keys to move the player character close to the NPC. Once a pop-up appears, press `E` to interact.
A pop-up will appear with the following options:

- Share on Twitter
- Copy the fact
- Get a new fact
- Open the chat interface (`InteractionPopup`)

The `InteractionPopup` lists FAQ questions from the ground truth dataset; clicking one pre-fills the chat input.
For answers to commonly asked questions, please see the FAQ document.
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)

This project is licensed under the MIT License - see the LICENSE file for details.
We accept donations to help sustain our project. If you would like to contribute, you can use the following options:
Ethereum Address:
0xeB16AdBa798C64CFdb9A0A70C95e1231e4ADe124
Your generosity helps us continue improving Parthenon and creating more exciting features. Thank you for your support! 🙏
For questions, suggestions, or collaboration opportunities: