Python backend (FastAPI) for the majority of the website. Celery workers with a Redis queue handle communications in the background.
GPL-3.0 License
Note: This project is no longer maintained. Originally private, some parts may need updating for open-source use. It was part of a larger project, but only the backend is included here. Feel free to ask questions or report issues.
Scanbandz processed over $100,000 in payments and donations, and I’m open-sourcing it in case it's helpful to others. In production, the backend ran as a scalable cluster with PostgreSQL and Redis databases, and Celery workers handled background tasks like ticket sending. If revisiting, I would 1) add Redis-backed rate limiting to authentication endpoints and 2) move workers to a separate Docker image and separate workers by task type (e.g., payments, ticketing, communications).
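The Redis-backed rate limiting mentioned above could be a simple fixed-window counter. A sketch, with the key naming and limits as assumptions (the `redis` argument is any client exposing `incr`/`expire`, e.g. redis-py):

```python
import time

def allow_request(redis, key: str, limit: int = 5, window_s: int = 60) -> bool:
    """Fixed-window rate limit: allow up to `limit` hits per `window_s` seconds."""
    # One counter key per caller per time window.
    bucket = f"ratelimit:{key}:{int(time.time()) // window_s}"
    count = redis.incr(bucket)          # atomic increment in real Redis
    if count == 1:
        redis.expire(bucket, window_s)  # first hit in the window sets the TTL
    return count <= limit
```

Applied to authentication endpoints, `key` would typically be the client IP or the attempted username, so repeated login attempts beyond the limit are rejected until the window rolls over.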
I wrote this three times over my college career. The first time was when I learned to code (entirely in Django), the second time was fixing all my mistakes (still in Django), and the third was after much experience and separating the frontend and backend (FastAPI). It was a great learning experience, and I hope it helps you too. I will never open source my Django code as it haunts my sleep.
Note: Instructions for running may need troubleshooting due to the quick port to open source.
Clone the repository:
git clone https://github.com/yourusername/scanbandz-backend.git
cd scanbandz-backend
Install dependencies (recommended to use a virtual environment):
pip install -r requirements.txt
Set up the required environment variables: copy .env.template to .env and fill in the required values. The environment file is expected in the backend/settings directory.
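If you want to inspect the variables without extra tooling, a minimal parser for the KEY=VALUE format looks like this (the project presumably loads settings through its backend/settings module; this is only a sketch of the file format):

```python
# Minimal .env reader: KEY=VALUE lines, '#' comments and blanks ignored.
from pathlib import Path

def load_env(path: str) -> dict[str, str]:
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")  # split at the first '='
            env[key.strip()] = value.strip()
    return env
```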
To run the application using Gunicorn alone (this does not start the Celery workers):
gunicorn -c gunicorn.conf.py backend.main:app
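The repository ships a gunicorn.conf.py; an illustrative configuration for a FastAPI app might look like the following. All values here are assumptions, so check the actual file:

```python
# Illustrative gunicorn.conf.py -- the repository's real file may differ.
import multiprocessing

bind = "0.0.0.0:8080"                            # matches the documented port
workers = multiprocessing.cpu_count() * 2 + 1    # common sizing heuristic
worker_class = "uvicorn.workers.UvicornWorker"   # ASGI worker needed for FastAPI
timeout = 30
```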
Or use supervisord to manage processes (server and Celery workers):
supervisord -c supervisord.conf
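A supervisord.conf for this layout typically defines one program per process. The program names and the Celery app path below are assumptions, not the repository's actual config:

```ini
; Illustrative supervisord.conf -- program names and commands are assumptions.
[supervisord]
nodaemon=true

[program:gunicorn]
command=gunicorn -c gunicorn.conf.py backend.main:app
autorestart=true

[program:celery]
; The Celery app path is a guess based on the settings directory layout.
command=celery -A backend.settings worker --loglevel=info
autorestart=true
```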
For production, the app is containerized using Docker. To build and run the app with Docker Compose:
docker-compose up --build
The backend will be available at http://localhost:8080.
For local development, use the .devcontainer setup:
docker-compose -f .devcontainer/dev-docker-compose.yml up --build
This spins up the development environment with a PostgreSQL database. Once the containers are running, apply the database migrations:
alembic upgrade head
The production Docker setup uses supervisord to manage both Gunicorn and Celery workers.
To build and run the production setup:
docker build -t scanbandz-backend .
docker run -p 8080:8080 scanbandz-backend
.
├── backend
│ ├── apis # API endpoints
│ ├── assets # Email templates
│ ├── communication # Azure email client interfaces
│ ├── entities # Database entities (SQLAlchemy)
│ ├── exceptions # Custom exceptions
│ ├── migrations # Alembic migrations
│ ├── models # Pydantic models
│ ├── scripts # Utility DB scripts
│ ├── services # Business logic for various services
│ ├── settings # Configs, logging, and celery workers
├── alembic.ini # Alembic configuration file
├── Dockerfile # Production Dockerfile
├── docker-compose.yml # Docker Compose for production
├── requirements.txt # Python dependencies
├── supervisord.conf # Supervisor configuration
├── gunicorn.conf.py # Gunicorn configuration
└── .devcontainer # Development container configuration
This project is licensed under the GPL-3.0 License - see the LICENSE file for details.
Built with gunicorn, supervisord, and Docker.