# Python backend template: FastAPI + SQLAlchemy/Postgres + Celery/Redis

Application template to quick-start your API server. Fully Dockerized local development environment.

In dev mode (`$env=dev`) it runs `uvicorn` with live reload (sources are mounted into the container), and a Celery worker with live reload as well.
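For reference, live reload with uvicorn boils down to something like this minimal sketch (the `main:app` import string is an assumption; the template's Docker files define the real entry point):

```python
# dev_server.py -- minimal live-reload sketch; "main:app" is an assumed
# import string, not necessarily the template's real entry point.
import uvicorn

if __name__ == "__main__":
    uvicorn.run(
        "main:app",      # reload requires an import string, not an app object
        host="0.0.0.0",  # reachable from outside the container
        port=8000,
        reload=True,     # restart on changes in the mounted sources
    )
```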
## Quick start

```bash
. ./activate.sh
./build.sh
./up.sh
```
To check that everything is up, install `hurl` and run:

```bash
hurl docker/words.hurl
```

This gets an auth token from the backend running in Docker, sends a `words` request to the API, and checks the response (it should be the number of words in the file `docker/words.txt`).
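If you prefer Python over hurl, the same smoke check looks roughly like the sketch below (endpoint paths, form fields, and the port are assumptions; see `docker/words.hurl` for the real requests):

```python
# smoke_check.py -- rough Python equivalent of docker/words.hurl.
# Endpoint paths, form fields, and the port are assumptions; check
# docker/words.hurl for the real requests.
import requests

BASE = "http://localhost:8001"  # nginx proxy port (see below)

# 1. Get an auth token from the backend running in Docker.
resp = requests.post(BASE + "/token", data={"username": "user", "password": "pass"})
resp.raise_for_status()
token = resp.json()["access_token"]

# 2. Send the words request and check the response.
with open("docker/words.txt", "rb") as f:
    resp = requests.post(
        BASE + "/words",
        headers={"Authorization": f"Bearer {token}"},
        files={"file": f},
    )
resp.raise_for_status()
print(resp.json())  # should be the number of words in docker/words.txt
```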
## Local debugging

You can debug the backend and Celery task code locally, outside the containers. The backend and Celery worker will connect to the Postgres and Redis running in containers. For that you need the following in your `/etc/hosts`:

```
127.0.0.1 postgres
127.0.0.1 redis
```

See `docker/postgres/README.md`.
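With those entries in place, local code reaches the containers by the same hostnames used inside Docker, e.g. (credentials, ports, and DB name here are placeholders; see `docker/postgres/README.md` for the real ones):

```python
# local_debug.py -- sketch of connecting to the containerized services
# from the host; credentials, ports, and DB name are placeholders.
import redis
from sqlalchemy import create_engine, text

# "postgres" and "redis" resolve to 127.0.0.1 via /etc/hosts.
engine = create_engine("postgresql://postgres:postgres@postgres:5432/postgres")
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())

r = redis.Redis(host="redis", port=6379)
print(r.ping())
```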
## DB schema migration

```bash
# compare DB models with the current DB and create a DB upgrade script in alembic/versions
./alembic.sh revision --autogenerate -m "Schema changes."

# apply the script to the DB so that the DB schema reflects the DB models
./alembic.sh upgrade head
```
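The autogenerated script in `alembic/versions` will look roughly like this (revision IDs and the column change are invented for illustration):

```python
# alembic/versions/xxxx_schema_changes.py -- illustrative autogenerated
# revision; the IDs and the column below are invented for this example.
from alembic import op
import sqlalchemy as sa

revision = "a1b2c3d4e5f6"
down_revision = None
branch_labels = None
depends_on = None


def upgrade():
    # Generated from the diff between the DB models and the live schema.
    op.add_column("users", sa.Column("nickname", sa.String(), nullable=True))


def downgrade():
    op.drop_column("users", "nickname")
```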
## Tests

To run the tests you only need the Postgres container. We use `fakeredis` to emulate Redis and the `fastapi` test client to emulate the FastAPI server.

```bash
./up.sh postgres
./run.sh tests  # run all tests
./run.sh tests python -m pytest -v  # run tests verbosely
```
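Inside the tests this setup typically looks like the sketch below; the `main` and `get_redis` names are assumptions, match them to the template:

```python
# test_example.py -- sketch of the fakeredis + TestClient pattern.
# "main", "get_redis", and "/health" are assumed names.
import fakeredis
from fastapi.testclient import TestClient

from main import app, get_redis  # hypothetical import

# Replace the real Redis dependency with an in-process fake.
app.dependency_overrides[get_redis] = lambda: fakeredis.FakeRedis()

client = TestClient(app)  # emulates the FastAPI server, no uvicorn needed


def test_health():
    assert client.get("/health").status_code == 200
```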
In `/etc/hosts` we need:

```
127.0.0.1 postgres
```

In the folder `/backend` execute:

```bash
./test.sh -k token  # run tests with `token` in the test name
./test.sh -m='unittest and not slow'  # run all fast unit tests (locally)
./test.sh -m=benchmark  # run all tests that measure speed using pytest-benchmark
./test.sh --markers  # see all markers that can be used with the `-m` key
./test.sh --cov  # run tests with a coverage report
```
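These selections assume tests tagged with the corresponding markers, for example (a sketch; `words_count` is a hypothetical function under test):

```python
# test_markers.py -- sketch of tests tagged for the -m / -k selections
# above; words_count is a hypothetical function under test.
# Markers like `unittest` are registered in the project's pytest config
# (list them with ./test.sh --markers).
import pytest


def words_count(text: str) -> int:  # hypothetical function under test
    return len(text.split())


@pytest.mark.unittest
def test_words_count():
    assert words_count("two words") == 2


@pytest.mark.benchmark
def test_words_count_speed(benchmark):
    # pytest-benchmark times the wrapped call and returns its result.
    assert benchmark(words_count, "two words") == 2
```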
```bash
./up.sh backend
./test.sh --host 127.0.0.1
```

This starts the local server and runs the tests against it, skipping unit tests that cannot be run against an external server (marked as `unittest`).
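One common way to wire up such skipping is a `conftest.py` hook like this (the option definition and marker check are assumptions about the template's internals):

```python
# conftest.py -- sketch of skipping unittest-marked tests when running
# against an external server. The --host wiring is an assumption.
import pytest


def pytest_addoption(parser):
    parser.addoption("--host", action="store", default=None,
                     help="test an already-running external server")


def pytest_collection_modifyitems(config, items):
    if config.getoption("--host") is None:
        return  # in-process run: keep everything
    skip = pytest.mark.skip(reason="unit test; needs the in-process app")
    for item in items:
        if "unittest" in item.keywords:
            item.add_marker(skip)
```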
## Stress test

Run the tests in parallel in a loop, as a kind of stress test, with nginx as the proxy. In the folder `backend/` run:

```bash
./stress.sh
```
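`stress.sh` drives the test suite; for a quick ad-hoc load check against the proxy you could also do something like this sketch (URL, worker count, and request count are arbitrary illustrative choices):

```python
# stress_check.py -- illustrative parallel load check against the nginx
# proxy; URL, worker count, and request count are arbitrary choices.
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8001/docs"  # any cheap endpoint behind the proxy


def hit(_: int) -> int:
    return requests.get(URL, timeout=10).status_code


with ThreadPoolExecutor(max_workers=50) as pool:
    codes = list(pool.map(hit, range(500)))

print({code: codes.count(code) for code in set(codes)})  # status histogram
```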
Swagger UI is available at `localhost/docs` after the server starts (`./up.sh`).
The nginx proxy listens on port `8001`. Without nginx, the gunicorn server would drop a lot of incoming connections: in production mode there are only `<CPU number> + 1` workers, and in live-reload (`$env=dev`) mode just one. Nginx buffers requests, so your server can serve many parallel clients.
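For reference, that worker count typically comes from a gunicorn config along these lines (a sketch assuming the common `gunicorn.conf.py` layout; the template's actual config may differ):

```python
# gunicorn.conf.py -- sketch of the production worker setup described
# above; the worker class and bind address are assumptions.
import multiprocessing

# Only cpu_count() + 1 workers: without nginx buffering in front,
# bursts of parallel clients would overwhelm them.
workers = multiprocessing.cpu_count() + 1
worker_class = "uvicorn.workers.UvicornWorker"  # typical for FastAPI apps
bind = "0.0.0.0:8000"
```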