💎 An opinionated Angular - Sanic RESTful seed
BSD-3-Clause License
[![GitHub release][github-image]][github-url] [![Codacy Badge][codacy-image]][codacy-url] [![Maintenance][maintenance-image]][maintenance-url] [][donate-url]
This repo is a production-ready seed project. The app shows a list of users.

- `client` contains an Angular app, built with Angular CLI, with ngrx to handle state, Angular Material as a design library, a service worker, and AOT compilation. The app shows the users from the Sanic API.
- `server` contains a simple Sanic app that exposes an API of users. The Python app is served through a gunicorn server installed in the container.
- `postgres` is the service for the database. The `database` directory contains the automatic backup script.
- Logs go to `stdout` and can be collected through any service.
- Tests run on `travis-ci`, with code coverage analysis via `codecov`.
This repo is a production-ready app that uses `nginx` to serve static files (the client app and static files from the server) and `gunicorn` for the server (Python) side. All the parts run in separate Docker containers, and we use Kubernetes to manage them.

For automatic installation of the project with Docker, for development, run

$ docker-compose up

to build the Docker images and run them.
The `client` app is built via the Cloud Build CI on GCP and deployed to GCP Storage.
The `server` app is built via the Cloud Build CI as a Docker image and deployed to a `GKE` cluster on GCP (managed by Kubernetes).
The PostgreSQL `database` is built via the Cloud Build CI as a Docker image and deployed to a `GKE` cluster on GCP (managed by Kubernetes).
Deploy the `client` app:

- Create a storage bucket on GCP.
- Change `_REGION_NAME` to the location of the bucket you created in the previous step.
- Release the `client` app by creating a new tag in the `v0.0.1/prod/prod` format and pushing it to GitHub (`git push --tags`).
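The release-tag format can be sanity-checked before pushing. This is a small sketch assuming tags always follow the `v<major>.<minor>.<patch>/<env>/<env>` shape shown above; the helper name and the allowed environments are assumptions:

```python
import re

# Pattern for release tags like "v0.0.1/prod/prod" (assumed shape;
# "prod" and "staging" are hypothetical environment names).
TAG_RE = re.compile(r"^v\d+\.\d+\.\d+/(?:prod|staging)/(?:prod|staging)$")

def is_release_tag(tag: str) -> bool:
    """Return True if `tag` matches the v<semver>/<env>/<env> release format."""
    return TAG_RE.fullmatch(tag) is not None
```

A check like this could run in a pre-push hook so malformed tags never reach the CI trigger.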
Deploy the `server` app:

- Create a `GKE` cluster on GCP.
- Change `_REGION_NAME` to the location of the `GKE` cluster you created in step 1.
- Connect to the `GKE` cluster using `gcloud container clusters get-credentials prod` and then create a `tiller` service account using the commands:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade

- Apply the `helm` permissions by navigating to `server/kubernetes` in the command line and running `kubectl apply -f helm-permissions.yaml`.
- Release the `server` app by creating a new tag in the `v0.0.1/prod/prod` format and pushing it to GitHub (`git push --tags`).

Create a Cloud DNS record:
- An `A` record pointing to the Kubernetes cluster (the server); place there the load balancer IP address you got in the "Deploy the `server` app" step.
- A `CNAME` record pointing to our Storage bucket (the `client` app).
Tools we use
There are already tests for the `server` and the `client`; we are currently at over 90 percent coverage.
To run the `client` tests and lint, run the commands below in the `client` directory.
npm run lint
npm run test
To run the `server` tests and lint, run the commands below in the `server` directory.
pycodestyle --show-source --max-line-length=120 --show-pep8 .
python manage.py test
We also wrote some tests for load testing with locust; you can find them under `server/locustfile.py`.

To do a load test, install locust (it's in the `requirements.txt` file), go to the `server` directory, and run

locust --host=http://localhost

Then open up Locust's web interface at http://localhost:8089.
To update any of the containers that are in a service with a new image, create a new image, for example

docker build -t server:v2 .

and then update the service with the new image:

docker service update --image server:v2 prod_server
Each day a backup of the PostgreSQL database is created. The daily backups are rotated weekly, so at most 7 backup files will be in the daily directory at once.
Each Saturday morning a weekly backup is created in the weekly directory. The weekly backups are rotated on a 5-week cycle.
On the 1st of each month a monthly backup is created in the monthly directory. Monthly backups are NOT rotated.

The backups are stored in `/var/backups/postgres` on the host machine via a shared volume. This can be configured in the `docker-compose.yml`, in the `volumes` section of the `database` service.
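The daily rotation described above amounts to "keep the newest 7 files". A minimal sketch of that logic in Python; the `.sql.gz` extension and the helper name are assumptions about the backup script, not its actual contents:

```python
from pathlib import Path

def rotate_backups(directory: str, keep: int = 7) -> list[str]:
    """Delete the oldest backups so at most `keep` files remain.

    Returns the names of the removed files, oldest first.
    """
    # Sort by modification time so the oldest backups come first.
    backups = sorted(Path(directory).glob("*.sql.gz"),
                     key=lambda p: p.stat().st_mtime)
    removed = []
    for old in backups[:max(len(backups) - keep, 0)]:
        old.unlink()
        removed.append(old.name)
    return removed
```

The weekly 5-week cycle is the same idea with `keep=5` pointed at the weekly directory.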
Just fork and open a pull request (;
[donate-url]: https://www.paypal.me/nirgn/2