This is an example microservices architecture that uses gRPC for inter-service requests that need an immediate reply, and a fault-tolerant message queue (Kafka) for high availability.
Each microservice supports both gRPC communication and consumption through Kafka topics. If you need a response from a certain microservice, gRPC is recommended; if the task can be done asynchronously, do it through Kafka.
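As a sketch of the async side of this split, an event can be published to a Kafka topic with kafkajs. The topic name, client id, broker address, and envelope shape below are illustrative assumptions, not identifiers from this repository:

```javascript
// Build a Kafka message envelope for an async task (pure, so it is easy to test).
// The envelope shape is an assumed convention, not this project's schema.
function buildEvent(type, payload) {
  return {
    key: type,
    value: JSON.stringify({ type, payload, emittedAt: new Date().toISOString() }),
  };
}

// Publish an async task to Kafka with kafkajs (assumed broker address and topic).
async function publishAsyncTask() {
  const { Kafka } = await import("kafkajs"); // loaded lazily; requires kafkajs
  const kafka = new Kafka({ clientId: "public", brokers: ["localhost:9092"] });
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "subscription.events", // assumed topic name
    messages: [buildEvent("subscription.created", { userId: 42 })],
  });
  await producer.disconnect();
}

// For a synchronous reply, the caller would instead make a gRPC unary call
// (e.g. with @grpc/grpc-js) and await the response directly.
```

The envelope builder is kept pure so the message format can be unit-tested without a running broker.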
Run `npm install` in the root directory.
Run `docker-compose up` in the root directory.
Individual microservices live under `packages/` (for example, `packages/subscription`).
Public:
- `/metrics`: exposes memory and CPU consumption metrics for Prometheus.
- `/health`: endpoint for readiness and liveness probe checks.

gRPC microservices:
- `grpc.health.v1.Health`: standard gRPC health checks.

For further information regarding Kubernetes deployment, please look into the `k8s` folder.
Inter-microservice communication
All microservices hide stacktraces and error messages from the client by default.
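One common way to implement this behavior is to map internal errors to a generic client-facing body while keeping the details for server-side logs. The helper name and response shape below are assumptions for illustration, not this project's actual code:

```javascript
// Map an internal error to a client-safe response body, hiding the stacktrace
// and original message unless details are explicitly exposed (e.g. in dev).
function toClientError(err, exposeDetails = false) {
  if (exposeDetails) {
    return { statusCode: 500, message: err.message, stack: err.stack };
  }
  // Production default: no internals leak to the client.
  return { statusCode: 500, message: "Internal server error" };
}
```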
Rate limiting is enabled on the public microservice; the default configuration allows a maximum of 100 requests per minute. The service sends `x-ratelimit-limit`, `x-ratelimit-remaining`, `x-ratelimit-reset`, and `retry-after` headers to the client.

CORS configuration: Before using this code in production, make sure the CORS headers only allow requests from domains you own, so that unwanted requests from other browsers and websites are rejected.
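Both protections can be sketched as small pure helpers. The function names, state shape, and allowlist below are assumptions for illustration, not this project's actual implementation:

```javascript
// Compute the rate-limit response headers from the limiter's state.
// windowResetMs is the epoch time (ms) when the current window ends.
function rateLimitHeaders({ limit, used, windowResetMs }, nowMs = Date.now()) {
  const remaining = Math.max(0, limit - used);
  const resetSeconds = Math.max(0, Math.ceil((windowResetMs - nowMs) / 1000));
  const headers = {
    "x-ratelimit-limit": String(limit),
    "x-ratelimit-remaining": String(remaining),
    "x-ratelimit-reset": String(resetSeconds),
  };
  // Tell throttled clients how long to back off.
  if (remaining === 0) headers["retry-after"] = String(resetSeconds);
  return headers;
}

// Decide whether a request's Origin header is one of your own domains.
function corsOriginAllowed(origin, allowlist = ["https://example.com"]) {
  return allowlist.includes(origin);
}
```

Keeping these as pure functions makes the header math and the origin check easy to unit-test independently of the HTTP layer.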
You can access the OpenAPI 3 based documentation generated by the `public` microservice via its `docs` endpoint. The documentation site itself is a Next.js project which consumes the OpenAPI specification and creates a developer-friendly website.
If `npm install` fails with a peer dependency conflict, run `npm config set legacy-peer-deps true`. By default, running `npm install` triggers the `prepare-build.js` script, which handles this error for you.