A Scalable Distributed Microservices Application
GPL-3.0 License
Welcome to WorkUp, a scalable distributed microservices application designed to replicate the core functionalities of Upwork. This project leverages a suite of modern technologies to ensure blazingly fast performance, reliability, and scalability.
WorkUp is a microservices-based application that allows freelancers and clients to connect, collaborate, and complete projects, similar to Upwork. The system is designed with scalability and resilience in mind, using a range of technologies to handle high traffic and large data volumes efficiently.
For this project, RabbitMQ is used for asynchronous communication between components. Each microservice is assigned a worker queue that all of its instances listen to, but only one instance (the first available one) consumes each message and sends back a response if needed. In addition, a fanout exchange allows the controller to send configuration messages that are consumed by all running instances of a given microservice.
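The difference between the two patterns can be illustrated with a small in-memory simulation (the broker class and instance names here are illustrative, not the project's actual code; real RabbitMQ delivers worker-queue messages to whichever consumer is available, which this sketch approximates with round-robin):

```python
class ToyBroker:
    """In-memory stand-in for RabbitMQ, illustrating WorkUp's two patterns."""

    def __init__(self):
        self.worker_consumers = []   # all instances bound to the worker queue
        self.fanout_consumers = []   # all instances bound to the fanout exchange
        self._next = 0

    def publish_work(self, message):
        # Worker-queue semantics: exactly ONE instance consumes each message.
        consumer = self.worker_consumers[self._next % len(self.worker_consumers)]
        self._next += 1
        consumer(message)

    def broadcast_config(self, message):
        # Fanout-exchange semantics: EVERY bound instance receives a copy.
        for consumer in self.fanout_consumers:
            consumer(message)


broker = ToyBroker()
log = []
for name in ("jobs-1", "jobs-2"):  # two instances of a hypothetical jobs service
    broker.worker_consumers.append(lambda msg, n=name: log.append((n, msg["cmd"])))
    broker.fanout_consumers.append(lambda msg, n=name: log.append((n, msg["cmd"])))

broker.publish_work({"cmd": "createJob"})    # handled by a single instance
broker.broadcast_config({"cmd": "freeze"})   # handled by every instance
print(log)
```

Running this shows the work message reaching one instance while the configuration broadcast reaches both.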
The web server acts as the API gateway for the backend of the system. It exposes a REST endpoint for every command in the four microservices: it converts incoming HTTP requests into message objects that the designated microservice can consume, and once execution finishes, it receives the response as a message object, converts it into an HTTP response, and sends it back to the client. The web server also handles authentication: it checks the JWT sent with the request to determine whether the user is currently logged in, then extracts the user ID from the JWT and attaches it to the message sent to the worker queues.
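Extracting the user ID from a JWT at the gateway can be sketched as below. This is a simplified illustration: a real gateway must verify the token's signature before trusting any claim, and the `sub` claim name is an assumption, not confirmed by the project.

```python
import base64
import json


def extract_user_id(jwt_token: str) -> str:
    """Decode the JWT payload segment and pull out the user id.

    SKETCH ONLY: skips signature verification, which a real gateway
    must perform (e.g. HMAC-SHA256 with the shared secret).
    The 'sub' claim name is an assumption.
    """
    payload_b64 = jwt_token.split(".")[1]
    # JWTs use URL-safe base64 without padding; restore the padding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["sub"]


# Build an unsigned demo token for illustration only.
demo_payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "2d816b8f-592c-48c3-b66f-d7a1a4fd0c3a"}).encode()
).rstrip(b"=").decode()
token = f"header.{demo_payload}.signature"

# The gateway attaches the extracted id to the message sent to the worker queue.
message = {"command": "createProposal", "userId": extract_user_id(token)}
print(message["userId"])
```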
Welcome to the WorkUp API documentation. This API provides endpoints for managing jobs, proposals, contracts, payments, and user authentication.
All endpoints require authentication using a bearer token. Include the token in the Authorization header of your requests.
Authorization: Bearer <your_access_token>
Request Parameters
GET /api/v1/jobs/7ddda13b-8221-4766-983d-9068a6592eba
Authorization: Bearer <your_access_token>
Response
{
"id": "7ddda13b-8221-4766-983d-9068a6592eba",
"title": "Sample Job",
"description": "This is a sample job description.",
...
}
Description: Retrieves all proposals for a specific job.
Request Parameters
GET /api/v1/jobs/7ddda13b-8221-4766-983d-9068a6592eba/proposals
Authorization: Bearer <your_access_token>
Response
[
{
"id": "73fb1269-6e05-4756-93cc-947e10dac15e",
"job_id": "7ddda13b-8221-4766-983d-9068a6592eba",
"cover_letter": "Lorem ipsum dolor sit amet...",
...
},
...
]
Description: Retrieves details of a specific contract.
Request Parameters
GET /api/v1/contracts/702a6e9a-343b-4b98-a86b-0565ee6d8ea5
Authorization: Bearer <your_access_token>
Response
{
"id": "702a6e9a-343b-4b98-a86b-0565ee6d8ea5",
"client_id": "2d816b8f-592c-48c3-b66f-d7a1a4fd0c3a",
...
}
Description: Creates a new proposal for a specific job.
Request Body
{
"coverLetter": "I am interested in this job...",
"jobDuration": "LESS_THAN_A_MONTH",
"milestones": [
{
"description": "First milestone",
"amount": 500,
"dueDate": "2024-06-01"
},
...
]
}
Response
{
"id": "73fb1269-6e05-4756-93cc-947e10dac15e",
"job_id": "7ddda13b-8221-4766-983d-9068a6592eba",
...
}
Description: Retrieves details of a specific proposal.
Request Parameters
GET /api/v1/proposals/73fb1269-6e05-4756-93cc-947e10dac15e
Authorization: Bearer <your_access_token>
Response
{
"id": "73fb1269-6e05-4756-93cc-947e10dac15e",
"job_id": "7ddda13b-8221-4766-983d-9068a6592eba",
...
}
Description: Updates an existing proposal.
Request Parameters
Request Body
{
"coverLetter": "Updated cover letter...",
"jobDuration": "ONE_TO_THREE_MONTHS",
"milestones": [
{
"description": "Updated milestone",
"amount": 600,
"dueDate": "2024-06-15"
},
...
]
}
Response
{
"id": "73fb1269-6e05-4756-93cc-947e10dac15e",
"job_id": "7ddda13b-8221-4766-983d-9068a6592eba",
...
}
The controller provides a CLI that broadcasts configuration messages to all running instances of a given microservice. These messages reconfigure the services at runtime without redeploying any instance, achieving zero-downtime updates. The controller can send the following messages:
setMaxThreads: Sets the maximum size of the thread pool used by a microservice.
setMaxDbConnections: Sets the maximum number of database connections in the connection pool for a microservice. This applies only to the payments microservice, since it is the only one using PostgreSQL, which supports connection pooling.
freeze: Makes the microservice stop consuming new messages, finish executing any messages already received, and then release all its resources.
start: Restarts a frozen microservice.
deleteCommand: Deletes a command from a microservice at runtime, so the microservice no longer accepts requests of that type, without redeploying any instance.
updateCommand: Updates the logic performed by a command at runtime via bytecode manipulation, allowing updates with zero downtime.
addCommand: Re-adds a previously deleted command so the microservice accepts requests of that type again.
setLoggingLevel: Sets the level of logs that should be recorded (ERROR, INFO, etc.).
spawnMachine: Replicates the whole backend on a new machine (the web server, the four microservices, the messaging queue, etc.).
A media server serves the static resources. It uses the following environment variables:
UPLOAD_USER
UPLOAD_PASSWORD
DISCOGS_API_KEY
For all static resources, this server will attempt to return a relevant resource; if the resource does not exist, it will return a default 'placeholder' resource instead. This prevents clients from having no resource to display at all; clients can use this media server's 'describe' endpoint to learn which resources are available.
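The fallback behavior can be sketched as a small resolver function (the directory layout and the placeholder filename are assumptions for illustration):

```python
import tempfile
from pathlib import Path

PLACEHOLDER_NAME = "placeholder.png"  # hypothetical default resource name


def resolve(root: Path, group: str, filename: str) -> Path:
    """Return the requested resource path, or the group's placeholder if missing.

    Sketch only: assumes a static/<group>/<file> layout.
    """
    requested = root / group / filename
    if requested.is_file():
        return requested
    # Fall back so clients always have something to display.
    return root / group / PLACEHOLDER_NAME


# Demo with a temporary directory standing in for the static root.
root = Path(tempfile.mkdtemp())
(root / "icons").mkdir()
(root / "icons" / "accept.png").write_bytes(b"png-bytes")

print(resolve(root, "icons", "accept.png").name)   # existing file is served
print(resolve(root, "icons", "missing.png").name)  # missing file -> placeholder
```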
GET /static/icons/:icon.png
Returns an icon based on the filename. :icon can be any icon filename.
Example: return an icon with filename 'accept.png'.
curl --request GET \
--url http://path_to_server/static/icons/accept.png
GET /static/resume/:resume.pdf
Returns a resume based on the filename. :resume can be any resume filename.
Example: return a resume with filename 'resume.pdf'.
curl --request GET \
--url http://path_to_server/static/resume/resume.pdf
GET /describe
Returns a JSON representation of the media groups.
Example: return JSON containing all groups present.
curl --request GET \
--url http://localhost:8910/describe
{
"success": true,
"path": "/static/",
"groups": ["icons", "resume"]
}
GET /describe/:group
Returns a JSON representation of all the files currently present for a given group. :group can be any valid group.
Example: return JSON containing all the media resources for the exec resource group.
curl --request GET \
--url http://localhost:8910/describe/exec
{
"success": true,
"path": "/static/exec/",
"mimeType": "image/jpeg",
"files": []
}
Upload and convert media to any of the given static resource groups.
All upload routes are protected by basic HTTP auth. The credentials are defined by the ENV variables UPLOAD_USER and UPLOAD_PASSWORD.
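Basic HTTP auth simply sends `base64(user:password)` in the Authorization header; for example, the credentials `user:pass` encode to `dXNlcjpwYXNz`. A client can build the header like so:

```python
import base64


def basic_auth_header(user: str, password: str) -> str:
    """Build an HTTP Basic Authorization header value."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"


print(basic_auth_header("user", "pass"))  # Basic dXNlcjpwYXNz
```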
POST /upload/:group
POST a resource to a given group, assigning that resource a given filename. :group can be any valid group.
curl --location 'http://path-to-server/upload/resume/' \
--header 'Authorization: Basic dXNlcjpwYXNz' \
--form 'resource=@"/C:/Users/ibrah/Downloads/Ibrahim_Abou_Elenein_Resume.pdf"' \
--form 'filename="aboueleyes-resume-2"'
{
"success": true,
"path": "/static/resume/aboueleyes-resume-2.pdf"
}
A resource at http://path_to_server/static/resume/aboueleyes-resume-2.pdf will now be available.
We set up a tenant on DigitalOcean where we can create a new instance (Droplet) from the controller, as explained above, feeding startup scripts to the instance so that it joins the Docker Swarm formed by the other machines.
We can monitor the performance of every running container on each instance using Portainer, where we can manually run more containers of the same service and configure it to auto-scale when the load exceeds the threshold.
We tested the app in its hosted state and performed load testing on it.
This script uses Prometheus paired with cAdvisor metrics to determine CPU usage. It then uses a manager node to decide whether a service should be autoscaled, and another manager node to scale the service.
Currently, the project autoscales on CPU only: if CPU usage reaches 85%, the service scales up; if it falls to 25%, it scales down.
To enable autoscaling for a service, add the label swarm.autoscaler=true under the deploy key:
deploy:
  labels:
    - "swarm.autoscaler=true"
This is best paired with resource constraint limits, which also go under the deploy key.
deploy:
  resources:
    reservations:
      cpus: '0.25'
      memory: 512M
    limits:
      cpus: '0.50'
Setting | Value | Description
---|---|---
swarm.autoscaler | true | Required. Enables autoscaling for a service; anything other than true will not enable it.
swarm.autoscaler.minimum | Integer | Optional. The minimum number of replicas wanted for a service; the autoscaler will not downscale below this number.
swarm.autoscaler.maximum | Integer | Optional. The maximum number of replicas wanted for a service; the autoscaler will not scale up past this number.
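The scaling rule described above amounts to a small decision function. The sketch below is illustrative (the real autoscaler script works from Prometheus/cAdvisor metrics; function and parameter names are hypothetical):

```python
SCALE_UP_THRESHOLD = 0.85    # at 85% CPU, add a replica
SCALE_DOWN_THRESHOLD = 0.25  # at 25% CPU, remove a replica


def desired_replicas(cpu_usage: float, current: int,
                     minimum: int = 1, maximum: int = 10) -> int:
    """Return the replica count the autoscaler should converge to."""
    if cpu_usage >= SCALE_UP_THRESHOLD:
        # Never exceed swarm.autoscaler.maximum.
        return min(current + 1, maximum)
    if cpu_usage <= SCALE_DOWN_THRESHOLD:
        # Never drop below swarm.autoscaler.minimum.
        return max(current - 1, minimum)
    return current  # in the comfortable band: no change


print(desired_replicas(0.90, current=3))               # 4  (scale up)
print(desired_replicas(0.10, current=3))               # 2  (scale down)
print(desired_replicas(0.50, current=3))               # 3  (no change)
print(desired_replicas(0.95, current=10, maximum=10))  # 10 (capped at maximum)
```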
We use round-robin load balancing handled by Docker within the Swarm. It simply sends the first request to the first instance of the app (not the first machine), the next request to the next instance, and so on until every instance has received a request; then it starts again from the beginning.
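Round-robin dispatch across service replicas can be illustrated in a few lines (the replica names are hypothetical):

```python
from itertools import cycle

instances = ["web-1", "web-2", "web-3"]  # hypothetical replica names
next_instance = cycle(instances)         # endless round-robin iterator

# Eight incoming requests are spread evenly, wrapping back to the start.
assignments = [next(next_instance) for _ in range(8)]
print(assignments)
# ['web-1', 'web-2', 'web-3', 'web-1', 'web-2', 'web-3', 'web-1', 'web-2']
```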
To test the app's functionality, we created functional tests for each service, testing it in isolation as well as in integration with the other services; for example, here are the tests implemented for the Payments service.
We used JMeter to load test the app, configuring it to simulate thousands of user requests, with the number growing gradually over a span of 10 seconds. Here are a few examples of endpoint performance.
We also used Python Locust for load testing; check the test file here. Here are the results of this load test.
This project is licensed under the GNU General Public License v3.0. See the LICENSE file for more details.