A Terraform module to quickly deploy a secure, persistent, highly available, self-healing, efficient, cost-effective, and self-managed single-node or multi-node MongoDB NoSQL document database cluster on an AWS ECS cluster, with monitoring and alerting enabled.
APACHE-2.0 License
It uses `awsvpc` as the network mode and relies on AWS Route 53 DNS records instead of an AWS ELB, which saves cost on the networking side and makes the setup more secure. The AWS ECS services' task IPs are updated in the AWS Route 53 private hosted zone by a bootstrapping script that runs on each AWS EC2 instance node as user data.

The following resources must already exist before starting the deployment:
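The DNS-update step of the bootstrapping described above can be sketched roughly as follows. This is a hypothetical illustration, not the module's actual script: the record name, IP, and hosted zone ID are placeholders, and the real user data may differ.

```shell
#!/bin/sh
# Hypothetical sketch: build a Route 53 change batch that UPSERTs an A record
# pointing a node's DNS name at its task IP. Names and values are placeholders.

build_change_batch() {
  # $1 = record name, $2 = IP address
  cat <<EOF
{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"$1","Type":"A","TTL":60,"ResourceRecords":[{"Value":"$2"}]}}]}
EOF
}

# On a real node the IP would come from instance metadata, e.g.:
#   IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
# and the batch would be applied with:
#   aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" \
#     --change-batch "$(build_change_batch "$RECORD" "$IP")"

build_change_batch "staging-mongodb1.example.internal" "10.0.1.23"
```

Because the record is an UPSERT, re-running the script after a task restart simply repoints the existing DNS name at the new IP.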
- `/docker/[ENVIRONMENT_NAME]/MONGODB_USERNAME` under AWS SSM Parameter Store, containing the username of the MongoDB cluster.
- `/docker/[ENVIRONMENT_NAME]/MONGODB_PASSWORD` under AWS SSM Parameter Store, containing the password of the MongoDB cluster.
- `/docker/[ENVIRONMENT_NAME]/MONGODB_KEYFILE` under AWS SSM Parameter Store, containing the contents of a keyfile created locally for the MongoDB cluster with the following commands:

  ```shell
  openssl rand -base64 756 > mongodb.key
  chmod 400 mongodb.key
  ```

- `[PROJECT]-[ENVIRONMENT_NAME]-mongodb` under AWS EC2 Key Pairs.

Deploy it either directly from the `terraform` directory or as a Terraform module, specifying the desired values for the variables. See the `terraform-usage-example.tf` file for an example.
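The keyfile can be generated and sanity-checked locally before uploading it. The SSM upload shown in the comments is one possible way to store it (the parameter name uses a placeholder environment name), not a step the module performs for you:

```shell
# Generate the MongoDB keyfile exactly as described above.
openssl rand -base64 756 > mongodb.key
chmod 400 mongodb.key

# The keyfile could then be stored in SSM Parameter Store with something
# like the following (placeholder environment name "staging"):
#   aws ssm put-parameter --name "/docker/staging/MONGODB_KEYFILE" \
#     --type SecureString --value "file://mongodb.key"
```

All MongoDB nodes in the replica set must share this same keyfile, which is why it is stored centrally in Parameter Store rather than generated per node.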
Once the deployment is done, log into the MongoDB cluster via its first AWS EC2 instance node using AWS SSM Session Manager, with the following command after replacing `[USERNAME]`, `[PASSWORD]`, `[ENVIRONMENT_NAME]` and `[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]` in it:

```shell
mongosh "mongodb://[USERNAME]:[PASSWORD]@[ENVIRONMENT_NAME]-mongodb1.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017/admin?retryWrites=false"
```
Then initiate the replica set using the following command after replacing `[ENVIRONMENT_NAME]` and `[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]` in it:

```javascript
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "[ENVIRONMENT_NAME]-mongodb1.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" },
    { _id: 1, host: "[ENVIRONMENT_NAME]-mongodb2.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" },
    { _id: 2, host: "[ENVIRONMENT_NAME]-mongodb3.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" }
  ]
})
```
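For clusters with a node count other than three, the member list above follows the same pattern. This hypothetical helper (the environment name and zone below are placeholders) prints an `rs.initiate()` command for N nodes:

```shell
# Print an rs.initiate() command for an N-node cluster, following the
# naming pattern [ENV]-mongodbN.[ZONE]:27017 used by this module.
# $1 = environment name, $2 = hosted zone name, $3 = node count.
print_rs_initiate() {
  env_name=$1; zone=$2; count=$3
  printf 'rs.initiate({\n  _id: "rs0",\n  members: [\n'
  i=1
  while [ "$i" -le "$count" ]; do
    sep=","
    [ "$i" -eq "$count" ] && sep=""
    printf '    { _id: %d, host: "%s-mongodb%d.%s:27017" }%s\n' \
      "$((i - 1))" "$env_name" "$i" "$zone" "$sep"
    i=$((i + 1))
  done
  printf '  ]\n})\n'
}

print_rs_initiate "staging" "example.internal" 3
```

Note that member `_id`s start at 0 while the DNS node names start at 1, matching the example above.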
You can now connect to the replica set using the following command after replacing `[USERNAME]`, `[PASSWORD]`, `[ENVIRONMENT_NAME]` and `[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]` in it:

```shell
mongosh "mongodb://[USERNAME]:[PASSWORD]@[ENVIRONMENT_NAME]-mongodb1.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017,[ENVIRONMENT_NAME]-mongodb2.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017,[ENVIRONMENT_NAME]-mongodb3.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017/admin?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=true"
```
Note: The sample commands in the above example assume that the cluster has 3 nodes.
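The connection string above can be assembled for any node count with a small helper; this is an illustrative sketch (the credentials, environment name, and zone below are placeholders):

```shell
# Build the replica-set connection URI for an N-node cluster.
# $1 = username, $2 = password, $3 = environment name,
# $4 = hosted zone name, $5 = node count.
build_uri() {
  hosts=""
  i=1
  while [ "$i" -le "$5" ]; do
    [ -n "$hosts" ] && hosts="$hosts,"
    hosts="$hosts$3-mongodb$i.$4:27017"
    i=$((i + 1))
  done
  printf 'mongodb://%s:%s@%s/admin?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=true\n' \
    "$1" "$2" "$hosts"
}

build_uri "admin" "secret" "staging" "example.internal" 3
```

Listing every node in the URI lets the driver fail over to another member if the first host is unreachable, and `readPreference=secondaryPreferred` spreads reads across secondaries.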
If you lose the replica set, you can reconfigure it using the following commands after replacing `[ENVIRONMENT_NAME]` and `[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]` in them:

```javascript
rs.reconfig({
  _id: "rs0",
  members: [
    { _id: 0, host: "[ENVIRONMENT_NAME]-mongodb1.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" }
  ]
}, { "force": true })
rs.add({ _id: 1, host: "[ENVIRONMENT_NAME]-mongodb2.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" })
rs.add({ _id: 2, host: "[ENVIRONMENT_NAME]-mongodb3.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" })
```
Note: The sample commands in the above example assume that the cluster has 3 nodes.