k8s_aws_project

  • Use S3 Bucket: Store user files in an S3 bucket.
  • Use AWS RDS: Store user credentials in an AWS RDS instance.
  • Use AWS Secrets Manager: Store environment variables such as MYSQL_DATABASE, MYSQL_HOST, MYSQL_PASSWORD, and MYSQL_USER as key-value pairs. Storing them this way keeps them secure, especially when the code is placed under version control, e.g., on GitHub (see the sketch after this list).
  • Use Redis: Implement Redis master and replica containers for improved application performance and caching.
  • Use Ingress with Custom Domain: Expose the application through an Ingress resource bound to a custom domain.
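
As a sketch, the database settings can be stored as one JSON key-value document in Secrets Manager; the secret name nextcloud-db and all values here are hypothetical placeholders:

```bash
# Hypothetical secret name; the keys match the environment variables
# listed above, the values are placeholders.
aws secretsmanager create-secret \
  --name nextcloud-db \
  --secret-string '{
    "MYSQL_DATABASE": "nextcloud",
    "MYSQL_HOST": "<rds-endpoint>",
    "MYSQL_PASSWORD": "<password>",
    "MYSQL_USER": "nextcloud"
  }'
```
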
Integration with AWS Resources

Connecting the EKS cluster with AWS resources and ensuring proper permissions was an intriguing experience. It was satisfying to see these resources work seamlessly with my pods in the cluster.

Kubeconfig Update: To communicate with the cluster, update the ~/.kube/config file with the cluster configuration:
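
```bash
# Writes/updates the cluster entry in ~/.kube/config
aws eks update-kubeconfig --region <region> --name <cluster-name> --profile <profile>
```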

Install Ingress Controller: I chose the NGINX Ingress Controller and installed it via its Helm chart (a typical install command is sketched after this list). The chart creates the following objects in the ingress-nginx namespace:

  • A pod named ingress-nginx-controller (the controller itself).
  • A Service named ingress-nginx-controller of type LoadBalancer, which provides the public entry point.
  • A Service named ingress-nginx-controller-admission, which exposes the controller's admission webhook inside the cluster via a ClusterIP.
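
A typical install, assuming the community chart and the release name ingress-nginx:

```bash
# Add the chart repository and install the controller into its own namespace
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```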

Connect NGINX Controller: After confirming that the deployment and the Nextcloud application are exposed through a Service, I connected the NGINX controller to that Service with an Ingress manifest (the YAML file is in the GitHub repository). Routing is driven by rules that match the client request: for example, requests with host mydomain1.academy.cyberlab.rs are directed to a specific service on port 8080, as sketched below.
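
A sketch of such a rule, assuming the Nextcloud Service is named nextcloud and listens on port 8080:

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud                      # hypothetical name
spec:
  ingressClassName: nginx
  rules:
    - host: mydomain1.academy.cyberlab.rs
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextcloud        # assumed Service name
                port:
                  number: 8080
EOF
```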

  • Grant Permissions: Configure the cluster with permissions to access Route 53. I had a public domain (e.g., domain.ca) and, in my hosted zone, created an A record pointing the domain at the public address of my load balancer. I also set up a wildcard record *.domain.ca to cover all subdomains.
  • Configure Route 53 Access Policy: Using Terraform, I configured an AWS Route 53 access policy and attached it to a role. The policy includes permissions for route53:ListResourceRecordSets, route53:ListHostedZones, and route53:GetHostedZone (see the sketch after this list). This role is assumed by the service account within the Nextcloud deployment.
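
The project uses Terraform for this; an equivalent policy document, sketched here with the AWS CLI (the policy name route53-read is hypothetical):

```bash
aws iam create-policy \
  --policy-name route53-read \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "route53:ListResourceRecordSets",
        "route53:ListHostedZones",
        "route53:GetHostedZone"
      ],
      "Resource": "*"
    }]
  }'
```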

External Secrets Operator (ESO) Components: To pull the Secrets Manager entries into the cluster, I used the External Secrets Operator (ESO). Its main components:

  • SecretStore: Tells ESO how to connect to AWS Secrets Manager, including the access key ID and secret access key used to authenticate.
  • ExternalSecret: Defines which secrets to retrieve from the external store. I created an ExternalSecret object with a refresh interval and a reference to the SecretStore, so the secret from AWS is periodically re-fetched (a sketch of both objects follows this list).
  • Integration: These external variables are named in the values.yaml file and passed as environment variables in the deployment.
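
A minimal sketch of both objects; the object names, region, and the aws-credentials Secret holding the keys are assumptions:

```bash
kubectl apply -f - <<'EOF'
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secretsmanager             # hypothetical name
spec:
  provider:
    aws:
      service: SecretsManager
      region: <region>
      auth:
        secretRef:
          accessKeyIDSecretRef:
            name: aws-credentials      # assumed Secret with the access keys
            key: access-key-id
          secretAccessKeySecretRef:
            name: aws-credentials
            key: secret-access-key
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: nextcloud-db                   # hypothetical name
spec:
  refreshInterval: 1h                  # how often the secret is re-fetched
  secretStoreRef:
    name: aws-secretsmanager
    kind: SecretStore
  target:
    name: nextcloud-db                 # Kubernetes Secret that ESO creates
  dataFrom:
    - extract:
        key: nextcloud-db              # the Secrets Manager entry
EOF
```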

Redis Configuration: Redis runs with two pods (master and replica) and requires persistent volumes for caching. I set up two EBS volumes on AWS, configured PersistentVolumeClaims (PVCs) in the deployment to use them with the gp2 storage class, and attached the EBS CSI driver to the EKS cluster. A minimal PVC sketch follows below.

Troubleshooting: Initially, the EBS volumes were not used despite being bound to the PVCs. After inspecting the aws-auth ConfigMap and attaching the AmazonEC2FullAccess policy to the cluster's IAM role, the volumes began to work correctly; the Redis master and replica pods now cache data effectively.
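
A minimal sketch of one of the claims; the name and size are assumptions, and gp2 is EKS's default EBS-backed storage class:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data-master             # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce                   # an EBS volume attaches to one node at a time
  storageClassName: gp2
  resources:
    requests:
      storage: 8Gi                    # assumed size
EOF
```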