terraform-kubestack

Kubestack is a framework for Kubernetes platform engineering teams to define the entire cloud native stack in one Terraform code base and continuously evolve the platform safely through GitOps.

Apache-2.0 License

terraform-kubestack - v0.12.0-beta.0

Published by pst almost 4 years ago

  • AKS: Support configurable CNI #135 thanks @feend78
  • Modules provide a current_config output #140 (see the sketch after this list) thanks @feend78
  • GKE: Add support for private nodes #132 thanks @Spazzy757
  • Update the default nginx ingress version to v0.40.2-kbst.0 #143
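
The new current_config output can be referenced like any other Terraform module output. A minimal sketch, assuming a cluster module named eks_zero (the module name is an assumption):

output "eks_zero_current_config" {
  # sketch only: surfaces the current_config output of a cluster
  # module named eks_zero (the module name is an assumption)
  value = module.eks_zero.current_config
}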

Upgrade Notes

EKS

  • No EKS specific changes.

GKE

  • GKE upstream changed the default for new clusters to private nodes. Kubestack is following the new default with this release. Existing GKE clusters need to set enable_private_nodes = false to retain the previous configuration. Changing the private nodes setting requires recreating the cluster.
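
For existing clusters, this roughly translates to the following config.auto.tfvars sketch, assuming the quickstart layout with a gke_zero cluster key and an apps base environment (key names and nesting are assumptions):

# config.auto.tfvars (sketch only)
clusters = {
  gke_zero = {
    apps = {
      # [...]
      # retain the previous default of public nodes for existing clusters
      enable_private_nodes = false
    }
  }
}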

AKS

  • The AKS module by default now uses calico for network policies. Previously created AKS clusters have to set network_policy = null in config.auto.tfvars to retain the previous configuration. Changing the network_policy requires recreating the cluster.
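
In config.auto.tfvars this looks roughly as follows, again assuming the quickstart layout with an aks_zero cluster key and an apps base environment (key names and nesting are assumptions):

# config.auto.tfvars (sketch only)
clusters = {
  aks_zero = {
    apps = {
      # [...]
      # keep clusters created before v0.12.0 on the previous setting
      network_policy = null
    }
  }
}
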
terraform-kubestack - v0.11.0-beta.0

Published by pst about 4 years ago

Upgrade Notes

This release updates Terraform to v0.13.4. There are no other changes in this release. This is intentional and follows the upstream TF 0.13 upgrade instructions.

The module changes required by the upgrade are part of this release, and the Dockerfile has been updated to include the new Terraform version. As part of the TF 0.13 upgrade, HashiCorp released namespaced providers on the official registry. The kustomization provider has been published on the registry. To reflect this change in your state, you must run the following command manually before committing the change and running the CI/CD pipeline.

Manual step: terraform state replace-provider

  1. Update to version v0.11.0-beta.0 in clusters.tf and Dockerfile*
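
    The version is pinned through the ref in the module source in clusters.tf. A minimal sketch, assuming the quickstart's default module name and source path (both are assumptions and may differ in your repository):

    # clusters.tf (sketch only)
    module "eks_zero" {
      source = "github.com/kbst/terraform-kubestack//aws/cluster?ref=v0.11.0-beta.0"

      # [...]
    }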

  2. Run a shell in the container

    docker build -t kbst-infra-automation:bootstrap .
    
    docker run --rm -ti \
        -v `pwd`:/infra \
        kbst-infra-automation:bootstrap
    
  3. Make sure you are authenticated with your cloud provider

  4. Run the state replace-provider command

    terraform init
    
    terraform workspace select ops
    
    terraform state replace-provider registry.terraform.io/-/kustomization registry.terraform.io/kbst/kustomization
    
    terraform init
    
    terraform apply
    
  5. Commit, push and merge this change into master to validate it against the ops environment

  6. Once this has completed for ops, run the same commands for apps as well, and then tag the merge commit

    terraform init
    
    terraform workspace select apps
    
    terraform state replace-provider registry.terraform.io/-/kustomization registry.terraform.io/kbst/kustomization
    
    terraform init
    
    terraform apply
    
terraform-kubestack - v0.10.0-beta.0

Published by pst about 4 years ago

  • Adds cluster-local modules for the new development environment feature
  • Updates kind provider to v0.0.3
  • Adds disable_default_ingress support to kind module

Upgrade Notes

This release allows teams to update their existing repositories to the new local development environment feature. To make this straightforward, there are no changes to actual AKS, EKS or GKE clusters between this and the previous v0.9.4-beta.0 release.

AKS, EKS and GKE

To update existing repositories to support the local development environment, update the version in clusters.tf and Dockerfile to v0.10.0-beta.0. Additionally, you need to install the kbst CLI and get the Dockerfile.loc delivered with the latest starters.

The easiest way to do this is to follow the first tutorial step and then copy the Dockerfile.loc over from the temporary starter directory into your actual repository.

terraform-kubestack - v0.9.4-beta.0

Published by pst about 4 years ago

  • Transitional release for AKS to upgrade to the new Terraform provider and AKS APIs. Has breaking changes.

Upgrade Notes

EKS and GKE

There are no changes to EKS or GKE in this release. This release only affects AKS clusters.

AKS

There are two changes, required by upstream, that will destroy and re-create various resources. The azurerm provider starting with version v2.14.0 uses a different AKS API, which causes two issues:

  1. The new API requires instances with at least 2 CPU cores and 4 GB of memory. Changing the instance type forces a destroy and recreate.
  2. Load balancers created by Kubernetes use the Standard SKU, while previously created IPs are of the Basic SKU. Changing the SKU forces a new IP.

Upgrade by migrating workloads

To upgrade without downtime, users will need to create a second cluster with the new module version, migrate workloads and then remove the old cluster module from their configuration.

Upgrade with a maintenance window

Alternatively, users can upgrade existing clusters with the following steps. Following these steps destroys and recreates the cluster and the public IP, which is likely to cause disruptions to cluster ingress and stateful workloads. You can follow the documentation for adding a new cluster and then removing a cluster.

Steps to upgrade existing clusters:

  1. First upgrade to v0.9.3-beta.1 and disable the default ingress by setting disable_default_ingress = true in config.auto.tfvars. Then apply this change. This will destroy all ingress related resources; otherwise the next step will fail with a circular dependency.
  2. Now upgrade to v0.9.4-beta.0. Applying this will recreate the cluster if your current instance type does not meet the minimum requirements.
  3. Finally, re-enable the default ingress by setting disable_default_ingress = false or removing the attribute. Applying this will create a new IP and DNS resources for cluster ingress again.
terraform-kubestack - v0.9.3-beta.1

Published by pst about 4 years ago

  • Small fixup release renaming base_key to configuration_base_key for the custom environments feature.

Upgrade Notes

  1. Update the version in Dockerfile and clusters.tf to v0.9.3-beta.1.

There are no changes to clusters in this release.

terraform-kubestack - v0.9.3-beta.0

Published by pst about 4 years ago

  • Configuration inheritance now supports custom names for the Terraform workspaces instead of apps and ops, as well as more than two environments. This enables various alternative cluster-environment architectures. #127
  • Default ingress can now be disabled by setting disable_default_ingress = true in config.auto.tfvars. This excludes all ingress related infrastructure from being provisioned. Additionally, users may want to remove Nginx ingress from manifests/. #128
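
A minimal config.auto.tfvars sketch for opting out of the default ingress, assuming the quickstart layout (cluster and environment key names are assumptions):

# config.auto.tfvars (sketch only)
clusters = {
  gke_zero = {
    apps = {
      # [...]
      # do not provision the ingress related infrastructure
      disable_default_ingress = true
    }
  }
}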

Upgrade Notes

  1. Update the version in Dockerfile and clusters.tf to v0.9.3-beta.0.

There are no changes to clusters in this release.

terraform-kubestack - v0.9.2-beta.1

Published by pst about 4 years ago

  • Improve aws-auth configmap fix #120
  • Update Cloud provider CLI and Terraform provider versions #121

Upgrade Notes

  1. Update the version in Dockerfile and clusters.tf to v0.9.2-beta.1.

There are no changes to clusters in this release.

terraform-kubestack - v0.9.2-beta.0

Published by pst about 4 years ago

  • Update Kustomize provider to v0.2.0-beta.0 #117
    • Improved resiliency for a number of edge cases when creating and deleting K8s resources
  • AKS: Fix kube_dashboard flapping #118
  • EKS: Fix aws-auth configmap already exists #119

Upgrade Notes

  1. Update the version in Dockerfile and clusters.tf to v0.9.2-beta.0.

There are no changes to clusters in this release.

terraform-kubestack - v0.9.1-beta.0

Published by pst over 4 years ago

  • Update Kustomize and Kustomize provider

Upgrade Notes

  1. Update the version in Dockerfile and clusters.tf to v0.9.1-beta.0.

There are no changes to clusters in this release.

terraform-kubestack - v0.9.0-beta.0

Published by pst over 4 years ago

  • Updates dependency versions in Dockerfile including Terraform and provider CLIs
  • Updates Terraform provider versions in modules
  • GKE: Bump min_master_version to 1.16
  • EKS: Replace worker node aws_autoscaling_group with dedicated aws_eks_node_group resource.

Upgrade Notes

GKE and AKS

  1. Update the version in Dockerfile and clusters.tf to v0.9.0-beta.0.

EKS

To migrate from the previously used aws_autoscaling_group to the new dedicated aws_eks_node_group resource without interruptions to the cluster workloads, a transitional release is provided. The transitional release starts worker nodes using both the aws_autoscaling_group and the aws_eks_node_group, allowing you to cordon the old nodes before they are removed when you finally update to v0.9.0-beta.0.

  1. Update the version in Dockerfile and clusters.tf to the transitional release v0.8.1-beta.0.
  2. Apply the transitional release to your ops and apps environments.
  3. Manually cordon old nodes using kubectl and wait for workloads to be moved.
  4. Update the version in Dockerfile and clusters.tf to v0.9.0-beta.0.
terraform-kubestack - v0.8.0-beta.0

Published by pst over 4 years ago

  • Introduce local development environment using KinD.
  • Publish provider specific images in addition to the current multi-cloud default to reduce image size and speed up CI/CD runs for single cloud use-cases.
    • multi-cloud: kubestack/framework:v0.8.0-beta.0 (same tag as before)
    • AKS: kubestack/framework:v0.8.0-beta.0-aks
    • EKS: kubestack/framework:v0.8.0-beta.0-eks
    • GKE: kubestack/framework:v0.8.0-beta.0-gke
    • KinD: kubestack/framework:v0.8.0-beta.0-kind
  • AKS, EKS, GKE and KinD starters use the specific image variants.
  • Entrypoint now starts as root and then drops to a regular user to configure user and groups correctly. This removes the need for the previously used lib-nss-wrapper.
  • Updated default nginx ingress controller to v0.30.0-kbst.1 which adds an overlay for the local KinD clusters

Upgrade Notes

  1. Update the version in Dockerfile and clusters.tf. Optionally, switch Dockerfile to the provider specific variant.
  2. Remove the -u parameter from docker run commands in your pipeline and when running the container manually.
terraform-kubestack - v0.7.1-beta.0

Published by pst over 4 years ago

Speed up automation runs by not rebuilding the container image every time

Instead of building the entire image on every automation run, the Dockerfile in the repository now specifies an upstream image using FROM. This approach strikes a good balance between speeding up automation runs and keeping the ability to extend the container image with custom requirements, for example credential helpers or custom CAs to verify self-signed certificates.

Upgrade Notes

  1. Replace ci-cd/Dockerfile:

    echo "FROM kubestack/framework:v0.7.1-beta.0" >  ci-cd/Dockerfile
    
  2. Remove obsolete ci-cd/entrypoint:

    # before v0.7.0-beta.0 entrypoint was called nss-wrapper
    rm  ci-cd/entrypoint
    
terraform-kubestack - v0.7.0-beta.0

Published by pst over 4 years ago

  • Dockerfile is now Python 3.x based
  • EKS: Support root device encryption (thanks @cbek)
  • EKS: Use aws_eks_cluster_auth data source instead of previous shell script (thanks @darren-reddick)
  • Simplify default overlay layout and support custom layouts
  • Add authentication helper env vars KBST_AUTH_AWS, KBST_AUTH_AZ, KBST_AUTH_GCLOUD to Docker entrypoint to simplify automation.

Upgrade Notes

  1. Updated version in Dockerfile

    Update the Dockerfile to the one from this release to get the latest versions.

  2. Simplified overlay layout

    The overlay layout was simplified by removing the intermediate provider overlays (eks, aks and gke). When upgrading an existing repository, either adapt your overlay structure or override the default using the manifest_path cluster module attribute.

    Examples:

    # AKS example
    module "aks_zero" {
      # [...]
      manifest_path = "manifests/overlays/aks/${terraform.workspace}"
    }
    
    # EKS example
    module "eks_zero" {
      # [...]
      manifest_path = "manifests/overlays/eks/${terraform.workspace}"
    }
    
    # GKE example
    module "gke_zero" {
      # [...]
      manifest_path = "manifests/overlays/gke/${terraform.workspace}"
    }
    
terraform-kubestack - v0.6.0-beta.1

Published by pst over 4 years ago

  • Bugfix release for the kustomize provider included in v0.6.0-beta.0, which handled the GroupVersionKind to GroupVersionResource conversion incorrectly, resulting in not found errors for ingress and most likely other resource kinds.

Upgrade Notes

Update the Dockerfile to the one from this release to get the fixed provider version.

Then, please refer to the upgrade notes of v0.6.0-beta.0.

terraform-kubestack - v0.6.0-beta.0

Published by pst over 4 years ago

  • Replace the provisioner for kustomize and kubectl integration used until now with the new terraform-provider-kustomize

Upgrade Notes

Remember to update both the version of the module in clusters.tf as well as the Dockerfile under ci-cd/.

Cluster services (AKS, EKS, GKE)

Replacing the previous provisioner based approach with a Terraform provider to integrate Kustomize with Terraform allows each Kubernetes resource to be tracked individually in Terraform state. This integrates resources fully into the Terraform lifecycle including in-place or re-create updates and purging.

To migrate existing clusters without downtime, two manual steps are required to import Kubernetes resources for the new provider.

  1. Remove ingress-kbst-default namespace from TF state

    Previously, the ingress-kbst-default namespace was managed both by kustomize as well as Terraform. Now the namespace is only managed by the new terraform-provider-kustomize.

    To prevent deletion and re-creation of the namespace resource and the service type loadbalancer, which could cause downtime for applications, it's recommended to manually remove the namespace from Terraform state, so Terraform does not make any changes to it until it is reimported below.

  2. Import cluster service resources into TF state

    Finally, all Kubernetes resources from manifests/ need to be imported into Terraform state, otherwise the apply will fail with resource already exists errors.

After running the commands below, the Terraform apply of Kubestack version v0.6.0-beta.0 on a v0.5.0-beta.0 cluster will merely destroy the null_resource previously used to track changes to the manifests.

Migration instructions

The commands below work for clusters created using the quickstart. If your module is not called aks_zero, eks_zero or gke_zero, you need to adapt the commands accordingly. If you have additional resources, you need to import them as well. Remember to use single quotes ('') around resource names and IDs in the import command.

You can run the commands below in the bootstrap container:

# Build the bootstrap container
docker build -t kbst-infra-automation:bootstrap ci-cd/

# Exec into the bootstrap container
docker run --rm -ti \
    -v `pwd`:/infra \
    -u `id -u`:`id -g` \
    kbst-infra-automation:bootstrap

AKS
# remove the namespace resource from TF state
terraform state rm module.aks_zero.module.cluster.kubernetes_namespace.current
# import the kubernetes resources into TF state
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller"]' 'apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role"]' 'rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration"]' '~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|tcp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|tcp-services'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|udp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|udp-services'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_Namespace|~X|ingress-kbst-default"]' '~G_v1_Namespace|~X|ingress-kbst-default'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount"]' '~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount'

EKS
# remove the namespace resource from TF state
terraform state rm module.eks_zero.module.cluster.kubernetes_namespace.current
# import the kubernetes resources into TF state
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller"]' 'apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role"]' 'rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration"]' '~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|tcp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|tcp-services'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|udp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|udp-services'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_Namespace|~X|ingress-kbst-default"]' '~G_v1_Namespace|~X|ingress-kbst-default'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount"]' '~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount'

GKE
# remove the namespace resource from TF state
terraform state rm module.gke_zero.module.cluster.kubernetes_namespace.current
# import the kubernetes resources into TF state
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller"]' 'apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role"]' 'rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration"]' '~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|tcp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|tcp-services'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|udp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|udp-services'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_Namespace|~X|ingress-kbst-default"]' '~G_v1_Namespace|~X|ingress-kbst-default'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount"]' '~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount'
terraform-kubestack - v0.5.0-beta.0

Published by pst almost 5 years ago

  • EKS: Allow configuring mapRoles, mapUsers and mapAccounts.
    See #69 for usage details.
  • EKS: Add security groups to allow apiserver webhook communication.
  • Update versions of Terraform and used providers.
  • Update versions of cloud provider CLIs.
  • Update version of Kustomize and add apiVersion to kustomization files.

Thanks @youngnicks, @piotrszlenk and @cbek for contributions to this release.

Upgrade Notes

Cluster services (AKS, EKS, GKE)

The previous release included a version of the nginx ingress controller cluster service which had the version set both as a label and in the labelSelector. Since labelSelectors are immutable, applying the update to the deployment fails. This issue has since been fixed in the nginx ingress controller base; however, for existing clusters the deployment has to be recreated manually to update to this release.

AKS

Upstream has added support for multiple node pools. This was implemented by switching from AvailabilitySets to VirtualMachineScaleSets. This change is reflected in the azurerm Terraform provider by renaming the agent_pool_profile attribute to default_node_pool. This requires recreating AKS clusters. While backwards compatibility is an important goal for Kubestack, it would require a lot of complexity to support the upstream changes in Terraform which isn't justified for an early beta release.

To avoid a service disruption, consider creating a new cluster pair, migrating the workloads, and then destroying the previous cluster by temporarily loading both the old and the new module version in clusters.tf, as sketched below.
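
A rough clusters.tf sketch of running both module versions side by side during such a migration; the module names and source paths follow the quickstart convention and are assumptions:

# clusters.tf (sketch only)

# existing cluster pair, still pinned to the previous release
module "aks_zero" {
  source = "github.com/kbst/terraform-kubestack//azurerm/cluster?ref=v0.4.0-beta.0"

  # [...]
}

# new cluster pair created with this release
module "aks_one" {
  source = "github.com/kbst/terraform-kubestack//azurerm/cluster?ref=v0.5.0-beta.0"

  # [...]
}

# once the workloads are migrated, remove the aks_zero module
# block and apply again to destroy the old cluster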

terraform-kubestack - v0.4.0-beta.0

Published by pst over 5 years ago

  • No changes to clusters compared to v0.3.0-beta.0
  • Upgrades syntax to Terraform 0.12

Upgrade Notes

This release changes the upstream modules to the new Terraform 0.12 configuration language syntax. Likewise, repositories bootstrapped from the quickstart need to be updated as well. There are two small changes required.

  1. Change configuration variable syntax in clusters.tf like in the example below:
    -  configuration = "${var.clusters["eks_zero"]}"
    +  configuration = var.clusters["eks_zero"]
    
  2. Update the variable type definition in variables.tf like in the example below:
    -  type        = "map"
    +  type        = map(map(map(string)))
    

Last but not least, remember to upgrade the Terraform version in the Dockerfile. Depending on when you bootstrapped your repository, there may be additional changes in that Dockerfile worth copying.

terraform-kubestack - v0.3.0-beta.0

Published by pst over 5 years ago

  • GKE: Auto scaling - replace cluster default with separate node pool with auto scaling enabled
  • GKE and AWS: Add node pool modules in preparation for additional node pool support per cluster
  • GKE: Remove deprecated node_metadata feature and replace deprecated region with location parameter
  • Update versions of Terraform providers

Upgrade Notes

GKE

Temporarily set remove_default_node_pool = false in the cluster pair's config. Then apply once to spawn the new node pool. Once that's done, remove the variable again and apply a second time to remove the now obsolete previous node pool. Compare the diff below for an example of the configuration changes from v0.2.1-beta.0 to v0.3.0-beta.0 for autoscaling, including the temporary remove_default_node_pool = false.

It is recommended to manually cordon the nodes of the old node pool and wait for workloads to be migrated by K8s, before applying the second time and removing the default node pool.

$ git diff v0.2.1-beta.0 -- tests/config.auto.tfvars
diff --git a/tests/config.auto.tfvars b/tests/config.auto.tfvars
index 2dd773e..4bbdae6 100644
--- a/tests/config.auto.tfvars
+++ b/tests/config.auto.tfvars
@@ -25,14 +25,16 @@ clusters = {
       name_prefix                = "testing"
       base_domain                = "infra.serverwolken.de"
       cluster_min_master_version = "1.13.6"
-      cluster_initial_node_count = 1
+      cluster_min_node_count     = 1
+      cluster_max_node_count     = 3
       region                     = "europe-west1"
-      cluster_additional_zones   = "europe-west1-b,europe-west1-c,europe-west1-d"
+      cluster_node_locations     = "europe-west1-b,europe-west1-c,europe-west1-d"
+      remove_default_node_pool   = false
     }
 
     # Settings for Ops-cluster
     ops = {
-      cluster_additional_zones = "europe-west1-b"
+      cluster_node_locations = "europe-west1-b"
     }
   }

EKS

Manually move the autoscaling group and launch configurations in the state to reflect the new module hierarchy, as in the example below. After that, there should be no changes to apply when upgrading from v0.2.1-beta.0 to v0.3.0-beta.0.

kbst@298d3d14f141:/infra$ terraform state mv module.eks_zero.module.cluster.aws_autoscaling_group.nodes module.eks_zero.module.cluster.module.node_pool.aws_autoscaling_group.nodes
Moved module.eks_zero.module.cluster.aws_autoscaling_group.nodes to module.eks_zero.module.cluster.module.node_pool.aws_autoscaling_group.nodes

kbst@298d3d14f141:/infra$ terraform state mv module.eks_zero.module.cluster.aws_launch_configuration.nodes module.eks_zero.module.cluster.module.node_pool.aws_launch_configuration.nodes
Moved module.eks_zero.module.cluster.aws_launch_configuration.nodes to module.eks_zero.module.cluster.module.node_pool.aws_launch_configuration.nodes

AKS

No changes in this release.

terraform-kubestack - v0.2.1-beta.0

Published by pst over 5 years ago

  • Add support for localhost labs using Kubernetes in Docker (kind)
  • Fix AWS correctly assuming cross account roles also for aws-iam-authenticator
  • Bump GKE min_master_version to 1.13.6
terraform-kubestack - v0.2.0-beta.1

Published by pst over 5 years ago

  • Fix az command line by locking version and installing dependencies
  • Expose AKS variables