Kubestack is a framework for Kubernetes platform engineering teams to define the entire cloud native stack in one Terraform code base and continuously evolve the platform safely through GitOps.
Published by pst almost 4 years ago
Fix current_config output #140, thanks @feend78
Set enable_private_nodes = false in config.auto.tfvars to retain the previous configuration. Changing the private nodes setting requires recreating the cluster.
Set network_policy = null in config.auto.tfvars to retain the previous configuration. Changing the network_policy requires recreating the cluster.
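To retain the previous behavior, the settings in config.auto.tfvars look like this illustrative excerpt; the surrounding structure depends on your repository:
# config.auto.tfvars (illustrative excerpt)
enable_private_nodes = false
network_policy       = null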
Published by pst about 4 years ago
This release updates Terraform to v0.13.4. There are no other changes in this release. This is on purpose, following the upstream TF 0.13 upgrade instructions.
Changes required by the upgrade to modules are part of this release and the Dockerfile has been updated to include the new Terraform version. As part of the TF 0.13 upgrade, Hashicorp released namespaced providers on the official registry. The kustomization provider has been published on the registry. To reflect this change in your state, you need to run the following command manually, before committing the change and running the CI/CD pipeline.
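For reference, under TF 0.13 the namespaced provider source is declared like this; this snippet is illustrative, the Kubestack modules declare their own provider requirements, so you typically only need the state command below:
# illustrative TF 0.13 required_providers declaration
terraform {
  required_providers {
    kustomization = {
      source = "kbst/kustomization"
    }
  }
}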
1. Update to version v0.11.0-beta.0 in clusters.tf and Dockerfile*
2. Run a shell in the container:
docker build -t kbst-infra-automation:bootstrap .
docker run --rm -ti \
-v `pwd`:/infra \
kbst-infra-automation:bootstrap
3. Make sure you are authenticated with your cloud provider.
4. Run the state replace-provider command:
terraform init
terraform workspace select ops
terraform state replace-provider registry.terraform.io/-/kustomization registry.terraform.io/kbst/kustomization
terraform init
terraform apply
5. Commit, push and merge this change into master to validate it against the ops environment.
6. Once this has completed for ops, run the same commands also for apps and then tag the merge commit (see the tagging example after the commands below):
terraform init
terraform workspace select apps
terraform state replace-provider registry.terraform.io/-/kustomization registry.terraform.io/kbst/kustomization
terraform init
terraform apply
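Tagging the merge commit could, for example, look like this; the tag name is illustrative, use your repository's convention:
# illustrative tagging example
git checkout master
git pull
git tag v0.11.0-beta.0
git push origin v0.11.0-beta.0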
Published by pst about 4 years ago
This release allows teams to update their existing repositories to the new local development environment feature. To make this straightforward, there are no changes to actual AKS, EKS or GKE clusters between this and the previous v0.9.4-beta.0
release.
To update existing repositories to support the local development environment, update the version in clusters.tf and Dockerfile to v0.10.0-beta.0. Additionally, you need to install the kbst CLI and get the Dockerfile.loc delivered with the latest starters. The easiest way to do this is to follow the first tutorial step and then copy the Dockerfile.loc over from the temporary starter directory into your actual repository.
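For example, if the tutorial step left a starter in a sibling directory, copying could look like this; the starter directory name and paths are illustrative:
# adjust paths to where the starter was downloaded
cp ../kubestack-starter-gke/Dockerfile.loc ./Dockerfile.loc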
Published by pst about 4 years ago
There are no changes to EKS or GKE in this release. This release only affects AKS clusters.
There are two changes that will destroy and re-create various resources, required due to upstream changes. The azurerm provider starting with version v2.14.0 uses a different AKS API and this causes two issues:
1. The new API enforces minimum instance type requirements, which can force recreating the cluster.
2. New public IPs are created with the Standard SKU, while previously created IPs are of the Basic SKU. Changing the SKU forces a new IP.
To upgrade without downtime, users will need to create a second cluster with the new module version, migrate workloads and then remove the old cluster module from their configuration.
Alternatively, users can upgrade existing clusters with the following steps. Following these steps destroys and recreates the cluster and the public IP, and this is likely to cause disruptions to cluster ingress and stateful workloads. You can follow the documentation for adding a new cluster and then removing a cluster.
Steps to upgrade existing clusters:
1. Update to v0.9.3-beta.1 and disable the default ingress by setting disable_default_ingress = true in config.auto.tfvars (see the excerpt after this list). Then apply this change. This will destroy all Ingress related resources; otherwise the next step will fail with a circular dependency.
2. Update to v0.9.4-beta.0. Applying this will recreate the cluster if your current instance type does not meet the minimum requirements.
3. Re-enable the default ingress by setting disable_default_ingress = false or removing the attribute. Applying this will create a new IP and DNS resources for cluster ingress again.
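The change for step 1 looks like this illustrative excerpt; the surrounding cluster configuration structure depends on your repository:
# config.auto.tfvars (illustrative excerpt)
disable_default_ingress = true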
Published by pst about 4 years ago
Renamed base_key to configuration_base_key for the custom environments feature.
Update Dockerfile and clusters.tf to v0.9.3-beta.1.
There are no changes to clusters in this release.
Published by pst about 4 years ago
Support names other than apps and ops for the Terraform workspaces, as well as more than two environments. This enables various alternative cluster-environment architectures. #127
The default ingress can now be disabled by setting disable_default_ingress = true in config.auto.tfvars. This excludes all ingress related infrastructure from being provisioned. Additionally, users may want to remove Nginx ingress from manifests/. #128
Update Dockerfile and clusters.tf to v0.9.3-beta.0.
There are no changes to clusters in this release.
Published by pst about 4 years ago
aws-auth configmap fix #120
Update Dockerfile and clusters.tf to v0.9.2-beta.1.
There are no changes to clusters in this release.
Published by pst about 4 years ago
v0.2.0-beta.0 #117
Fix kube_dashboard flapping #118
Fix aws-auth configmap already exists #119
Update Dockerfile and clusters.tf to v0.9.2-beta.0.
There are no changes to clusters in this release.
Published by pst over 4 years ago
Update Dockerfile and clusters.tf to v0.9.1-beta.0.
There are no changes to clusters in this release.
Published by pst over 4 years ago
Replace aws_autoscaling_group with dedicated aws_eks_node_group resource.
Update Dockerfile and clusters.tf to v0.9.0-beta.0.
To migrate from the previously used aws_autoscaling_group to the new dedicated aws_eks_node_group resource without interruptions to the cluster workloads, a transitional release is provided. This transitional release will have worker nodes started using both the aws_autoscaling_group and the aws_eks_node_group, allowing you to cordon the old nodes before they are removed when finally updating to v0.9.0-beta.0.
1. Update Dockerfile and clusters.tf to the transitional release v0.8.1-beta.0.
2. Cordon the old nodes using kubectl and wait for workloads to be moved (see the sketch after this list).
3. Update Dockerfile and clusters.tf to v0.9.0-beta.0.
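A minimal sketch for step 2; the node name is illustrative and depends on your cluster:
# list nodes, then cordon each old node (name is illustrative)
kubectl get nodes
kubectl cordon ip-10-0-1-23.eu-west-1.compute.internal
# optionally drain to actively evict workloads
kubectl drain ip-10-0-1-23.eu-west-1.compute.internal --ignore-daemonsets --delete-local-data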
Published by pst over 4 years ago
This release provides provider specific container images:
kubestack/framework:v0.8.0-beta.0 (same tag as before)
kubestack/framework:v0.8.0-beta.0-aks
kubestack/framework:v0.8.0-beta.0-eks
kubestack/framework:v0.8.0-beta.0-gke
kubestack/framework:v0.8.0-beta.0-kind
The images no longer require lib-nss-wrapper.
Update Dockerfile and clusters.tf. Optionally, switch Dockerfile to the provider specific variant.
Remove the -u parameter from docker run commands in your pipeline and when running the container manually (see the example below).
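After the upgrade, running the container no longer needs the -u parameter, for example:
# no -u `id -u`:`id -g` parameter needed anymore
docker run --rm -ti \
  -v `pwd`:/infra \
  kbst-infra-automation:bootstrap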
Published by pst over 4 years ago
Speed up automation runs by not rebuilding the container image every time
Instead of building the entire image on every automation run, the Dockerfile in the repository now specifies an upstream image using FROM. This approach strikes a good balance between speeding up automation runs and keeping the ability to extend the container image with custom requirements, such as credential helpers or custom CAs to verify self-signed certificates.
Replace ci-cd/Dockerfile:
echo "FROM kubestack/framework:v0.7.1-beta.0" > ci-cd/Dockerfile
Remove obsolete ci-cd/entrypoint:
# before v0.7.0-beta.0 entrypoint was called nss-wrapper
rm ci-cd/entrypoint
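Extending the image could then look like this sketch; the custom CA is an illustrative example and assumes a Debian based image:
# ci-cd/Dockerfile - illustrative extension with a custom CA
FROM kubestack/framework:v0.7.1-beta.0
COPY my-ca.crt /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates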
Published by pst over 4 years ago
Use the aws_eks_cluster_auth data source instead of the previous shell script (thanks @darren-reddick)
Added KBST_AUTH_AWS, KBST_AUTH_AZ, KBST_AUTH_GCLOUD to the Docker entrypoint to simplify automation.
Updated version in Dockerfile
Update the Dockerfile to the one from this release to get the latest versions.
Simplified overlay layout
The overlay layout was simplified by removing the intermediate provider overlays (eks, aks and gke). When upgrading an existing repository, either adapt your overlay structure or overwrite the default using the manifest_path cluster module attribute.
Examples:
# AKS example
module "aks_zero" {
# [...]
manifest_path = "manifests/overlays/aks/${terraform.workspace}"
}
# EKS example
module "eks_zero" {
# [...]
manifest_path = "manifests/overlays/eks/${terraform.workspace}"
}
# GKE example
module "gke_zero" {
# [...]
manifest_path = "manifests/overlays/gke/${terraform.workspace}"
}
Published by pst over 4 years ago
Update the Dockerfile
to the one from this release to get the fixed provider version.
Then, please refer to the upgrade notes of v0.6.0-beta.0.
Published by pst over 4 years ago
This release replaces the kustomize and kubectl integration used until now with the new terraform-provider-kustomize.
Remember to update both the version of the module in clusters.tf as well as the Dockerfile under ci-cd/.
Replacing the previous provisioner based approach with a Terraform provider to integrate Kustomize with Terraform allows each Kubernetes resource to be tracked individually in Terraform state. This integrates resources fully into the Terraform lifecycle including in-place or re-create updates and purging.
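A minimal sketch of the provider's usage at the time, showing how each Kubernetes resource becomes an individual entry in state; the path is illustrative:
# data source builds the kustomization and exposes one ID per resource
data "kustomization" "current" {
  path = "manifests/overlays/${terraform.workspace}"
}
# one kustomization_resource per Kubernetes resource, tracked individually
resource "kustomization_resource" "current" {
  for_each = data.kustomization.current.ids
  manifest = data.kustomization.current.manifests[each.value]
}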
To migrate existing clusters without downtime, two manual steps are required to import Kubernetes resources for the new provider.
Remove ingress-kbst-default
namespace from TF state
Previously, the ingress-kbst-default namespace was managed both by kustomize and by Terraform. Now the namespace is only managed by the new terraform-provider-kustomize.
To prevent deletion and re-creation of the namespace resource and the service type LoadBalancer, which could cause downtime for applications, it's recommended to manually remove the namespace from Terraform state, so Terraform does not make any changes to it until it is reimported below.
Import cluster service resources into TF state
Finally, all Kubernetes resources from manifests/ need to be imported into Terraform state, otherwise the apply will fail with resource already exists errors.
After running the commands below, the Terraform apply of the Kubestack version v0.6.0-beta.0 on a v0.5.0-beta.0 cluster will merely destroy the null_resource from TF state previously used to track changes to the manifests.
The commands below work for clusters created using the quickstart. If your module is not called aks_zero, eks_zero or gke_zero, you need to adapt the commands. If you have additional resources, you need to import them accordingly. Remember to use single quotes '' around resource names and IDs in the import command.
You can run the commands below in the bootstrap container:
# Build the bootstrap container
docker build -t kbst-infra-automation:bootstrap ci-cd/
# Exec into the bootstrap container
docker run --rm -ti \
-v `pwd`:/infra \
-u `id -u`:`id -g` \
kbst-infra-automation:bootstrap
# AKS: remove the namespace resource from TF state
terraform state rm module.aks_zero.module.cluster.kubernetes_namespace.current
# import the kubernetes resources into TF state
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller"]' 'apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role"]' 'rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration"]' '~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|tcp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|tcp-services'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|udp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|udp-services'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_Namespace|~X|ingress-kbst-default"]' '~G_v1_Namespace|~X|ingress-kbst-default'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount"]' '~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount'
# EKS: remove the namespace resource from TF state
terraform state rm module.eks_zero.module.cluster.kubernetes_namespace.current
# import the kubernetes resources into TF state
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller"]' 'apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role"]' 'rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration"]' '~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|tcp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|tcp-services'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|udp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|udp-services'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_Namespace|~X|ingress-kbst-default"]' '~G_v1_Namespace|~X|ingress-kbst-default'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount"]' '~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount'
# GKE: remove the namespace resource from TF state
terraform state rm module.gke_zero.module.cluster.kubernetes_namespace.current
# import the kubernetes resources into TF state
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller"]' 'apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role"]' 'rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration"]' '~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|tcp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|tcp-services'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|udp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|udp-services'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_Namespace|~X|ingress-kbst-default"]' '~G_v1_Namespace|~X|ingress-kbst-default'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount"]' '~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount'
Published by pst almost 5 years ago
Added apiVersion to kustomization files.
Thanks @youngnicks, @piotrszlenk and @cbek for contributions to this release.
The previous release included a version of the nginx ingress controller cluster service which had the version set both as a label and as a labelSelector. Since labelSelectors are immutable, applying the update to the deployment fails. This issue has since been fixed in the nginx ingress controller base; however, for existing clusters, updating to this release requires recreating the deployment manually.
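Recreating the deployment manually could look like this; the namespace and deployment name follow the resources used in the import commands further up:
# delete the deployment; the next terraform apply recreates it
kubectl -n ingress-kbst-default delete deployment nginx-ingress-controller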
Upstream has added support for multiple node pools. This was implemented by switching from AvailabilitySets to VirtualMachineScaleSets. This change is reflected in the azurerm Terraform provider by renaming the agent_pool_profile attribute to default_node_pool. This requires recreating AKS clusters. While backwards compatibility is an important goal for Kubestack, supporting the upstream changes in Terraform would require a lot of complexity that isn't justified for an early beta release.
To avoid a service disruption, consider creating a new cluster pair, migrating the workloads, and then destroying the previous one by temporarily loading both the old and the new module version in clusters.tf.
Published by pst over 5 years ago
v0.3.0-beta.0
This release changes the upstream modules to the new Terraform 0.12 configuration language syntax. Likewise, repositories bootstrapped from the quickstart need to be updated as well. There are two small changes required.
Update clusters.tf like in the example below:
- configuration = "${var.clusters["eks_zero"]}"
+ configuration = var.clusters["eks_zero"]
Update variables.tf like in the example below:
- type = "map"
+ type = map(map(map(string)))
Last but not least, remember to upgrade the Terraform version in the Dockerfile. Depending on when you bootstrapped your repository, there may be additional changes in that Dockerfile worth copying.
Published by pst over 5 years ago
Temporarily set remove_default_node_pool = false in the cluster pair's config. Then apply once to spawn the new node pool. Once that's done, remove the variable again and apply a second time to remove the now obsolete previous node pool. Compare the diff below for an example of the configuration changes from v0.2.1-beta.0 to v0.3.0-beta.0 for autoscaling, including the temporary remove_default_node_pool = false.
It is recommended to manually cordon the nodes of the old node pool and wait for workloads to be migrated by K8s before applying the second time and removing the default node pool.
$ git diff v0.2.1-beta.0 -- tests/config.auto.tfvars
diff --git a/tests/config.auto.tfvars b/tests/config.auto.tfvars
index 2dd773e..4bbdae6 100644
--- a/tests/config.auto.tfvars
+++ b/tests/config.auto.tfvars
@@ -25,14 +25,16 @@ clusters = {
name_prefix = "testing"
base_domain = "infra.serverwolken.de"
cluster_min_master_version = "1.13.6"
- cluster_initial_node_count = 1
+ cluster_min_node_count = 1
+ cluster_max_node_count = 3
region = "europe-west1"
- cluster_additional_zones = "europe-west1-b,europe-west1-c,europe-west1-d"
+ cluster_node_locations = "europe-west1-b,europe-west1-c,europe-west1-d"
+ remove_default_node_pool = false
}
# Settings for Ops-cluster
ops = {
- cluster_additional_zones = "europe-west1-b"
+ cluster_node_locations = "europe-west1-b"
}
}
Manually move the autoscaling group and launch configuration in the state to reflect the new module hierarchy, like in the example below. After that, there should be no changes to apply when upgrading from v0.2.1-beta.0 to v0.3.0-beta.0.
kbst@298d3d14f141:/infra$ terraform state mv module.eks_zero.module.cluster.aws_autoscaling_group.nodes module.eks_zero.module.cluster.module.node_pool.aws_autoscaling_group.nodes
Moved module.eks_zero.module.cluster.aws_autoscaling_group.nodes to module.eks_zero.module.cluster.module.node_pool.aws_autoscaling_group.nodes
kbst@298d3d14f141:/infra$ terraform state mv module.eks_zero.module.cluster.aws_launch_configuration.nodes module.eks_zero.module.cluster.module.node_pool.aws_launch_configuration.nodes
Moved module.eks_zero.module.cluster.aws_launch_configuration.nodes to module.eks_zero.module.cluster.module.node_pool.aws_launch_configuration.nodes
No changes in this release.
Published by pst over 5 years ago
Updated min_master_version to 1.13.6
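If you set the version in your configuration, the corresponding attribute looks like this illustrative excerpt; the attribute name matches the diff further up:
# config.auto.tfvars (illustrative excerpt)
cluster_min_master_version = "1.13.6"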
Published by pst over 5 years ago