vCluster - Create fully functional virtual Kubernetes clusters - Each vcluster runs inside a namespace of the underlying k8s cluster. It's cheaper than creating separate full-blown clusters and it offers better multi-tenancy and isolation than regular namespaces.
APACHE-2.0 License
Published by FabianKramm over 2 years ago
vcluster now supports mapping services between host and virtual cluster. You can specify which services from the host cluster should be available inside the vcluster, and which services inside the vcluster should be synced with the host cluster. You can configure this in the helm chart via the new section mapServices:
mapServices:
  # Services that should get mapped from the
  # virtual cluster to the host cluster.
  # vcluster will make sure to sync the service
  # ip to the host cluster automatically as soon
  # as the service exists.
  fromVirtual:
    - from: my-virtual-namespace/my-virtual-service
      to: my-host-service
  # Same as fromVirtual, but instead sync services
  # from the host cluster into the virtual cluster.
  # If the namespace does not exist, vcluster will
  # also create the namespace for the service.
  fromHost:
    - from: my-host-namespace/my-host-service
      to: my-virtual-namespace/my-virtual-service
For more information, please take a look at the vcluster docs.
vcluster now supports creation with manifests that will be applied as soon as the vcluster has started. This can be useful to configure and deploy a virtual cluster with certain resources that are then deployed into the vcluster itself. You can configure these manifests inside the helm values:
init:
  manifests: |-
    apiVersion: v1
    kind: Service
    ...
    ---
    apiVersion: v1
    kind: ConfigMap
    ...
vcluster now supports running a scheduler inside the virtual cluster. This is especially useful if you need to label and taint nodes within the vcluster and do not want to label or taint the actual host nodes. The scheduler can be enabled via:
sync:
  nodes:
    enabled: true
    syncAllNodes: true # or use nodeSelector
    enableScheduler: true
This tells vcluster to start the scheduler inside the vcluster and to only sync pods that have a node assigned. For more information, please take a look at the vcluster docs.
Fixes:
- Fixed an issue with the vcluster version command
- The cluster-autoscaler.kubernetes.io/daemonset-pod annotation is now set on pods that belong to a DaemonSet inside the virtual cluster
- Fixed an issue where the default/kubernetes Endpoints object was referencing incorrect IPs in the k8s and eks flavors
- Fixed an issue with the isolation.namespace helm value
Published by FabianKramm over 2 years ago
Added a new flag --sync-service-selector to let vcluster sync service selectors instead of endpoints. This has the advantage that it requires fewer permissions than the default, but it does not work for services without a selector or for leader election endpoints. (#281)
Published by FabianKramm over 2 years ago
vcluster now includes the coredns manifests directly in the helm chart. If you are overriding the path /manifests/coredns inside the syncer with your custom manifests, you'll need to disable the coredns configmap via values.yaml:
coredns:
  enabled: false
Creating a secure multi-tenancy environment is hard. vcluster is already able to provide an isolated control plane in a Kubernetes cluster, but actual vcluster workload isolation is currently still up to the users themselves to figure out. With v0.7.0 we introduce a new vcluster feature that automatically creates common defaults for workload isolation; it can be enabled via the --isolate flag in vcluster create or through the helm value isolation.enabled: true. This feature imposes a couple of restrictions on vcluster workloads to make sure they do not break out of their virtual environment.
Please take a look at the isolated mode docs for more information.
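The helm-values form of enabling this, equivalent to the --isolate flag mentioned above, is a one-line values.yaml fragment:

```yaml
# values.yaml - enable the isolated mode defaults
isolation:
  enabled: true
```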
vcluster will now sync storage classes from the virtual cluster to the host cluster if sync of storage classes is enabled. This replaces the current behaviour where storage classes were only synced from host to virtual cluster.
We decided to replace the existing behaviour because creating storage classes is a valid use case as long as the CSI driver is installed within the host cluster, while certain parameters for the CSI driver should get changed through a storage class. It also makes sense to no longer sync created storage classes from the host cluster, as this is not required to schedule persistent volume claims and currently serves informational purposes only.
This is somewhat of a breaking change, as vclusters that currently have sync of storage classes enabled will behave differently moving forward: changes to the host cluster storage classes are not propagated anymore. However, migration should work as expected, as storage classes created within the vcluster that mirrored host cluster storage classes before will simply get created in the host cluster under a different name.
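As a sketch, enabling the new virtual-to-host storage class sync follows the same sync section pattern used elsewhere in the chart (the storageclasses key here is an assumption based on that pattern):

```yaml
sync:
  storageclasses:
    enabled: true
```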
The old behaviour can be restored by enabling the legacy storageclasses sync with:
sync:
  legacy-storageclasses:
    enabled: true
Added a new service account syncer that makes it possible to sync service accounts from the vcluster to the host cluster with certain annotations and labels. This is useful for features such as IAM Roles for Service Accounts, where the service account needs a certain annotation to give AWS permissions to a pod.
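A minimal sketch of the IAM Roles for Service Accounts use case: enable the syncer via values.yaml, then annotate the service account inside the vcluster so the annotation is carried to the synced host service account. The role ARN below is an illustrative placeholder, not taken from the release notes:

```yaml
# values.yaml - enable syncing of service accounts to the host cluster
sync:
  serviceaccounts:
    enabled: true
---
# Inside the vcluster: service account annotated for IRSA
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-role
```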
Fixes:
- Fixed an issue where syncing services with spec.loadBalancerSourceRanges was not possible
Published by FabianKramm over 2 years ago
Plugins are a feature to extend the capabilities of vcluster. They allow you to add custom functionality.
For more information, please take a look at the vcluster docs.
vcluster can now be paused and resumed. Pausing a vcluster temporarily scales down the vcluster and deletes all of its created workloads on the host cluster. This can be useful to save computing resources used by vcluster workloads in the host cluster.
For more information, please check out the vcluster docs.
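As a sketch, assuming pause and resume subcommands and placeholder vcluster/namespace names:

```shell
# Temporarily scale down the vcluster and delete its synced host workloads
vcluster pause my-vcluster -n my-vcluster

# Bring the vcluster control plane and its workloads back
vcluster resume my-vcluster -n my-vcluster
```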
Run commands against the vcluster in the same shell: vcluster now allows command execution with the vcluster context via vcluster connect. For example:
# Retrieve vcluster namespaces
vcluster connect test -n test -- kubectl get ns
# New shell with vcluster kube context
vcluster connect test -n test -- bash
vcluster is now able to automatically create service account tokens for generated kube configs, which allows you to easily create kube configs for other vcluster users that should not be cluster admins. For example:
# Create a kube config for a cluster viewer
vcluster connect my-vcluster -n my-vcluster --service-account viewer --cluster-role view
# OR: create a kube config for a cluster admin
vcluster connect my-vcluster -n my-vcluster --service-account admin --cluster-role cluster-admin
# OR: create a kube config that expires after an hour
vcluster connect my-vcluster -n my-vcluster --service-account viewer --cluster-role view --token-expiration 3600
This also makes it easier to use vcluster without ingresses that require SSL passthrough. For more information, please check out the vcluster access docs and vcluster ingress docs.
vcluster now supports syncing of volume snapshots between the host and virtual cluster, which can be enabled via a values.yaml:
sync:
  volumesnapshots:
    enabled: true
and then used via:
vcluster create ... -f values.yaml
vcluster now supports syncing of pod disruption budgets between the host and virtual cluster, which can be enabled via a values.yaml:
sync:
  poddisruptionbudgets:
    enabled: true
and then used via:
vcluster create ... -f values.yaml
.rbac.clusterRole.create
, .rbac.role.extended
- .rbac.clusterRole.create, .rbac.role.extended - both helm values will be removed in a future version of vcluster. Their function is replaced by the new .sync.* helm values, which ensure that the minimal necessary RBAC role and cluster role are created based on the resources that vcluster will sync.
- .rbac.role.create - this helm value will be removed in a future version of vcluster, and a minimal standard role will always be created.
- --create-cluster-role - this flag of the vcluster create CLI command is deprecated for the same reasons as the .rbac.clusterRole.create helm value, as described above.
- --insecure for vcluster connect to create a kube config with insecure-skip-tls-verify
- vcluster create can now use URLs as values for the -f flag
- vcluster get service-cidr to print the current cluster's service CIDR
- vcluster create / vcluster connect will now use a random local port to avoid port conflicts if no --local-port flag is specified
- -s shorthand for the global flag --silent
- --toleration flag to add tolerations automatically to each pod (#330, thanks @kuuji)
- The --sync flag can now be passed to the syncer multiple times, and all the values will be combined. Disabling sync of a certain resource with a --sync=-resource flag still takes precedence over any enabling --sync=resource flags that might follow.
- .sync.* values have been added to control which resources are synced and which permissions are given to vcluster via its RBAC role and cluster role. This way the RBAC permissions are controlled on a more granular level, and the old .rbac helm values are deprecated. Using the .sync.RESOURCE.enabled values is now the recommended way to enable or disable which resources are synced. See the docs for usage examples: https://www.vcluster.com/docs/architecture/synced-resources
- .sync.nodes.syncAllNodes, .sync.nodes.nodeSelector and .sync.nodes.syncNodeChanges values have been added for easier control of node syncing behavior via helm values and more precise RBAC permission control. See the docs for usage examples: https://www.vcluster.com/docs/architecture/nodes. Direct use of the --sync-all-nodes, --node-selector and --enforce-node-selector syncer args is not recommended because the associated RBAC permissions may be missing.
- Sync externalIPs & externalTrafficPolicy (thanks @log1cb0mb)
- Releases now include vcluster-images.txt, which holds all the images needed by vcluster. In addition, we include two scripts to download and push the needed images automatically.
which holds all the needed images by vcluster. In addition, we include two scripts to download and push the needed images automaticallyPublished by FabianKramm over 2 years ago
Plugins are a feature to extend the capabilities of vcluster. They allow you to add custom functionality, such as:
For more information, please take a look at the vcluster docs.
vcluster is now able to pause and resume. Pausing a vcluster temporarily scales it down and deletes all the workloads it created on the host cluster. This can be useful to save computing resources used by vcluster workloads in the host cluster.
For more information, please check out the vcluster docs.
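Pause and resume are exposed through the CLI; a short sketch (the vcluster name and namespace are placeholders):

```
# Temporarily scale the vcluster down and remove its workloads on the host
vcluster pause my-vcluster -n my-vcluster

# Later, scale it back up; vcluster recreates the synced workloads
vcluster resume my-vcluster -n my-vcluster
```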
vcluster connect can now execute a command with the vcluster kube context in the same shell. For example:
# Retrieve vcluster namespaces
vcluster connect test -n test -- kubectl get ns
# New shell with vcluster kube context
vcluster connect test -n test -- bash
vcluster is now able to automatically create service account tokens for generated kube configs, which allow you to easily create kube configs for other vcluster users that should not be cluster admin. For example:
# Create a kube config for a cluster viewer
vcluster connect my-vcluster -n my-vcluster --service-account viewer --cluster-role view
# OR: create a kube config for a cluster admin
vcluster connect my-vcluster -n my-vcluster --service-account admin --cluster-role cluster-admin
# OR: create a kube config that expires after an hour
vcluster connect my-vcluster -n my-vcluster --service-account viewer --cluster-role view --token-expiration 3600
This also makes it possible to use vcluster without ingresses that require SSL passthrough. For more information, please check out the vcluster access docs and vcluster ingress docs.
vcluster now supports syncing volume snapshots between the host and virtual cluster, which can be enabled via a values.yaml:
sync:
volumesnapshots:
enabled: true
and then used via:
vcluster create ... -f values.yaml
vcluster now supports syncing pod disruption budgets between the host and virtual cluster, which can be enabled via a values.yaml:
sync:
poddisruptionbudgets:
enabled: true
and then used via:
vcluster create ... -f values.yaml
Published by FabianKramm over 2 years ago
vcluster now supports syncing volume snapshots between the host and virtual cluster, which can be enabled via a values.yaml:
rbac:
clusterRole:
enabled: true
role:
extended: true
syncer:
extraArgs:
- --sync=volumesnapshots
and then used via:
vcluster create ... -f values.yaml
vcluster now supports syncing pod disruption budgets between the host and virtual cluster, which can be enabled via a values.yaml:
rbac:
role:
extended: true
syncer:
extraArgs:
- --sync=poddisruptionbudgets
and then used via:
vcluster create ... -f values.yaml
- --insecure for vcluster connect to create a kube config with insecure-skip-tls-verify
- vcluster create can now use URLs as values for the -f flag
- vcluster get service-cidr to print the current cluster's service CIDR
- vcluster create / vcluster connect will now use a random local port to avoid port conflicts if no --local-port flag is specified
- -s shorthand for the global flag --silent
- Releases now include vcluster-images.txt, which holds all the images needed by vcluster. In addition, we include two scripts to download and push the needed images automatically.
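Taken together, the CLI changes above can be used like this (the cluster name, namespace, and URL are placeholders; the flags and subcommands are the ones named in these notes):

```
# Skip TLS verification in the generated kube config
vcluster connect my-vcluster -n my-vcluster --insecure

# Print the service CIDR of the current host cluster
vcluster get service-cidr

# Pass a URL instead of a local file as the -f value
vcluster create my-vcluster -n my-vcluster -f https://example.com/values.yaml
```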