Kubernetes-based, scale-to-zero, request-driven compute
APACHE-2.0 License
Published by knative-prow-releaser-robot almost 3 years ago
Release Notes
Serving
Allow users to set container[*].securityContext.runAsGroup (#12003, @dprotaso)
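As a sketch, the newly allowed field sits in a Knative Service's pod template like any other securityContext entry (the service name, image, and group ID here are illustrative, not from the release notes):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                 # illustrative name
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go   # illustrative image
        securityContext:
          runAsGroup: 1000    # now accepted by the Serving webhook
```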
A new setting, mesh-compatibility-mode, in the networking config map allows an administrator to explicitly tell the Activator and Autoscaler to use Direct Pod IP (most efficient, but not compatible with mesh being enabled), Cluster IP (less efficient, but needed if mesh is enabled), or Autodetect (the current behaviour and the default: first attempt Direct Pod IP communication, then fall back to Cluster IP on a mesh-related error status code). (#11999, @julz)
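A minimal sketch of the setting in the networking config map; the value names "auto", "enabled", and "disabled" are an assumption inferred from the description above, so verify them against your installed release:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network        # the networking config map
  namespace: knative-serving
data:
  # "auto": try Direct Pod IP first, fall back to Cluster IP on a
  # mesh-related error (the default). Assumed alternatives: "enabled"
  # (always Cluster IP) and "disabled" (always Direct Pod IP).
  mesh-compatibility-mode: "auto"
```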
Published by knative-prow-releaser-robot almost 3 years ago
Related issue: https://github.com/knative/serving/issues/11448
Our webhook parser no longer rejects unknown fields in an object's metadata. New fields introduced in Kubernetes 1.22 caused Knative's webhook to reject certain operations.
Related issue: knative/networking#448
As part of our efforts toward GA/1.0 we've standardized the naming of the networking plugins installed alongside Serving. If you're managing your Knative deployment manually with kubectl, this requires a two-phase upgrade process. To upgrade net-kourier to v0.25.0 using kubectl, follow these steps:
# Apply the new release
$ kubectl apply -f net-kourier.yaml
# Once the deployment is ready apply the same file but
# prune the old resources
$ kubectl apply -f net-kourier.yaml \
--prune -l networking.knative.dev/ingress-provider=kourier
The namespace label networking.internal.knative.dev/disableWildcardCert has been deprecated since the v0.15.0 release in favour of networking.knative.dev/disableWildcardCert. We have dropped support for this legacy label. (#11626, @nak3)
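If a namespace still carries the legacy label, switch it to the supported one; a sketch (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace   # illustrative
  labels:
    # old key networking.internal.knative.dev/disableWildcardCert is no longer honoured
    networking.knative.dev/disableWildcardCert: "true"
```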
- hpa.autoscaling.knative.dev (#11668, @zhaojizhuang)
- app.kubernetes.io/name labels are now added to resources. They will replace the app labels in the future. (#11655, @upodroid)
- Containers[*].securityContext.runAsNonRoot can be set to true without a feature flag. (#11606, @senthilnathan)
- spec.template.spec.automountServiceAccountToken can be set to false in a PodSpec in order to opt out of Kubernetes' default behaviour of mounting a ServiceAccount token in that Pod's containers. (#11723, @psschwei)
- ENABLE_HTTP2_AUTO_DETECTION is set to false by default if the feature is not enabled. (#11760, @psschwei)
Published by knative-prow-releaser-robot almost 3 years ago
Related issue: knative/networking#448
As part of our efforts toward GA/1.0 we've standardized the naming of the networking plugins installed alongside Serving. If you're managing your Knative deployment manually with kubectl, this requires a two-phase upgrade process. Please see the sections below:
net-http01:
# Apply the new release
$ kubectl apply -f net-http01.yaml
# Once the deployment is ready delete the old resources
$ kubectl delete deployment http01-controller -n knative-serving
$ kubectl delete service challenger -n knative-serving
net-certmanager:
# Apply the new release
$ kubectl apply -f net-certmanager.yaml
# Once the deployment is ready apply the same file but
# prune the old resources
$ kubectl apply -f net-certmanager.yaml \
--prune -l networking.knative.dev/certificate-provider=cert-manager
net-istio:
# Apply the new release
$ kubectl apply -f net-istio.yaml
# Once the deployment is ready apply the same file but
# prune the old resources
$ kubectl apply -f net-istio.yaml \
--prune -l networking.knative.dev/ingress-provider=istio
net-contour:
# Apply the new release
$ kubectl apply -f net-contour.yaml
# Once the deployment is ready apply the same file but
# prune the old resources
$ kubectl apply -f net-contour.yaml -f contour.yaml \
--prune -l networking.knative.dev/ingress-provider=contour
serving-nscert:
# Apply the new release
$ kubectl apply -f serving-nscert.yaml
# Once the deployment is ready apply the same file but
# prune the old resources
$ kubectl apply -f serving-nscert.yaml \
--prune -l networking.knative.dev/wildcard-certificate-provider=nscert
At this point we've deferred the renaming of net-kourier until the next release, to ensure there is no traffic disruption as part of the upgrade. Thus upgrading net-kourier to v0.24.0 requires no special instructions.
As part of our Kubernetes Minimum Version Principle we now have a hard requirement on Kubernetes Version 1.19.
The recommended way to delete a Knative installation is to run kubectl delete -f serving-core.yaml along with any other release YAMLs you may have applied. There's been a misconception that deleting the knative-serving namespace performs a similar cleanup, but this does not remove cluster-scoped resources. In prior releases the leftover cluster state would have prevented reinstalling Knative Serving. We've addressed this problem, but it requires some additional RBAC permissions on namespaces & finalizers.
Please see the relevant issues & PRs:
- The knative-serving-core cluster role now requires permission for namespaces/finalizers. (#11517, @nak3)
- DomainMapping is built into the main serving-core yaml by default. It is still possible to opt out of the feature by setting the replica count of the domainmapping-controller to zero. As part of this transition the default value for autocreateClusterDomainClaims in the config-network config map has been changed to false, meaning cluster-wide permissions are required to delegate the ability to create particular DomainMappings to namespaces. Single-tenant clusters may wish to allow arbitrary users to create DomainMappings by changing this value back to true. (#11573, @julz)
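For a single-tenant cluster, restoring the old behaviour is a one-line config-map change; the key spelling is taken as given above, so verify it against the _example section of your installed config map:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  # let any namespace create DomainMappings without a pre-created
  # ClusterDomainClaim (the previous default)
  autocreateClusterDomainClaims: "true"
```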
- defaultExternalScheme can now be used to default routes to surface a URL scheme of your choice rather than the default "http". (#11480, @markusthoemmes)
- config-kourier, a new config map, with the initial enable-service-access-logging setting. (net-kourier#523, @markusthoemmes)
- A check ensures the _example section of the features configmap is not accidentally modified. (#11391, @julz)
Published by knative-prow-releaser-robot almost 3 years ago
The per-namespace wildcard certificate provisioner has been integrated into the base controllers
and is now controlled by the namespace-wildcard-cert-selector field. This field allows you
to use a Kubernetes LabelSelector to choose which namespaces should have certificates
provisioned.
To migrate existing usage of the serving-nscert controller, do the following:
Set the namespace-wildcard-cert-selector to the value:
matchExpressions:
- key: "networking.knative.dev/disableWildcardCert"
operator: "NotIn"
values: ["true"]
Remove the Deployment, Service and ClusterRole defined by the serving-nscert.yaml resources
in the previous release. (#12174, @evankanderson)
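Put together, the migration value above lives in the config-network config map roughly like this (a sketch reproducing the old serving-nscert behaviour):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  # provision wildcard certificates in every namespace that is NOT
  # labelled networking.knative.dev/disableWildcardCert=true
  namespace-wildcard-cert-selector: |
    matchExpressions:
    - key: "networking.knative.dev/disableWildcardCert"
      operator: "NotIn"
      values: ["true"]
```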
Nothing has changed.
Nothing has changed.
Published by knative-prow-releaser-robot almost 3 years ago
Nothing has changed.
Nothing has changed.
Published by knative-prow-releaser-robot about 3 years ago
Nothing has changed.
Published by knative-prow-releaser-robot about 3 years ago
Published by knative-prow-releaser-robot about 3 years ago
Nothing has changed.
Published by knative-prow-releaser-robot about 3 years ago
Published by knative-prow-releaser-robot about 3 years ago
- autocreateClusterDomainClaims flag added to the network config map. (networking#330, @julz)
Nothing has changed.
Nothing has changed.
Published by knative-prow-releaser-robot over 3 years ago
Nothing has changed.
Published by knative-prow-releaser-robot over 3 years ago
- {name}.{namespace}.svc.{cluster-suffix}. (#10210, @julz)
- serving.knative.dev/domainmapping label for Ingress generated by DomainMapping. (#10370, @nak3)
- PROFILING_PORT (knative/pkg#1950, @mattmoor)
- rolloutDuration entry in the config-network configmap. When positive, this setting moves traffic gradually from the previous to the current revision over this period of time. It can handle several rollouts at the same time in three dimensions:
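A sketch of the new entry; the duration unit (seconds) and the zero default are assumptions, so check the configmap's _example block in your release:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  # shift traffic from the previous to the current revision gradually
  # over this period; assumed: "0" (the default) keeps rollouts instantaneous
  rolloutDuration: "120"   # assumed to be seconds
```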
Published by knative-prow-releaser-robot over 3 years ago
Nothing has changed.
Nothing has changed.
Published by knative-prow-releaser-robot over 3 years ago
Nothing has changed.
Published by knative-prow-releaser-robot over 3 years ago
- ReadOnlyRootFilesystem on the container's SecurityContext (#10560, @senthilnathan)
- FailureThreshold & TimeoutSeconds are now defaulted to 3 and 1 respectively when a user opts into non-aggressive probing (i.e. PeriodSeconds > 1) (#10700, @shinigambit)
- serving.knative.dev/rolloutDuration annotation (#10561, @vagababov et al.)
Published by knative-prow-releaser-robot over 3 years ago
Nothing has changed.
Nothing has changed.