Published by jnoller over 5 years ago
This release is currently rolling out to all regions
New Features
Bug Fixes
klog
Preview Features
Published by jnoller over 5 years ago
This release is rolling out to all regions
Kubernetes 1.14 is now in Preview
aks-preview CLI
New Features
Bug fixes
--network-plugin=azure with Azure CNI / Advanced Networking
Preview Features
kubectl delete -f https://github.com/Azure/aks-engine/raw/master/docs/topics/calico-3.3.1-cleanup-after-upgrade.yaml
Published by jnoller over 5 years ago
Kubernetes 1.13 is GA
The Kubernetes 1.9.x releases are now deprecated. All clusters
on version 1.9 must be upgraded to a later release (1.10, 1.11, 1.12, 1.13)
within 30 days. Clusters still on 1.9.x after 30 days (2019-05-25)
will no longer be supported.
(Region) North Central US is now available
(Region) Japan West is now available
New Features
Bug fixes
AKS now validates that maxPods (or maxPods * vm_count) is greater than the number of managed add-on pods.
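This validation amounts to a simple capacity rule: the node pool must have room for the managed add-on pods. A minimal sketch of that rule (the function and parameter names are illustrative, not AKS's actual implementation):

```python
def max_pods_sufficient(max_pods: int, vm_count: int, addon_pod_count: int) -> bool:
    """Cluster-wide pod capacity (maxPods per node times node count)
    must exceed the number of managed add-on pods AKS schedules."""
    return max_pods * vm_count > addon_pod_count
```

For example, a single node with maxPods=4 cannot host a cluster whose managed add-ons alone need 10 pods, and cluster creation would be rejected.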
Behavioral Changes
* AKS cluster creation now properly pre-checks the assigned service CIDR
range to block against possible conflicts with the dns-service CIDR.
* As an example, a user could use 10.2.0.1/24 instead of 10.2.0.0/24 which
would lead to IP conflicts. This is now validated/checked and if there is
a conflict, a clear error is returned.
* AKS now correctly blocks users who accidentally attempt an upgrade
  to a previous release (i.e., a downgrade).
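The service-CIDR pre-check described above can be sketched with Python's ipaddress module. This is only an illustration of the validation behavior, not the service's actual code:

```python
import ipaddress

def validate_service_cidr(cidr: str) -> ipaddress.IPv4Network:
    """Reject CIDRs with host bits set (e.g. 10.2.0.1/24 instead of
    10.2.0.0/24) and return the parsed network otherwise."""
    # ip_network() is strict by default: host bits raise ValueError.
    return ipaddress.ip_network(cidr)

def conflicts(service_cidr: str, dns_cidr: str) -> bool:
    """True when the two ranges overlap (the conflict AKS now blocks)."""
    return ipaddress.ip_network(service_cidr).overlaps(
        ipaddress.ip_network(dns_cidr))
```

With this kind of check, a malformed range like 10.2.0.1/24 is rejected up front with a clear error rather than surfacing later as an IP conflict.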
Preview Features
Component Updates
Published by jnoller over 5 years ago
This release is rolling out to all regions
An issue where users could not exec into containers, get cluster logs (kubectl get logs), or otherwise pass required health checks has been resolved.
An issue where running az aks get-credentials while a cluster is in creation resulted in an unclear error ('Could not find role name') has been resolved.
Published by jnoller over 5 years ago
This release fixes one AKS product regression and an issue identified with the Azure Jenkins plugin.
Published by jnoller over 5 years ago
Published by jnoller over 5 years ago
The following regions are now GA: South Central US, Korea Central and Korea South
Bug fixes
Behavioral Changes
Published by jnoller over 5 years ago
This release is actively rolling out to all regions
The Central India region is now GA
Known Issues
Bug fixes
failed state
VMs/worker nodes
tenant-id is now correctly defaulted if not passed in for AAD enabled clusters.
Behavioral Changes
terminated-pod-gc-threshold has been lowered to 6000 (previously 12500)
The "View Kubernetes Dashboard" link has been removed from the Azure Portal
Published by jnoller over 5 years ago
The Azure Monitor for containers Agent has been updated to 3.0.0-4 for newly built or upgraded clusters
The Azure CLI now properly defaults to N-1 for Kubernetes versions; for example, if the current latest release (N) is 1.12, the CLI will correctly pick 1.11.x. When 1.13 is released, the default will move to 1.12.
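The N-1 defaulting rule can be illustrated as follows (a sketch assuming plain major.minor.patch version strings; not the CLI's actual implementation):

```python
def default_version(available: list) -> str:
    """Pick the highest patch release of the second-newest (N-1) minor
    version from the list of available Kubernetes versions."""
    parsed = sorted(tuple(map(int, v.split("."))) for v in available)
    minors = sorted({(major, minor) for major, minor, _patch in parsed})
    target = minors[-2]  # N-1 minor version
    best = max(p for p in parsed if (p[0], p[1]) == target)
    return ".".join(map(str, best))
```

Given 1.12.x as the latest available minor, this picks the newest 1.11.x patch as the default.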
Bug Fixes:
The cachingmode: ReadOnly flag was not always being correctly applied to the managed premium storage class; this has been resolved.
The preview feature for Calico/Network Security Policies has been updated to repair a bug where ip-forwarding was not enabled by default.
Published by jnoller over 5 years ago
Release 2019-03-01
This release is currently rolling out to all regions
Published by jnoller over 5 years ago
An issue where the jq utility was missing on the nodes has been resolved; all new nodes should now contain jq.
Published by jnoller over 5 years ago
At this time, all regions now have the CVE hotfix release. The simplest way to consume it is to perform a Kubernetes version upgrade, which will cordon, drain, and replace all nodes with a new base image that includes the patched version of Moby. In conjunction with this release, we have enabled new patch versions for Kubernetes 1.11 and 1.12. However, as there are no new patch versions available for Kubernetes versions 1.9 and 1.10, customers are recommended to move forward to a later minor release.
If that is not possible and you must remain on 1.9.x/1.10.x, you can perform the following steps to get the patched runtime:
Once this is complete, all nodes should reflect the new Moby runtime version.
We apologize for the confusion and are working to improve this process.
Note: All newly created 1.9, 1.10, 1.11 and 1.12 clusters will have the new Moby runtime and will not need to be upgraded to get the patch.
Published by jnoller over 5 years ago
Hotfix releases follow an accelerated rollout schedule - this release should be in all regions by 12am PST 2019-02-13
CVE-2019-5736 notes and mitigation
Microsoft has built a new version of the Moby container runtime that includes the OCI update to address this vulnerability. In order to consume the updated container runtime release, you will need to upgrade your Kubernetes cluster.
Any upgrade will suffice as it will ensure that all existing nodes are removed and replaced with new nodes that include the patched runtime. You can see the upgrade paths/versions available to you by running the following command with the Azure CLI:
az aks get-upgrades -n myClusterName -g myResourceGroup
To upgrade to a given version, run the following command:
az aks upgrade -n myClusterName -g myResourceGroup -k <new Kubernetes version>
You can also upgrade from the Azure portal.
When the upgrade is complete, you can verify that you are patched by running the following command:
kubectl get nodes -o wide
If all of the nodes list docker://3.0.4 in the Container Runtime column, you have successfully upgraded to the new release.
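If you prefer to script this verification, the runtime version is exposed at status.nodeInfo.containerRuntimeVersion in the Node object. A minimal sketch that checks the JSON produced by kubectl get nodes -o json:

```python
import json

def unpatched_nodes(nodes_json: str, patched: str = "docker://3.0.4") -> list:
    """Return the names of nodes whose container runtime is not yet
    the patched Moby release."""
    nodes = json.loads(nodes_json)["items"]
    return [
        n["metadata"]["name"]
        for n in nodes
        if n["status"]["nodeInfo"]["containerRuntimeVersion"] != patched
    ]
```

An empty list means every node is running the patched runtime.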
Published by jnoller over 5 years ago
This hotfix release fixes the root-cause of several bugs / regressions introduced in the 2019-01-31 release. This release does not add new features, functionality or other improvements.
Hotfix releases follow an accelerated rollout schedule - this release should be in all regions within 24-48 hours barring unforeseen issues
With kube-dns, there was an undocumented feature where it supported two config maps allowing users to perform DNS overrides/stub domains, and other customizations. With the conversion to CoreDNS, this functionality was lost - CoreDNS only supports a single config map. With the hotfix above, AKS now has a work around to meet the same level of customization.
You can see the pre-CoreDNS conversion customization instructions here
Here is the equivalent ConfigMap for CoreDNS:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  azurestack.server: |
    azurestack.local:53 {
        errors
        cache 30
        proxy . 172.16.0.4
    }
After creating the config map, you will need to delete the CoreDNS pods to force-load the new config.
kubectl -n kube-system delete po -l k8s-app=kube-dns
Published by jnoller over 5 years ago
Deployments are now set to reconcile, which means modifications to the deployments will be discarded.
calico is now accepted as a valid entry in addition to azure when creating clusters using Advanced Networking.
A known issue prevents exec into the cluster containers; this will be fixed in the next release.
For additional information or extended release notes, please see the CHANGELOG.