openebs

Most popular & widely deployed Open Source Container Native Storage platform for Stateful Persistent Applications on Kubernetes.

Apache-2.0 License | Stars: 8.6K | Committers: 221


openebs - v2.6.0

Published by kmova over 3 years ago

Release Summary

OpenEBS v2.6 contains some key enhancements and several fixes for the issues reported by the user community across all 9 types of OpenEBS volumes.

Here are some of the key highlights in this release.

New capabilities

  • OpenEBS is introducing a new CSI driver for dynamic provisioning of Jiva volumes. This driver is released as alpha and currently supports the following additional features compared to the non-CSI jiva volumes.

    • Jiva Replicas are backed by OpenEBS host path volumes
    • Auto-remount of volumes that are marked read-only by the iSCSI client due to intermittent network issues
    • Handling of the multi-attach error sometimes seen on on-premises clusters
    • A custom resource for Jiva volumes to help with easy access to the volume status

    For instructions on how to set up and use the Jiva CSI driver, please see https://github.com/openebs/jiva-operator. An illustrative StorageClass sketch follows.
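
For reference, a StorageClass for the new Jiva CSI driver could look roughly like the sketch below. This is illustrative only: the StorageClass name and the referenced JivaVolumePolicy name are placeholders, and the jiva-operator repository remains the authoritative source for the supported parameters while the driver is in alpha.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-jiva-csi-sc            # placeholder name
    provisioner: jiva.csi.openebs.io       # the new Jiva CSI driver
    allowVolumeExpansion: true
    parameters:
      cas-type: "jiva"
      policy: "example-jivavolumepolicy"   # placeholder JivaVolumePolicy name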

Key Improvements

  • Several bug fixes to the Mayastor volumes along with improvements to the API documentation. See Mayastor release notes.
  • Enhanced the NFS Dynamic Provisioner to support using the Cluster IP for the dynamically provisioned NFS server. It was observed that on some Kubernetes clusters the kubelet or the node trying to mount the NFS volume was unable to resolve the cluster-local service name.
  • ZFS Local PV added support for resizing raw block volumes (see the sketch after this list).
  • LVM Local PV is enhanced with additional features and some key bug fixes, including:
    • Raw block volume support
    • Snapshot support
    • Ability to schedule based on the capacity of the volumes provisioned
    • Ensure that LVM volume creation and deletion functions are idempotent
  • NDM partition discovery was updated to fetch device details from the parent block device of the partition.
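
As a rough illustration of the raw block volume resize mentioned above, a ZFS Local PV claim with volumeMode: Block can be expanded by editing its requested storage. The claim and StorageClass names below are assumptions; the StorageClass must allow volume expansion.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: zfs-block-claim            # hypothetical claim name
    spec:
      storageClassName: openebs-zfspv  # assumed ZFS Local PV StorageClass
      volumeMode: Block                # raw block volume
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi                # increase this value to trigger a resize

Editing spec.resources.requests.storage on the bound claim triggers the expansion, provided allowVolumeExpansion is set to true on the StorageClass.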

Key Bug Fixes

Backward Incompatibilities

  • Kubernetes 1.17 or higher is recommended, as this release contains the following updates that are not compatible with older Kubernetes releases.

    • The CSI components have been upgraded to:
      • k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
      • k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0
      • k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
      • k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
      • k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 (for cStor CSI volumes)
      • k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3 (for cStor CSI volumes)
  • If you are upgrading from an older version of cStor operators to this version, you will need to manually delete the cStor CSI driver object prior to upgrading: kubectl delete csidriver cstor.csi.openebs.io. For complete details on how to upgrade your cStor operators, see https://github.com/openebs/upgrade/blob/master/docs/upgrade.md#cspc-pools.

  • The CRD API version for the cStor custom resources has been updated to v1. If you are upgrading via the helm chart, make sure the new CRDs are applied: https://github.com/openebs/cstor-operators/tree/master/deploy/helm/charts/crds

  • The e2e pipelines include upgrade testing only from release 1.5 and higher to 2.6. If you are running a release older than 1.5, OpenEBS recommends upgrading to the latest version as soon as possible.

Other notable updates

  • OpenEBS has applied to become a CNCF incubation project and is currently undergoing the Storage SIG review of the project, addressing the review comments provided. One of the significant efforts in this direction is upstreaming the changes made in uZFS to OpenZFS.
  • Automation of further Day 2 operations, such as automatically detecting a node deletion from the cluster and re-balancing the volume replicas onto the next available node.
  • Migrating the CI pipelines from Travis to GitHub actions.
  • Several enhancements to the cStor Operators documentation with a lot of help from @survivant.
  • PSP support has been added to ZFS Local PV and cStor helm charts.
  • Improving the OpenEBS Rawfile Local PV in preparation for its beta release. The current release fixes some issues, adds support for setting resource limits on the sidecar, and includes a few other optimizations.
  • Sample Grafana dashboards for managing OpenEBS are being developed here: https://github.com/openebs/charts/tree/gh-pages/grafana-charts

Show your Support

Thank you @coboluxx (IDNT) for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @luizcarlosfaria, @Z0Marlin, @iyashu, @dyasny, @hanieh-m, @si458, @Ab-hishek

Getting Started

Prerequisite to install

  • Kubernetes 1.17 or newer is installed.
  • Make sure that you run the installation steps below with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure the iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS (an illustrative ConfigMap sketch follows this list).
  • NDM runs as a privileged pod since it needs access to the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
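
For illustration, the path filter in the NDM ConfigMap looks roughly like the abridged sketch below. The exclude list shown mirrors common defaults and is only an example; adjust the paths to your environment.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: openebs-ndm-config
      namespace: openebs
    data:
      node-disk-manager.config: |
        filterconfigs:
          - key: path-filter
            name: path filter
            state: true
            include: ""
            exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"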

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.6.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.6.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.6 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.6, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.6 and their associated Volumes, either one at a time or multiple volumes at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues in setting up or upgrading, you can contact the OpenEBS community on the #openebs channel of the Kubernetes Slack workspace (https://slack.k8s.io).

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim(SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involves changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
openebs - v2.5.0

Published by kmova almost 4 years ago

Release Summary

A warm and happy new year to all our users, contributors, and supporters. 🎉 🎉 🎉.

Keeping up with our tradition of monthly releases, OpenEBS v2.5 is now GA with some key enhancements and several fixes for the issues reported by the user community. Here are some of the key highlights in this release:

New capabilities

  • OpenEBS has support for multiple storage engines, and the feedback from users has shown that users tend to only use a few of these engines on any given cluster depending on the workload requirements. As a way to provide more flexibility for users, we are introducing separate helm charts per engine. With OpenEBS 2.5 the following helm charts are supported.

    • openebs - This is the most widely deployed chart, with support for Jiva, cStor, and Local PV hostpath and device volumes.
    • zfs-localpv - Helm chart for ZFS Local PV CSI driver.
    • cstor-operators - Helm chart for cStor CSPC Pools and CSI driver.
    • dynamic-localpv-provisioner - Helm chart for only installing Local PV hostpath and device provisioners.

    (Special shout out to @sonasingh46, @shubham14bajpai, @prateekpandey14, @xUnholy, @akhilerm for continued efforts in helping to build the above helm charts.)

  • OpenEBS is introducing a new CSI driver for dynamic provisioning of Kubernetes Local Volumes backed by LVM. This driver is released as alpha and currently supports the following features.

    • Create and Delete Persistent Volumes
    • Resize Persistent Volume

    For instructions on how to set up and use the LVM CSI driver, please see https://github.com/openebs/lvm-localpv. An illustrative StorageClass sketch follows.
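
A StorageClass for the LVM CSI driver could look roughly like the sketch below. The volume group name is an assumption and must match an existing LVM volume group on the nodes; the lvm-localpv repository is the authoritative source for the supported parameters while the driver is in alpha.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-lvmpv              # placeholder name
    provisioner: local.csi.openebs.io  # the new LVM CSI driver
    allowVolumeExpansion: true
    parameters:
      storage: "lvm"
      volgroup: "lvmvg"                # assumed LVM volume group on the nodes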

Key Improvements

  • Enhanced the ZFS Local PV scheduler to support spreading the volumes across the nodes based on the capacity of the volumes that are already provisioned. After upgrading to this release, capacity-based spreading will be used by default. In the previous releases, the volumes were spread based on the number of volumes provisioned per node. https://github.com/openebs/zfs-localpv/pull/266.

  • Added support to configure image pull secrets for the pods launched by the OpenEBS Local PV Provisioner and cStor (CSPC) operators. The image pull secrets (comma-separated strings) can be passed as an environment variable (OPENEBS_IO_IMAGE_PULL_SECRETS) to the deployments that launch these additional pods; those deployments need to be updated accordingly (see the sketch after this list for the environment variable format).

  • Added support to allow users to specify custom node labels for allowedTopologies under the cStor CSI StorageClass. https://github.com/openebs/cstor-csi/pull/135
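
For illustration, the environment variable can be added to the relevant deployment as in the fragment below. The container name and secret names are hypothetical, and the secrets must already exist in the cluster.

    spec:
      template:
        spec:
          containers:
            - name: openebs-localpv-provisioner    # hypothetical container name
              env:
                - name: OPENEBS_IO_IMAGE_PULL_SECRETS
                  value: "regcred-1,regcred-2"     # hypothetical comma-separated secret names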

Key Bug Fixes

  • Fixed an issue where a Jiva replica could fail to initialize if the node where the replica pod is scheduled shut down abruptly during replica initialization. https://github.com/openebs/jiva/pull/337.
  • Fixed an issue that caused Restore (with automatic Target IP configuration enabled) to fail if cStor volumes were created with Target Affinity to the application pod. https://github.com/openebs/velero-plugin/issues/141.
  • Fixed an issue where Jiva and cStor volumes would remain in the pending state on Kubernetes 1.20 and above clusters. Kubernetes 1.20 deprecated the SelfLink option, which caused this failure with the older Jiva and cStor provisioners. https://github.com/openebs/openebs/issues/3314
  • Fixed an issue with cStor CSI Volumes that caused Pods using cStor CSI Volumes on unmanaged Kubernetes clusters to remain in the pending state due to a multi-attach error. This was caused by cStor depending on the CSI VolumeAttachment object to determine where to attach the volume. On unmanaged Kubernetes clusters, the VolumeAttachment object was not cleared by Kubernetes from the failed node, and hence cStor would assume that the volume was still attached to the old node.

Backward Incompatibilities

  • Kubernetes 1.17 or higher is recommended, as this release contains the following updates that are not compatible with older Kubernetes releases.

    • The CRD version has been upgraded to v1. (Thanks to the efforts from @RealHarshThakur, @prateekpandey14, @akhilerm)
    • The CSI components have been upgraded to:
      • quay.io/k8scsi/csi-node-driver-registrar:v2.1.0
      • quay.io/k8scsi/csi-provisioner:v2.1.0
      • quay.io/k8scsi/snapshot-controller:v4.0.0
      • quay.io/k8scsi/csi-snapshotter:v4.0.0
      • quay.io/k8scsi/csi-resizer:v1.1.0
      • quay.io/k8scsi/csi-attacher:v3.1.0
      • k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3 (for cStor CSI volumes)
      • k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3 (for cStor CSI volumes)
  • If you are upgrading from an older version of cStor Operators to this version, you will need to manually delete the cStor CSI driver object prior to upgrading: kubectl delete csidriver cstor.csi.openebs.io. For complete details on how to upgrade your cStor Operators, see https://github.com/openebs/upgrade/blob/master/docs/upgrade.md#cspc-pools.

Other notable updates

  • OpenEBS has applied to become a CNCF incubation project and is currently undergoing the Storage SIG review of the project, addressing the review comments provided. One of the significant efforts in this direction is upstreaming the changes made in uZFS to OpenZFS.
  • Automation of further Day 2 operations, such as automatically detecting a node deletion from the cluster and re-balancing the volume replicas onto the next available node.
  • Migrating the CI pipelines from Travis to GitHub actions.
  • Several enhancements to the cStor Operators documentation with a lot of help from @survivant.

Show your Support

Thank you @laimison (Renthopper) for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @allenhaozi, @anandprabhakar0507, @Hoverbear, @kaushikp13, @praveengt

Getting Started

Prerequisite to install

  • Kubernetes 1.17 or newer is installed.
  • Make sure that you run the installation steps below with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure the iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs access to the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.5.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.5.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.5 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.5, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.5 and their associated Volumes, either one at a time or multiple volumes at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues in setting up or upgrading, you can contact the OpenEBS community on the #openebs channel of the Kubernetes Slack workspace (https://slack.k8s.io).

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim(SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involves changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
openebs - v2.4.0

Published by kmova almost 4 years ago

Release Summary

OpenEBS v2.4 is our last monthly release for the year with some key enhancements and several fixes for the issues reported by the user community.

Note: With Kubernetes 1.20, the SelfLink option used by the OpenEBS Jiva and cStor (non-CSI) provisioners has been deprecated. This causes the PVCs to remain in a pending state. The workaround and fix for this are being tracked under this issue. A patch release will be made available as soon as the fix has been verified on 1.20 platforms.

Here are some of the key highlights in this release:

New capabilities

  • ZFS Local PV has now been graduated to stable with all the supported features and upgrade tests automated via e2e testing. ZFS Local PV is best suited for distributed workloads that require resilient local volumes that can sustain local disk failures. You can read more about using the ZFS Local volumes at https://github.com/openebs/zfs-localpv and check out how ZFS Local PVs are used in production at Optoro.

  • OpenEBS is introducing a new NFS dynamic provisioner to allow the creation and deletion of NFS volumes using Kernel NFS backed by block storage. This provisioner is being actively developed and released as alpha. This new provisioner allows users to provision OpenEBS RWX volumes where each volume gets its own NFS server instance. In the previous releases, OpenEBS RWX volumes were supported via the Kubernetes NFS Ganesha and External Provisioner - where multiple RWX volumes share the same NFS Ganesha Server. You can read more about the new OpenEBS Dynamic Provisioner at https://github.com/openebs/dynamic-nfs-provisioner.
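
As an illustration of the new provisioner, an RWX StorageClass could look roughly like the sketch below. The backend StorageClass name is an assumption, and since the provisioner is alpha, the dynamic-nfs-provisioner repository remains the authoritative source for the configuration keys.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-rwx                        # placeholder name
      annotations:
        openebs.io/cas-type: nfsrwx
        cas.openebs.io/config: |
          - name: NFSServerType
            value: "kernel"
          - name: BackendStorageClass
            value: "openebs-hostpath"          # assumed backend StorageClass
    provisioner: openebs.io/nfsrwx
    reclaimPolicy: Delete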

Key Improvements

  • Added support for specifying a custom node affinity label for OpenEBS Local Hostpath volumes. By default, OpenEBS Local Hostpath volumes use kubernetes.io/hostname for setting the PV Node Affinity. Users can now specify a custom label to use for the PV Node Affinity. A custom node affinity can be specified in the Local PV storage class as follows:
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-hostpath
      annotations:
        openebs.io/cas-type: local
        cas.openebs.io/config: |
          - name: StorageType
            value: "hostpath"
          - name: NodeAffinityLabel
            value: "openebs.io/custom-node-id"
    provisioner: openebs.io/local
    volumeBindingMode: WaitForFirstConsumer
    reclaimPolicy: Delete
    
    This will help with use cases like:
    • Deployments where kubernetes.io/hostname is not unique across the cluster (Ref: https://github.com/openebs/openebs/issues/2875)
    • Deployments where an existing Kubernetes node running Local volumes is replaced with a new node and the storage attached to the old node is moved to the new node. Without this feature, the Pods using the older node would remain in the pending state.
  • Added a configuration option to the Jiva volume provisioner to skip adding replica node affinity. This will help in deployments where replica nodes are frequently replaced with new nodes causing the replica to remain in the pending state. The replica node affinity should be used in cases where replica nodes are not replaced with new nodes or the new node comes back with the same node-affinity label. (Ref: https://github.com/openebs/openebs/issues/3226). The node affinity for jiva volumes can be skipped by specifying the following ENV variable in the OpenEBS Provisioner Deployment.
         - name: OPENEBS_IO_JIVA_PATCH_NODE_AFFINITY
           value: "disabled"
    
  • Enhanced the OpenEBS Velero plugin (cStor) to automatically set the target IP once a cStor volume is restored from a backup. (Ref: https://github.com/openebs/velero-plugin/pull/131). This feature can be enabled by updating the VolumeSnapshotLocation with the configuration option autoSetTargetIP as follows:
    apiVersion: velero.io/v1
    kind: VolumeSnapshotLocation
    metadata:
      ...
    spec:
      config:
        ...
        ...
        autoSetTargetIP: "true"
    
    (Huge thanks to @zlymeda for working on this feature which involved co-ordinating this fix across multiple repositories).
  • Enhanced the OpenEBS Velero plugin to automatically create the target namespace during restore if the target namespace doesn't exist. (Ref: https://github.com/openebs/velero-plugin/issues/137).
  • Enhanced the OpenEBS helm chart to support Image pull secrets. https://github.com/openebs/charts/pull/174
  • Enhanced the OpenEBS helm chart to allow specifying resource limits on OpenEBS control plane pods. https://github.com/openebs/charts/issues/151
  • Enhanced the NDM filters to support discovering LVM devices both with /dev/dm-X and /dev/mapper/x patterns. (Ref: https://github.com/openebs/openebs/issues/3310).

Key Bug Fixes

Backward Incompatibilities

  • Velero has updated the configuration for specifying a different node selector during restore. The configuration changes from velero.io/change-pvc-node to velero.io/change-pvc-node-selector. ( Ref: https://github.com/openebs/velero-plugin/pull/139)

Other notable updates

  • OpenEBS ZFS Local PV CI has been updated to include CSI Sanity tests, and some minor issues were fixed to conform with the CSI test suite. (Ref: https://github.com/openebs/zfs-localpv/pull/232).
  • OpenEBS has applied to become a CNCF incubation project and is currently undergoing the Storage SIG review of the project, addressing the review comments provided.
  • Significant work is underway to make it easier to install only the components that users finally decide to use for their workloads. These features will allow users to run different flavors of OpenEBS in Kubernetes clusters optimized for the workloads they intend to run. This can be achieved in the current version using a customized helm values file or a modified Kubernetes manifest file. We have continued to make significant progress with the help of the community towards supporting individual helm charts for each of the storage engines. The locations of the various helm charts are as follows:
    • Dynamic Local PV ( host path and device)
    • Dynamic Local PV CSI ( ZFS )
    • Dynamic Local PV CSI ( Rawfile )
    • cStor
    • Mayastor
  • Automation of further Day 2 operations, such as automatically detecting a node deletion from the cluster and re-balancing the volume replicas onto the next available node.
  • Keeping the OpenEBS generated Kubernetes custom resources in sync with the upstream Kubernetes versions, like moving CRDs from v1beta1 to v1

Show your Support

Thank you @FeynmanZhou (KubeSphere) for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @alexppg, @arne-rusek, @Atharex, @bobek, @Mosibi, @mpartel, @nareshdesh, @rahulkrishnanfs, @ssytnikov18, @survivant

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed.
  • Kubernetes 1.17+ is recommended for using the cStor CSI drivers.
  • Make sure that you run the installation steps below with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure the iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs access to the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.4.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.4.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.4 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.4, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.4 and their associated Volumes, either one at a time or multiple volumes at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues in setting up or upgrading, you can contact the OpenEBS community on the #openebs channel of the Kubernetes Slack workspace (https://slack.k8s.io).

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim(SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
openebs - v2.3.0

Published by kmova almost 4 years ago

Release Summary

OpenEBS v2.3 is our Hacktoberfest release, with 40+ new contributors added to the project, and ships with ARM64 support for cStor, Jiva, and Dynamic Local PV. Mayastor is seeing higher adoption rates, resulting in further fixes and enhancements.

Here are some of the key highlights in this release:

New capabilities

  • ARM64 support (declared beta) for OpenEBS Data Engines - cStor, Jiva, Local PV (hostpath and device), ZFS Local PV.

    • A significant improvement in this release is the support for multi-arch container images for amd64 and arm64. The multi-arch images are available on the docker hub and will enable the users to run OpenEBS in the Kubernetes cluster that has a mix of arm64 and amd64 nodes.
    • In addition to ARM64 support, the Local PV (hostpath and device) multi-arch container images include support for arm32 and Power systems.
    • The arch-specific container images like <image name>-amd64:<image-tag>, are also made available from docker hub and quay to support backward compatibility to users running OpenEBS deployments with arch-specific images.
    • To upgrade your volumes to multi-arch containers, make sure you specify the to-image as the multi-arch image available from Docker Hub, or your own copy of it.
    • A special shout-out and many thanks to @xUnholy, @shubham14bajpai, @akhilerm, and @prateekpandey14 for adding the multi-arch support to 27 OpenEBS container images generated from 14+ GitHub repositories, and to @wangzihao3, @radicand, @sgielen, @Pensu, and many more users from our slack community for helping with testing, feedback, and fixes by using the early versions of the ARM64 builds in dev and production.
  • Enhanced the cStor Velero Plugin to help with automating the restore from an incremental backup. Restoring an incremental backup involves restoring the full backup (also called the base backup) and the subsequent incremental backups up to the desired incremental backup. With this release, the user can set a new parameter (restoreAllIncrementalSnapshots) in the VolumeSnapshotLocation to automate the restore of the required base and incremental backups. For detailed instructions to try this feature, please refer to this doc.

  • OpenEBS Mayastor is seeing tremendous growth in terms of users trying it out and providing feedback. A lot of work in this release has gone into fixing issues, enhancing the e2e tests and control plane, and adding initial support for snapshots. For further details on enhancements and bug fixes in Mayastor, please refer to Mayastor.

Key Improvements

  • Enhanced Node Disk Manager (NDM) to discover and create Block Device custom resources for device mapper (dm) devices such as loopback devices, LUKS encrypted devices, and LVM devices. Prior to this release, users who wanted to use dm devices had to manually create the corresponding Block Device CRs.
  • Enhanced the NDM block device tagging feature to reserve a block device from being assigned to the Local PV (device) or cStor data engines. The block device can be reserved by specifying an empty value for the block device tag (see the sketch after this list).
  • Added support to install ZFS Local PV using Kustomize. Also updated the default upgrade strategy for the ZFS CSI driver to run in parallel instead of rolling upgrades.
  • Several enhancements and fixes from the community to the OpenEBS documentation, build, and release scripts as part of the Hacktoberfest participation.
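
For reference, the reservation described above would show up as a label on the BlockDevice resource roughly as sketched below. The device name is hypothetical, and in practice the label is applied to the NDM-created resource (for example with kubectl label) rather than by creating the resource manually.

    apiVersion: openebs.io/v1alpha1
    kind: BlockDevice
    metadata:
      name: blockdevice-0123456789abcdef    # hypothetical device name
      namespace: openebs
      labels:
        openebs.io/block-device-tag: ""     # empty value reserves the device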

Key Bug Fixes

  • Fixed an issue with the upgrade of cStor and Jiva volumes in cases where volumes were provisioned without enabling the monitoring sidecar.
  • Fixed an issue with the upgrade that would always set the image registry as quay.io/openebs when the upgrade job doesn't specify the registry location. The upgrade job will now fall back to the registry that is already configured on the existing pods.

Other notable updates

  • OpenEBS has applied to become a CNCF incubation project and is currently undergoing the Storage SIG review of the project, addressing the review comments provided.
  • Significant work is underway to make it easier to install only the components that the users finally decide to use for their workloads. These features will allow users to run different flavors of OpenEBS in K8s clusters optimized for the workloads they intend to run in the cluster. This can be achieved in the current version using a customized helm values file or using a modified Kubernetes manifest file.
  • Repositories are being re-factored to help simplify the contributor onboarding process. For instance, with this release, the dynamic-localpv-provisioner has been moved from openebs/maya to its own repository as openebs/dynamic-localpv-provisioner. This refactoring of the source code will also help with the simplified build and faster release process per data engine.
  • Automation of further Day 2 operations, such as setting the cStor target IP after a cStor volume has been restored from a backup (thanks to @zlymeda), automatically detecting a node deletion from the cluster, and re-balancing the volume replicas onto the next available node.
  • Keeping the OpenEBS generated Kubernetes custom resources in sync with the upstream Kubernetes versions, like moving CRDs from v1beta1 to v1

Show your Support

Thank you @shock0572 (ExactLab), @yydzhou (ByteDance), @kuja53, and @darioneto for becoming public references and supporters of OpenEBS by sharing your use cases on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @filip-lebiecki, @hack3r-0m, @mtzaurus, @niladrih, @Akshay-Nagle, @Aman1440, @AshishMhrzn10, @Hard-Coder05, @ItsJulian, @KaranSinghBisht, @Naveenkhasyap, @Nelias, @Shivam7-1, @ShyamGit01, @Sumindar, @Taranzz25, @archit041198, @aryanrawlani28, @codegagan, @de-sh, @harikrishnajiju, @heygroot, @hnifmaghfur, @iTechsTR, @iamrajiv, @infiniteoverflow, @invidian, @kichloo, @lambda2, @lucasqueiroz, @prakhargurunani, @prakharshreyash15, @rafael-rosseto, @sabbatum, @salonigoyal2309, @sparkingdark, @sudhinm, @trishitapingolia, @vijay5158, @vmr1532.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed.
  • Kubernetes 1.17+ is recommended for using the cStor CSI drivers.
  • Make sure that you run the installation steps below with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure the iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs access to the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.3.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.3.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.3 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.3, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.3 and their associated Volumes, either one at a time or multiple volumes at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues in setting up or upgrading, you can contact the OpenEBS community on the #openebs channel of the Kubernetes Slack workspace (https://slack.k8s.io).

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim(SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
openebs - v2.2.0

Published by kmova about 4 years ago

Release Summary

OpenEBS v2.2 comes with a critical fix to NDM and several enhancements to cStor, ZFS Local PV and Mayastor. Here are some of the key highlights in this release:

New capabilities

  • OpenEBS ZFS Local PV adds support for Incremental Backup and Restore by enhancing the OpenEBS Velero Plugin. For detailed instructions to try this feature, please refer to this doc.

  • OpenEBS Mayastor instances now expose a gRPC API which is used to enumerate block disk devices attached to the host node, as an aid to the identification of suitable candidates for inclusion within storage Pools during configuration. This functionality is also accessible within the mayastor-client diagnostic utility. For further details on enhancements and bug fixes in Mayastor, please refer to Mayastor release notes.

Key Improvements

Key Bug Fixes

  • Fixes an issue where NDM could cause data loss by creating a partition table on an uninitialized iSCSI volume. This could happen after a node reboot due to a race between the NDM pod initializing and the iSCSI volume initializing, if the iSCSI volume was not fully initialized when NDM probed for device details. This issue was observed with NDM 0.8.0 released with OpenEBS 2.0 and has been fixed in OpenEBS 2.1.1 and OpenEBS 2.2.0 (latest).

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @didier-durand, @zlymeda, @avats-dev, and many more contributing via Hacktoberfest.

Show your Support

Thank you @danielsand for becoming a public reference and supporter of OpenEBS by sharing their use case on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed.
  • Kubernetes 1.17+ is recommended for using the cStor CSI drivers.
  • Make sure that you run the installation steps below with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure the iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs access to the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.2.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.2.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.2 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.2, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.2 and their associated Volumes, either one at a time or multiple volumes at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues in setting up or upgrading, you can contact the OpenEBS community on the #openebs channel of the Kubernetes Slack workspace (https://slack.k8s.io).

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim(SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
openebs - v2.1.0

Published by kmova about 4 years ago

Release Summary

OpenEBS v2.1 is a developer release focused on code, tests and build refactoring along with some critical bug fixes and user enhancements. This release also introduces support for remote Backup and Restore of ZFS Local PV using OpenEBS Velero plugin.

Here are some of the key highlights in this release:

New capabilities:

  • OpenEBS ZFS Local PV adds support for Backup and Restore by enhancing the OpenEBS Velero Plugin. For detailed instructions to try this feature, please refer to this doc.
  • OpenEBS Mayastor continues its momentum by enhancing support for Rebuild and other fixes. For detailed instructions on how to get started with Mayastor please refer to this Quickstart guide.

Key Improvements:

  • Enhanced the Velero Plugin so that a Backup of one volume and a Restore of another volume can run simultaneously.
  • Added a validation to restrict OpenEBS Namespace deletion if there are pools or volumes configured. The validation is added via Kubernetes admission webhook.
  • Added support to restrict creation of cStor Pools (via CSPC) on Block Devices that are tagged (or reserved).
  • Enhanced NDM to automatically create a block device tag on the discovered device if the device matches a certain path name pattern.

Key Bug Fixes:

  • Fixes an issue where local backup and restore of cStor volumes provisioned via CSI were failing.
  • Fixes an issue where a cStor CSI Volume remount would fail intermittently when the application pod is restarted or after recovering from a network loss between the application pod and the target node.
  • Fixes an issue where BDC cleanup by NDM would cause a panic, if the bound BD was manually deleted.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @rohansadale, @AJEETRAI707, @smijolovic, @jlcox1970

Thanks, also to @sonasingh46 for being the 2.1 release coordinator.

Show your Support

Thank you @SeMeKh (Hamravesh) and @tobg (TOBG Services Ltd) for becoming public references and supporters of OpenEBS by sharing their use cases on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed.
  • Kubernetes 1.17+ is recommended for using the cStor CSI drivers.
  • Make sure that you run the installation steps below with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure the iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs access to the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.1.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.1.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.1 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.1, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.1 and their associated Volumes, either one at a time or multiple volumes at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues in setting up or upgrading, you can contact the OpenEBS community on the #openebs channel of the Kubernetes Slack workspace (https://slack.k8s.io).

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim(SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
openebs - v2.0.0

Published by kmova about 4 years ago

Release Summary

OpenEBS has reached a significant milestone with v2.0 with support for cStor CSI drivers graduating to beta, improved NDM capabilities to manage virtual and partitioned block devices, and much more.

OpenEBS v2.0 includes the following Storage Engines that are currently deployed in production by various organizations:

  • Jiva
  • cStor (CSI Driver available from 2.0 onwards)
  • ZFS Local PV
  • Dynamic Local PV hostpath
  • Dynamic Local PV Block

OpenEBS v2.0 also includes the following Storage Engines, going through alpha testing at a few organizations. Please get in touch with us, if you would like to participate in the alpha testing of these engines.

  • Mayastor
  • Dynamic Local PV - Rawfile

For a change summary since v1.12, please refer to Release 2.0 Change Summary.


Here are some of the key highlights in this release:

New capabilities:

  • OpenEBS cStor provisioning with the new schema and CSI drivers has been declared beta. For detailed instructions on how to get started with the new cStor Operators, please refer to the Quickstart guide. The new version of the cStor schema addresses user feedback on ease of use for cStor provisioning and makes it easier to perform Day 2 operations on cStor Pools using GitOps (an illustrative CSPC sketch follows this list). Note that existing StoragePoolClaim (SPC) pools will continue to function as-is, and support is available to migrate from the SPC schema to the new schema. In addition to supporting all the features of SPC-based cStor pools, the CSPC (cStorPoolCluster) enables the following:
    • cStor Pool expansion by adding block devices to CSPC YAML
    • Replace a block device used within cStor pool via editing the CSPC YAML
    • Scale-up or down the cStor volume replicas via editing cStor Volume Config YAML
    • Expand Volume by updating the PVC YAML
  • Significant improvements to NDM in supporting (and better handling) of partitions and virtual block devices across reboots.
  • OpenEBS Mayastor continues its momentum by adding support for Rebuild, NVMe-oF Support, enhanced supportability, and several other fixes. For detailed instructions on how to get started with Mayastor please refer to this Quickstart guide.
  • Continuing the focus on additional integration and e2e tests for all engines, more documentation.
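
To give a feel for the new schema, a minimal CSPC could look roughly like the sketch below. The node name and block device name are hypothetical, and the cstor-operators Quickstart guide remains the authoritative reference for the full schema.

    apiVersion: cstor.openebs.io/v1
    kind: CStorPoolCluster
    metadata:
      name: cstor-storage                              # placeholder name
      namespace: openebs
    spec:
      pools:
        - nodeSelector:
            kubernetes.io/hostname: "worker-node-1"    # hypothetical node name
          dataRaidGroups:
            - blockDevices:
                - blockDeviceName: "blockdevice-abc"   # hypothetical block device
          poolConfig:
            dataRaidGroupType: "stripe"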

Key Improvements:

  • Enhanced the Jiva target controller to track the internal snapshots and re-claim the space.
  • Support for enabling/disabling the leader election mechanism, which involves interacting with the kube-apiserver. In deployments where provisioners are configured with a single replica, leader election can be disabled. The default is enabled. The configuration is controlled via the environment variable "LEADER_ELECTION" in the operator YAML or via the helm value (enableLeaderElection); see the sketch after this list.
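
A sketch of the environment variable setting, assuming the container name shown and a boolean-style value; check the operator YAML for the exact values accepted.

    spec:
      template:
        spec:
          containers:
            - name: openebs-provisioner    # hypothetical container name
              env:
                - name: LEADER_ELECTION
                  value: "false"           # assumed value to disable leader election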

Key Bug Fixes:

  • Fixes an issue where NDM would fail to wipe the filesystem of the released sparse block device.
  • Fixes an issue with the mounting of XFS cloned volume.
  • Fixes an issue where a PV with fsType: ZFS would fail if the capacity was not a multiple of the record size specified in the StorageClass.

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @silentred, @whoan, @sonicaj, @dhoard, @akin-ozer, @alexppg, @FestivalBobcats

Thanks, also to @akhilerm for being the 2.0 release coordinator.

Show your Support

Thank you @nd2014-public (D-Rating), @baskinsy (Stratus5), and @evertmulder (KPN) for becoming public references and supporters of OpenEBS by sharing their use cases on ADOPTERS.md

A very special thanks to @yhrenlee for sharing the story in DoK Community, about how OpenEBS helped Arista with migrating their services to Kubernetes.

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed.
  • Kubernetes 1.17+ is recommended for using the cStor CSI drivers.
  • Make sure that you run the installation steps below with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure the iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs access to the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/2.0.0/openebs-operator.yaml

Install using Helm stable charts

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.0.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 2.0 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 2.0, either one at a time or multiple volumes.
  • Upgrade cStor Pools to 2.0 and their associated Volumes, either one at a time or multiple volumes at once.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues in setting up or upgrading, you can contact the OpenEBS community on the #openebs channel of the Kubernetes Slack workspace (https://slack.k8s.io).

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim(SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
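One way to put a ceiling on consumption is a namespace-scoped Kubernetes ResourceQuota keyed to the StorageClass; a minimal sketch, assuming a hypothetical namespace app-ns and a cStor StorageClass named cstor-csi (see the linked issue for the complete guidance):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: cstor-storage-quota
  namespace: app-ns
spec:
  hard:
    # caps the total capacity that PVCs of this StorageClass can request in the namespace
    cstor-csi.storageclass.storage.k8s.io/requests.storage: 200Gi
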
openebs - v1.12.0

Published by kmova over 4 years ago

Release Summary

The theme for OpenEBS v1.12 continues to be polishing the OpenEBS storage engines Mayastor and the cStor CSI Driver and preparing them for Beta. A significant part of the contributors' effort went into evaluating additional CI/CD and testing frameworks.

For a detailed change summary, please refer to Release 1.12 Change Summary.

Before getting into the release summary,


Important Announcement: OpenEBS Community Slack channels have migrated to the Kubernetes Slack Workspace as of Jun 22nd

The OpenEBS channels on Kubernetes Slack are:

More details about this migration can be found here.


Here are some of the key highlights in this release:

Breaking Change/Deprecation

  • Important Note for OpenEBS Helm Users: The repository https://github.com/helm/charts is being deprecated. All the charts are now being moved to Helm Hub or to project-specific repositories. OpenEBS charts have migrated to openebs/charts repository. Starting with 1.12.0, openebs can be installed via the following helm commands:
    helm repo add openebs https://openebs.github.io/charts
    helm repo update
    helm install --namespace openebs --name openebs openebs/openebs
    

Key Improvements:

  • [Build] Refactor and add multi-arch image generation support on the NDM repo. node-disk-manager#428 (@xUnholy)
  • [Install] Support specifying the webhook validation policy to fail/ignore via ENV (ADMISSION_WEBHOOK_FAILURE_POLICY) on admission server deployment. maya#1726 (@prateekpandey14)
  • [NDM] Enhanced NDM Operator to attach events to BDC CR while processing BDC operations. node-disk-manager#425 (@rahulchheda)
  • [ZFS Local PV] Add support for btrfs as an additional FS type (see the sketch after this list). zfs-localpv#170 (@pawanpraka1, @mikroskeem)
  • [ZFS Local PV] Add support for a shared mount on ZFS Volume to support RWX use cases. zfs-localpv#164 (@pawanpraka1, @stevefan1999-personal)
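As an illustration, requesting btrfs is a StorageClass parameter on the ZFS Local PV driver; a minimal sketch, assuming a ZFS pool named zfspv-pool already exists on the nodes (the pool and class names are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-btrfs
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"   # existing ZFS pool on the node
  fstype: "btrfs"          # newly supported filesystem type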

Key Bug Fixes:

  • [Provisioners] Fixes a panic in maya-apiserver caused by PVC names longer than 63 characters. maya#1720 (@kmova @stuartpb)
  • [Upgrade] Fixes an issue where the upgrade was failing some pre-flight checks when the maya-apiserver was deployed in HA mode. maya#1720 (@shubham14bajpai @utkudarilmaz)
  • [Upgrade] Fixes an issue where the upgrade was failing if the deployment rollout was taking longer than 5 min. maya#1719 (@shubham14bajpai @sgielen)

Alpha and Beta Engine updates

  • OpenEBS Mayastor continues its momentum by adding support for Rebuild, NVMe-oF Support, enhanced supportability, and several other fixes. For detailed instructions on how to get started with Mayastor please refer to this Quickstart guide.
  • OpenEBS ZFS Local PV has been declared as beta. For detailed instructions on how to get started with ZFS Local PV please refer to the Quick start guide.
  • OpenEBS cStor CSI support is marked as feature-complete and further releases will focus on additional integration and e2e tests. For detailed instructions on getting started with CSI driver for cStor, please refer to the Quick start guide

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @mikroskeem, @stuartpb, @utkudarilmaz

Thanks, also to @mittachaitu for being the 1.12 release coordinator.

Announcing new Maintainers/Reviewers

With gratitude and joy, we welcome the following members to the OpenEBS organization as reviewers for their continued contributions and commitment to help the OpenEBS project and community.

  • Mehran Kholdi (@SeMeKh), Hamravesh (#control-plane-maintainers)
  • Michael Fornaro (@xUnholy), Independent-Raspbernetes (#control-plane-maintainers)
  • Peeyush Gupta (@Pensu), DigitalOcean (#control-plane-maintainers)

Check out our full list of maintainers and reviewers here. Our Governance policy is here.

Show your Support

Thank you @dstathos and @mikroskeem for becoming public references and supporters of OpenEBS by sharing your use cases on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer Configuring OpenEBS with selinux=on

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/1.12.0/openebs-operator.yaml

Install using Helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.12.1

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 1.12 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.12, either one at a time or multiple volumes.
  • Upgrade CStor Pools to 1.12 and its associated Volumes, either one at a time or multiple volumes.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you have issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacities is not recommended.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned, or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
  • The new version of cStor Schema is being worked on to address the user feedback in terms of ease of use for cStor provisioning as well as to make it easier to perform Day 2 Operations on cStor Pools using GitOps. Note that existing StoragePoolClaim pools will continue to function as-is. Along with stabilizing the new schema, we have also started working on migration features - which will easily migrate the clusters to the new schema in the upcoming releases. Once the proposed changes are complete, seamless migration from older CRs to new will be supported. To track the progress of the proposed changes, please refer to this design proposal. Note: We recommend users to try out the new schema on greenfield clusters to provide feedback. Get started with these instructions.
openebs - v1.11.0

Published by kmova over 4 years ago

Release Summary

The theme for OpenEBS v1.11 has been about polishing OpenEBS Storage engines Mayastor, ZFS Local PV, cStor CSI Driver, and preparing them for Beta. This release also includes several supportability enhancements and fixes for the existing engines.

For a detailed change summary, please refer to Release 1.11 Change Summary.

Before getting into the release details,

Important Announcement: OpenEBS Community Slack channels will be migrated to Kubernetes Slack Workspace by Jun 22nd

In the interest of neutral governance, the OpenEBS community support via slack is being migrated from openebs-community slack (a free version of slack with limited support for message retention) to the following OpenEBS channels on Kubernetes Slack owned by CNCF.

The #openebs-users channel will be marked as read-only by June 22nd.

More details about this migration can be found here.

Given that openebs-community slack has been a neutral home for many vendors that are offering free and commercial support/products on top of OpenEBS, the workspace will continue to live on. These vendors are requested to create their own public channels and the information about those channels can be communicated to users via the OpenEBS website by raising an issue/pr to https://github.com/openebs/website.


Here are some of the key highlights in this release:

New capabilities:

  • OpenEBS Mayastor continues its momentum by adding support for Rebuild, NVMe-oF Support, enhanced supportability and several other fixes. For detailed instructions on how to get started with Mayastor please refer to this Quickstart guide.
  • OpenEBS ZFS Local PV has been declared as beta. For detailed instructions on how to get started with ZFS Local PV please refer to the Quick start guide.
  • OpenEBS cStor CSI support is marked as feature-complete and further releases will focus on additional integration and e2e tests.

Key Improvements:

  • Enhanced helm charts to make NDM filterconfigs.state configurable. charts#107 (@fukuta-tatsuya-intec)
  • Added configuration to exclude rbd devices from being used for creating Block Devices charts#111 (@GTB3NW)
  • Added support to display FSType information in Block Devices node-disk-manager#438 (@harshthakur9030)
  • Add support to mount ZFS datasets using legacy mount property to allow for multiple mounts on a single node. zfs-localpv#151 (@pawanpraka1)
  • Add additional automation tests for validating ZFS Local PV and cStor Backup/Restore. (@w3aman @shashank855)

Key Bug Fixes:

  • Fixes an issue where volumes meant to be filesystem datasets got created as zvols due to a case mismatch in a StorageClass parameter. The fix makes the StorageClass parameters case-insensitive. zfs-localpv#144 (@cruwe)
  • Fixes an issue where the read-only option was not being set on ZFS volumes. zfs-localpv#137 (@pawanpraka1)
  • Fixes an issue where incorrect pool name or other parameters in Storage Class would result in stale ZFS Volume CRs being created. zfs-localpv#121 zfs-localpv#145 (@pawanpraka1)
  • Fixes an issue where the user-configured MAX_CHAIN_LENGTH environment variable was not being read by Jiva. jiva#309 (@payes)
  • Fixes an issue where cStor Pool was being deleted forcefully before the replicas on cStor Pool were deleted. This can cause data loss in situations where SPCs are incorrectly edited by the user, and a cStor Pool deletion is attempted. maya#1710 (@mittachaitu)
  • Fixes an issue where a failure to delete the cStor Pool on the first attempt will leave an orphaned cStor custom resource (CSP) in the cluster. maya#1595 (@mittachaitu)

Shout outs!

MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.

A very special thanks to our first-time contributors to code, tests, and docs: @cruwe, @sgielen, @ShubhamB99, @GTB3NW, @Icedroid, @fukuta-tatsuya-intec, @mtmn, @nrusinko, @radicand, @zadunn, @xUnholy

We also are delighted to have @harshthakur9030, @semekh, @vaniisgh contributing to OpenEBS via the CNCF Community Bridge Program.

Thanks, also to @shubham14bajpai for being the 1.11 release coordinator.

Show your Support

Thank you @zadunn (Optoro), @meyskens, @stevefan1999-personal, @darias1986 (DISID) for becoming public references and supporters of OpenEBS by sharing your use cases on ADOPTERS.md

Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer Configuring OpenEBS with selinux=on

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/1.11.0/openebs-operator.yaml

Install using Helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.11.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 1.11 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.11, either one at a time or multiple volumes.
  • Upgrade CStor Pools to 1.11 and its associated Volumes, either one at a time or multiple volumes.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you have issues setting up or upgrading, you can contact:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacities is not recommended.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned, or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community (#openebs) at https://slack.k8s.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
  • The new version of cStor Schema is being worked on to address the user feedback in terms of ease of use for cStor provisioning as well as to make it easier to perform Day 2 Operations on cStor Pools using GitOps. Note that existing StoragePoolClaim pools will continue to function as-is. Along with stabilizing the new schema, we have also started working on migration features - which will easily migrate the clusters to the new schema in the upcoming releases. Once the proposed changes are complete, seamless migration from older CRs to new will be supported. To track the progress of the proposed changes, please refer to this design proposal. Note: We recommend users to try out the new schema on greenfield clusters to provide feedback. Get started with these instructions.
openebs - v1.10.0

Published by kmova over 4 years ago

Release Summary

The theme for OpenEBS v1.10 has been about polishing the new OpenEBS Storage engines Mayastor, ZFS Local PV, and preparing them for Beta. This release also includes several supportability enhancements and fixes for the existing engines.

For a detailed change summary, please refer to Release 1.10 Change Summary.

Here are some of the key highlights in this release:

New capabilities:

  • The first release of OpenEBS Mayastor, developed using an NVMe-based architecture and targeted at the performance requirements of IO-intensive workloads, is ready for alpha testing. For detailed instructions on how to get started with Mayastor please refer to this Quickstart guide.
  • Enhancements to OpenEBS ZFS Local PV that includes resolving issues found during scale testing, fully functional CSI driver, and sample Grafana Dashboard for monitoring metrics on ZFS Volumes and Pools. For detailed instructions on how to get started with ZFS Local PV please refer to the Quick start guide.

Key Improvements:

Key Bug Fixes:

Shout outs!

MANY THANKS to everyone helping keep the OpenEBS Community Slack going, and a very special thanks to the following people who joined us on GitHub for this release:

  • As first-time contributors: @AntonioCarlini, @blaisedias, @chriswldenyer, @cjones1024, @filippobosi, @gahag, @GlennBullingham, @jamie-0, @jonathan-teh, @paulyoong, @tiagolobocastro, @tjoshum, @yannis218
  • As users finding issues and testing the fixes: @chornlgscout, @cortopy, @freym, @Icedroid, @ppodolsky, @spencergilbert, @surajssd @erbiao3k, @sgielen, @vishal-biyani, @willzhang, @xUnholy
  • As contributors to code, tests, and docs: @akhilerm, @gila, @gprasath, @IsAmrish, @jkryl, @kmova, @ksatchit, @mittachaitu, @muratkars, @mynktl, @obeyler, @nsathyaseelan, @pawanpraka1, @payes, @prateekpandey14, @ranjithwingrider, @slalwani97, @somesh2905, @sonasingh46, @utkarshmani1997, @vishnuitta, @w3aman

Show your Support

You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.

Thank you @aretakisv and @alexjmbarton for adding your OpenEBS usage stories to ADOPTERS.md

Getting Started

Prerequisite to install

  • Kubernetes 1.14 or newer is installed
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer Configuring OpenEBS with selinux=on

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.10.0.yaml

Install using Helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.10.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 1.10 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.10, either one at a time or multiple volumes.
  • Upgrade CStor Pools to 1.10 and its associated Volumes, either one at a time or multiple volumes.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you have issues setting up or upgrading, you can contact us via:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacities is not recommended.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned, or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community at https://slack.openebs.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
  • The new version of cStor Schema is being worked on to address the user feedback in terms of ease of use for cStor provisioning as well as to make it easier to perform Day 2 Operations on cStor Pools using GitOps. Note that existing StoragePoolClaim pools will continue to function as-is. Along with stabilizing the new schema, we have also started working on migration features - which will easily migrate the clusters to the new schema in the upcoming releases. Once the proposed changes are complete, seamless migration from older CRs to new will be supported. To track the progress of the proposed changes, please refer to this design proposal. Note: We recommend users to try out the new schema on greenfield clusters to provide feedback. Get started with these instructions: https://blog.mayadata.io/openebs/cstor-pool-operations-via-cspc-in-openebs
openebs - 1.9.0

Published by kmova over 4 years ago

Change Summary

OpenEBS v1.9 includes several enhancements and bug fixes to the underlying cStor, Jiva and Local PV data engines. The improvements to the cStor backup and restore can help with a significant reduction in storage capacity consumption and also in enabling faster CI/CD pipelines. Some long-requested features like support for bulk volume upgrades and ARM containers for all OpenEBS containers are also included in this release.

For a detailed change summary, please refer to Release 1.9 Change Summary.

Special thanks to our first-time contributor @stevefan1999-personal in this release and also many thanks to @jemershaw, @lfillmore, @pkavajin, @SVronskiy, @zzzuzik, @davidkarlsen for reporting issues and testing the fixes.

Here are some of the key highlights in this release:

Key Improvements:

  • Added support for cloning a cStor volume into a different namespace. (#2844) (@SVronskiy, @mynktl)
  • Added support to upgrade multiple volumes using a single command. (#2701) (@zzzuzik, @shubham14bajpai)
  • Added support for reserving block devices that can be used by Local PV (#2916) (@briankmatheson, @akhilerm, @kmova)
  • Added support to migrate Jiva-related Kubernetes objects from the PVC namespace to the OpenEBS namespace. After migration, the administrator can tighten RBAC policies for the OpenEBS Service Account. (#2968)(@shubham14bajpai)
  • Added support for taking a faulty Jiva replica out of service without restarting the other healthy replicas. (#2967) (@shubham14bajpai)

Key Bug Fixes:

  • Fixes an issue where scheduled remote backups consumed extra capacity on the local storage pools. Capacity on the cStor Pools can now be reclaimed by deleting snapshots that have been successfully backed up to a remote location. (#2957) (@mynktl)
  • Fixes an issue where some of the OpenEBS containers were using an older, unsupported Alpine image as the base Docker image. (#2991) (@mynktl, @prateekpandey14, @shubham14bajpai)
  • Fixes an issue where the Backup of cStor volumes with 50G or higher capacity to MinIO or AWS can fail. (#2973) (@mynktl)
  • Fixes an issue where Jiva replica can fail to initialize due to partially written metadata file, caused by node/pod restart. (#2950) (@payes, @utkarshmani1997)
  • Fixes an issue where labels added to BlockDevices were not retained when the NDM pod was restarted. (openebs/node-disk-manager#394) (@shovanmaity)
  • Fixes an issue where Jiva cleanup jobs were not scheduled due to nodes being tainted. (#2912) (@pkavajin, @haijeploeg, @shubham14bajpai)
  • Fixes a panic in Local PV provisioner, while processing a PV delete request with a hostname that is no longer present in the cluster. (#2993) (@lfillmore, @kmova)
  • Fixes an issue where volumeBindingMode: WaitForFirstConsumer would make provisioning fail for cStor (see the sketch below). (#2862)(@davidkarlsen, @mittachaitu)
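For context, delayed binding is requested on the StorageClass; a minimal sketch for the non-CSI cStor provisioner, assuming a pre-created StoragePoolClaim named cstor-disk-pool (the pool and class names are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-wffc
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
provisioner: openebs.io/provisioner-iscsi
volumeBindingMode: WaitForFirstConsumer   # PV is provisioned only after a pod uses the PVC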

Alpha Features

Active development is underway on the following alpha features:

Some notable changes are:

  • Support for generating automated ARM images for cStor, Local PV provisioner, and control plane components. (@akhilerm, @shubham14bajpai, @kmova)
  • Support for generating ppc64le images for Local PV provisioner and Jiva components. (@pensu)
  • Support for automated e2e testing for ZFS Local PV and fixing issues reported by the alpha users. (@w3aman, @pawanpraka1,@stevefan1999-personal, @jemershaw)

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacities is not recommended.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community at https://slack.openebs.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
  • The new version of cStor Schema is being worked on to address the user feedback in terms of ease of use for cStor provisioning as well as to make it easier to perform Day 2 Operations on cStor Pools using GitOps. Note that existing StoragePoolClaim pools will continue to function as-is. Along with stabilizing the new schema, we have also started working on migration features - which will easily migrate the clusters to the new schema in the upcoming releases. Once the proposed changes are complete, seamless migration from older CRs to new will be supported. To track the progress of the proposed changes, please refer to this design proposal. Note: We recommend users to try out the new schema on greenfield clusters to provide feedback. Get started with these instructions: https://blog.mayadata.io/openebs/cstor-pool-operations-via-cspc-in-openebs

Getting Started

Prerequisite to install

  • Kubernetes 1.13+ is installed
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer Configuring OpenEBS with selinux=on

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.9.0.yaml

Install using helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.9.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 1.9 is supported only from 1.0 or higher and follows a similar process as earlier releases. The detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.9, either one at a time or multiple volumes.
  • Upgrade CStor Pools to 1.9 and its associated Volumes, either one at a time or multiple volumes.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you have issues setting up or upgrading, you can contact us via:

openebs - 1.8.0

Published by kmova over 4 years ago

Change Summary

For a detailed change summary, please refer to Release 1.8 Change Summary

Special thanks to our first-time contributors in this release: @Pensu, @novasharper, @nerdeveloper, and @nicklasfrahm.

OpenEBS v1.8 includes a critical fix (#2956) for Jiva volumes that are running version 1.6 or 1.7. You must use these pre-upgrade steps to check whether your Jiva volumes are impacted. If they are, please reach out to us on the OpenEBS Slack or the Kubernetes Slack #openebs channel for help with the upgrade.

Here are some of the key highlights in this release:

Key Improvements

  • Added support for configuring capacity threshold limit for a cStor Pool. The default threshold limit is set at 85%. The threshold setting has been introduced to avoid a scenario where pool capacity is fully utilized, resulting in failure of all kinds of operations - including pool expansion. #2937 ( @mynktl, @shubham14bajpai)
  • Validated that OpenEBS cStor can be used with K3OS(k3os-v0.9.0). #2686 (@gprasath)  

Key Bug Fixes

  • Fixes an issue where Jiva volumes could cause data loss when a node restarts during an ongoing space reclamation at its replica.  #2956( @utkarshmani1997 @payes)
  • Fixes an issue where cStor restore from scheduled backup fails, if the first scheduled backup was aborted. #2926 (@mynktl)
  • Fixes an issue where upgrade scripts were failing on Mac. #2952 (@novasharper)
  • Fixes documentation references to deprecated disk custom resource in example YAMLs. (@nerdeveloper)
  • Fixes documentation to include a troubleshooting section to work with openebs api server ports blocked due to advanced network configuration. #2843 (@nicklasfrahm)

Alpha Features

Active development is underway on the following alpha features:

Some notable changes are:

  • Support for generating automated ARM builds for Jiva. (@shubham14bajpai)
  • Support for generating automated ppc64le builds for Node Disk Manager. (@Pensu)
  • Support for volume expansion of ZFS Local PV and add automated e2e tests. (@pawanpraka1, @w3aman )
  • Support for declarative scale up and down of cstor volume replicas, increasing the e2e coverage and fixing the issue uncovered. (@mittachaitu, @gprasath, @nsathyaseelan  )
  • Incorporate the feedback on the cStor Custom Resource Schema and work towards v1 schema. (@sonasingh46 @prateekpandey14 @mittachaitu )   

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacities is not recommended.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community at https://slack.openebs.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
  • The new version of cStor Schema is being worked on to address the user feedback in terms of ease of use for cStor provisioning as well as to make it easier to perform Day 2 Operations on cStor Pools using GitOps. Note that existing StoragePoolClaim pools will continue to function as-is. Along with stabilizing the new schema, we have also started working on migration features - which will easily migrate the clusters to the new schema in the upcoming releases. Once the proposed changes are complete, a seamless migration from older CRs to the new schema will be supported. To track the progress of the proposed changes, please refer to this design proposal. Note: We recommend that users try out the new schema on greenfield clusters to provide feedback. Get started with these instructions: https://blog.mayadata.io/openebs/cstor-pool-operations-via-cspc-in-openebs

Getting Started

Prerequisite to install

  • Kubernetes 1.13+ is installed
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer Configuring OpenEBS with selinux=on

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.8.0.yaml

Install using helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.8.0

For more details refer to the documentation at https://docs.openebs.io/

Upgrade

Upgrade to 1.8 is supported only from 1.0 or higher and follows a similar process as earlier releases. The detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.8, one at a time
  • Upgrade CStor Pools to 1.8 and its associated Volumes, one at a time.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you have issues setting up or upgrading, you can contact us via:

openebs - 1.7.0

Published by kmova over 4 years ago

Change Summary

For a detailed change summary, please refer to: Release 1.7 Change Summary. The OpenEBS v1.7 release includes stability fixes and enhancements to the resiliency tests in the e2e pipelines. CSI Drivers for OpenEBS Jiva, cStor, and ZFS Local PV are available for alpha testing.

Key Bug Fixes

  • Fixes an issue where Jiva Replicas could get stuck in WO or NA state, when the size of the replica data grows beyond 300GB. #2809 ( @prateekpandey14)
  • Fixes an issue where unused custom resources from older versions were left behind in etcd even after OpenEBS was upgraded. (@shovanmaity, @shubham14bajpai)
  • Fixes an issue where cleanup of Jiva volumes on OpenShift 4.2 environment was failing. (@utkarshmani1997)
  • Fixes an issue where custom resources used by cStor Volumes fail to get deleted when the underlying pool was removed prior to deleting the volumes. (@mynktl)
  • Fixes an issue where a cStor Volume Replica would be incorrectly marked as invalid due to a race condition between a terminating pool pod and its corresponding newly launched pool pod. (@mittachaitu)

Alpha Features

Active development is underway on the following alpha features:

Some notable changes are:

  • Support for generating automated ARM builds for NDM. (@akhilerm)
  • Support for managing snapshot and clones of ZFS Local PV. (@pawanpraka1 )
  • Support for setting up PDB and PriorityClasses on cStor Pool Pods. Increasing the e2e coverage and fixing the issue uncovered. (@mittachaitu @shubham14bajpai @sonasingh46 @payes @prateekpandey14)
  • Support for resizing Jiva Volume via CSI driver and other bug fixes. (@utkarshmani1997)

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacity is not recommended.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community at https://slack.openebs.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
  • The new version of cStor Schema is being worked on to address the user feedback in terms of ease of use for cStor provisioning as well as to make it easier to perform Day 2 Operations on cStor Pools using GitOps. Note that existing StoragePoolClaim pools will continue to function as-is. Along with stabilizing the new schema, we have also started working on migration features - which will easily migrate the clusters to the new schema in the upcoming releases. Once the proposed changes are complete, a seamless migration from older CRs to the new schema will be supported. To track the progress of the proposed changes, please refer to this design proposal. Note: We recommend that users try out the new schema on greenfield clusters to provide feedback. Get started with these instructions: https://blog.mayadata.io/openebs/cstor-pool-operations-via-cspc-in-openebs

Getting Started

Prerequisite to install

  • Kubernetes 1.13+ is installed
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer Configuring OpenEBS with selinux=on

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.7.0.yaml

Install using helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.7.0

For more details refer to the documentation at: https://docs.openebs.io/

Upgrade

Upgrade to 1.7 is supported only from 1.0 or higher and follows a similar process as earlier releases. The detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.7, one at a time
  • Upgrade CStor Pools to 1.7 and its associated Volumes, one at a time.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you have issues setting up or upgrading, you can contact us via:

openebs - 1.6.0

Published by kmova almost 5 years ago

Change Summary

For a detailed change summary, please refer to: Release 1.6 Change Summary. Here are some key highlights.

Key Improvements

  • Added support to use Local PV on nodes with taints. (@rahulchheda)
  • Optimize the Jiva replica rebuild process in case of controller or replica restart. (@utkarshmani1997)
  • Add an option to helm chart to work in PSP enabled clusters. (@mpolednik)

Key Bug Fixes

  • Fixes an issue where cleanup of OpenEBS Local PV with hostpaths in OpenShift 4.2 environment was failing. (@akhilerm)
  • Fixes an issue where cStor cloned volume was always defaulting to ext4 filesystem. #2809 ( @EvanPrivate, @prateekpandey14)
  • Fixes an issue with openebs velero plugin when used on applications that use annotations to specify the storage class. (@mynktl)

Alpha Features

Active development is underway on the following alpha features:

Some notable changes are:

  • Support for generating ARM builds for cStor Control Plane. (@Wangzihao18)
  • Support for reporting ZFS Local PV metrics and setting up alert rules via Prometheus monitoring. (@pawanpraka1 )
  • Support for reporting cStor Volume metrics via the CSI driver as well as several other features that include ability to specify pod priority class, additional validations to catch user errors in the CSCP YAML, and so forth. (@mittachaitu @shubham14bajpai @sonasingh46 @payes @prateekpandey14)
  • Support for reporting Jiva Volume metrics via CSI driver and other bug fixes. (@utkarshmani1997)

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacity is not recommended.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community at https://slack.openebs.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
  • The new version of cStor Schema is being worked on to address the user feedback in terms of ease of use for cStor provisioning as well as to make it easier to perform Day 2 Operations on cStor Pools using GitOps. Note that existing StoragePoolClaim pools will continue to function as-is. Along with stabilizing the new schema, we have also started working on migration features - which will easily migrate the clusters to the new schema in the upcoming releases. Once the proposed changes are complete, a seamless migration from older CRs to the new schema will be supported. To track the progress of the proposed changes, please refer to this design proposal. Note: We recommend that users try out the new schema on greenfield clusters to provide feedback. Get started with these instructions: https://blog.mayadata.io/openebs/cstor-pool-operations-via-cspc-in-openebs

Getting Started

Prerequisite to install

  • Kubernetes 1.13+ is installed
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer Configuring OpenEBS with selinux=on

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.6.0.yaml

Install using helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.6.0

For more details refer to the documentation at: https://docs.openebs.io/

Upgrade

Upgrade to 1.6 is supported only from 1.0 or higher and follows a similar process as earlier releases. The detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.6, one at a time
  • Upgrade CStor Pools to 1.6 and its associated Volumes, one at a time.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you have issues setting up or upgrading, you can contact us via:

openebs - 1.5.0

Published by kmova almost 5 years ago

Change Summary

This is the last planned release of 2019, with some bug fixes and several improvements to the alpha features. For a detailed change summary, please refer to: Release 1.5 Change Summary. Here are some key highlights.

Key Improvements

  • Support BlockVolumeMode for OpenEBS Local PV backed by devices (see the sketch after this list). (@rahulchheda)
  • Support ZFS as a filesystem type for OpenEBS ZFS Local PV. (@pawanpraka1)
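For illustration, a raw block claim against the device-backed Local PV is just a PVC with volumeMode: Block; a minimal sketch, assuming the openebs-device StorageClass created by the default operator YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-device-block-pvc
spec:
  storageClassName: openebs-device   # device-backed Local PV class from the default install
  volumeMode: Block                  # request a raw block device instead of a filesystem
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi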

Key Bug Fixes

  • Fixed a bug where OpenEBS Local PV with hostpaths was failing in an OpenShift 4.2 environment. (@akhilerm)
  • Fixed a security vulnerability by upgrading the base (alpine) image to version 3.10.3. Prior to this release, the alpine image version used was 3.9. (@kmova)
  • Fixed a bug with the Liveness probe on cStor Pool pods that wasn’t detecting the loss of devices. (@muratkars, @mittachaitu, @shubham14bajpai)
  • Fixed a bug with cStor Volume initialization where replica failed to recover from an intermittent failure during initialization. (@mittachaitu)
  • Fixed a bug with Upgrade jobs that were failing to determine the image name in air-gapped environments where the container registry was hosted on a custom port, for example ourserver:port/openebs/theimagename:thetag (@davidkarlsen, @shubham14bajpai)

Alpha Features

Active development is underway on the following alpha features:

Some notable changes are:

  • Support for Block Device Replacement via the cStor YAML file (using new schema) (@sonasingh46,@mittachaitu)
  • Support for generating of ARM builds for cStor Data Engine. (@Wangzihao18, @jjjighg)
  • Enable automatic recovery of cStor Volumes from read-only file system condition caused by intermittent network errors that could last for more than 30 seconds. (@mittachaitu)
  • Enhance the cStor CSI Driver for automatically recovering from read-only issues with Ext4 filesystems. (@prateekpandey14, @payes)

Special shout out to our new contributors @jjjighg, @rahulchheda for working on some significant enhancements.

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacity is not recommended.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community at https://slack.openebs.io.
  • If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. This check has been put to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, which involve changing the status of the corresponding CSP to Init.
  • Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in https://github.com/openebs/openebs/issues/2855.
  • The new version of cStor Schema is being worked on to address the user feedback in terms of ease of use for cStor provisioning as well as to make it easier to perform Day 2 Operations on cStor Pools using GitOps. Note that existing StoragePoolClaim pools will continue to function as-is. Along with stabilizing the new schema, we have also started working on migration features - which will easily migrate the clusters to the new schema in the upcoming releases. Once the proposed changes are complete, a seamless migration from older CRs to the new schema will be supported. To track the progress of the proposed changes, please refer to this design proposal. Note: We recommend that users try out the new schema on greenfield clusters to provide feedback. Get started with these instructions: https://blog.mayadata.io/openebs/cstor-pool-operations-via-cspc-in-openebs

Getting Started

Prerequisite to install

  • Kubernetes 1.13+ is installed
  • Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer Configuring OpenEBS with selinux=on
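As an illustration of the filter change mentioned above, here is a trimmed sketch of the path-filter section of the NDM Config Map. The ConfigMap name and keys follow the openebs-operator YAML of this release line and may differ slightly between versions; always edit the ConfigMap that ships with the YAML you install.

apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config
  namespace: openebs
data:
  node-disk-manager.config: |
    filterconfigs:
      - key: path-filter
        name: path filter
        state: true
        include: ""
        exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"

Any device whose path matches an entry in exclude will be ignored by NDM and will not be considered for pool creation.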

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.5.0.yaml

Install using helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.5.0
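
Whichever install path you use, a quick sanity check (assuming the default openebs namespace) is to confirm that the control plane pods are running and that the default storage classes have been created:

kubectl get pods -n openebs
kubectl get sc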

For more details refer to the documentation at: https://docs.openebs.io/

Upgrade

Upgrade to 1.5 is supported only from 1.0 or higher and follows a process similar to earlier releases. The detailed steps are provided here.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.5, one at a time
  • Upgrade cStor Pools to 1.5 and their associated Volumes, one at a time.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues in setting up or upgrading, you can contact us via:

openebs - 1.4.0

Published by kmova almost 5 years ago

Change Summary

For a detailed change summary, please refer to: Release 1.4 Change Summary. Here are some key highlights.

Huge thanks to everyone who participated in Hacktoberfest 2019, adding another 20+ new contributors to the OpenEBS project.

Key Improvements

Key Bug Fixes

  • Fixed a security vulnerability that granted higher privileges to the default Kubernetes service account. This issue impacts any clusters that had OpenEBS installed using kubectl with the default openebs-operator.yaml. Thanks to @surajssd for detecting and resolving this issue. https://github.com/openebs/openebs/pull/2816
  • Fixed an issue where the Jiva Replica pod could get stuck in CrashLoopBackOff if it was restarted abruptly during its first attempt to start. @faithlesstomas
  • Fixed an issue where NDM was not able to detect devices from QEMU hardware. @footplus
  • Fixed an issue where NDM was not able to detect devices when the OS partition was on NVMe devices. @Modulus

Also, starting with this release, SPARSE block devices will not be created by default. If you need to create them, please update the SPARSE_FILE_COUNT env variable in the operator YAML or the helm value ndm.sparse.count (see the sketch below).
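Both knob names come from the note above; the value shown here is only an illustrative assumption (one sparse file per node), so check the chart and operator defaults before changing it.

# values fragment for the stable/openebs helm chart
ndm:
  sparse:
    count: "1"

# or, the equivalent env variable on the NDM DaemonSet in openebs-operator.yaml
- name: SPARSE_FILE_COUNT
  value: "1"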

Alpha Features

Active development is underway on the following alpha features:

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacity is not recommended.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned, or mounted, then the cStor Pool will not be created on the block device. In the current release, there are manual steps that can be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community at https://slack.openebs.io.
  • If you are using cStor pools with ephemeral devices, then starting with 1.2, the cStor Pool will not be automatically re-created on the new devices after a node restart. This check was added to make sure that pools are not silently rebuilt when a node comes back with new disks. The steps to recover from such a situation are provided here; they involve changing the status of the corresponding CSP to Init.
  • We have received feedback from the community that the provisioning of cStor Pools is not as simple as Jiva or Local PV and we are working on improving this experience in the upcoming releases. The proposed enhancements are being supported via new CRDs, so that the existing cStor deployments are not impacted. Once the proposed changes are complete, a seamless migration from older CRs to new will be supported. To track the progress of the proposed changes, please refer to this design proposal.

Getting Started

Prerequisite to install

  • Kubernetes 1.13+ is installed
  • Make sure that you run the below installation steps with the cluster admin context. The installation will involve creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.4.0.yaml

Install using helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.4.0

For more details refer to the documentation at: https://docs.openebs.io/

Upgrade

A significant improvement in the upgrade process compared to earlier releases is that upgrades can be triggered as Kubernetes Jobs instead of downloading and executing shell scripts. The detailed steps are provided here.

Upgrade to 1.4 is supported only from 1.0, 1.1, 1.2, or 1.3 and follows a process similar to earlier releases.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.4, one at a time
  • Upgrade cStor Pools to 1.4 and their associated Volumes, one at a time.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Support

If you are having issues in setting up or upgrading, you can contact us via:

openebs - 1.3.0

Published by kmova about 5 years ago

Getting Started

Prerequisite to install

  • Kubernetes 1.12+ is installed
  • Make sure that you run the below installation steps with the cluster admin context. The installation will involve creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.3.0.yaml

Install using helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.3.0

For more details refer to the documentation at: https://docs.openebs.io/

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacity is not recommended.

  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned, or mounted, then the cStor Pool will not be created on the block device. In the current release, there are manual steps that can be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community at https://slack.openebs.io.

  • If you are using cStor pools with ephemeral devices, then starting with 1.2, the cStor Pool will not be automatically re-created on the new devices after a node restart. This check was added to make sure that pools are not silently rebuilt when a node comes back with new disks. The steps to recover from such a situation are provided here; they involve changing the status of the corresponding CSP to Init.

  • We have received feedback from the community that the provisioning of cStor Pools is not as simple as Jiva or Local PV and we are working on improving this experience in the upcoming releases. The proposed enhancements are being supported via new CRDs, so that the existing cStor deployments are not impacted. Once the proposed changes are complete, a seamless migration from older CRs to new will be supported. To track the progress of the proposed changes, please refer to this design proposal.

Upgrade

A significant improvement in the upgrade process compared to earlier releases is that upgrades can be triggered as Kubernetes Jobs instead of downloading and executing shell scripts. The detailed steps are provided here.

Upgrade to 1.3 is supported only from 1.0, 1.1, or 1.2 and follows a process similar to earlier releases.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.3, one at a time
  • Upgrade cStor Pools to 1.3 and their associated Volumes, one at a time.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Change Summary

OpenEBS Release 1.3 is primarily intended for users upgrading to Kubernetes 1.16.

This release fixes a known issue where OpenEBS fails to provision volumes after upgrading to Kubernetes 1.16, as some of the Kubernetes APIs used for deploying Volume pods were deprecated. OpenEBS 1.3 moves all of the Deployments and DaemonSets to the apps/v1 API.

In addition, this release also saw some great progress in terms of enhancements, design and feasibility for the following features:

Special shout out to Marius (from OpenEBS Slack), @obeyler, @steved, @davidkarlsen for helping with reporting issues/enhancements with OpenEBS Helm charts and fixing/reviewing them.

Also, a huge thanks to @Wangzihao18 for helping refactor the build scripts to support ARM builds.

For detailed change summary, alpha features under development and steps to upgrade from previous version, please refer to: Release 1.3 Change Summary

openebs - 1.2.0

Published by kmova about 5 years ago

Getting Started

Prerequisite to install

  • Kubernetes 1.12+ is installed
  • Make sure that you run the below installation steps with the cluster admin context. The installation will involve creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.2.0.yaml

Install using helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.2.0

For more details refer to the documentation at: https://docs.openebs.io/

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.

  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. If you already are using the SPC with automatic block device selection, please watch for the known issues.
  • When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned, or mounted, then the cStor Pool will not be created on the block device. In the current release, there are manual steps that can be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community at https://slack.openebs.io.
  • If you are using cStor pools with ephemeral devices, then starting with 1.2, the cStor Pool will not be automatically re-created on the new devices after a node restart. This check was added to make sure that pools are not silently rebuilt when a node comes back with new disks. The steps to recover from such a situation are provided here; they involve changing the status of the corresponding CSP to Init.
  • We have received feedback from the community that the provisioning of cStor Pools is not as simple as Jiva or Local PV and we are working on improving this experience in the upcoming releases. The proposed enhancements are being supported via new CRDs, so that the existing cStor deployments are not impacted. Once the proposed changes are complete, a seamless migration from older CRs to new will be supported. To track the progress of the proposed changes, please refer to this design proposal.

Upgrade

A significant improvement in the upgrade process compared to earlier releases is that upgrades can be triggered as Kubernetes Jobs instead of downloading and executing shell scripts. The detailed steps are provided here.

Upgrade to 1.2 is supported only from 1.0 or 1.1 and follows a process similar to earlier releases.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.2, one at a time
  • Upgrade cStor Pools to 1.2 and their associated Volumes, one at a time.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Change Summary

OpenEBS Release 1.2 has been about fixing user-reported issues and continuing the development of alpha features. Major bugfixes in this release include:

  • Fixed issues related to the provisioning of Jiva, cStor and Local PVs on deployments where the label kubernetes.io/hostname and the node name are different. @obeyler
  • Enhanced Jiva Storage Engine to automatically clear internal snapshots created during node restarts. This feature supersedes the manual steps provided in the previous release to delete the internal snapshots via script. @rgembalik @amarshaw
  • Added support to customize the storage path used by the default Local PV and Jiva storage classes (see the StorageClass sketch after this list). @obeyler @nike38rus
  • Fixed an issue in NDM where a failure to detect the OS disk resulted in none of the other devices being discovered. @Modulus
  • Added validation for snapshot commands to check whether the underlying storage engine supports them. For example, snapshots are not supported for Local PV, and the controller will return an error. @NeilW
  • Added support to specify tolerations for NDM DaemonSet when deploying through Helm Chart. @omitrowski
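As an illustrative sketch of the storage-path customization mentioned above, a Local PV hostpath StorageClass with a custom base path might look like the following. The class name and the BasePath value are assumptions for this example; the default classes shipped with the operator may use different values.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath-custom
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/mnt/openebs-local"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete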

For detailed change summary, alpha features under development and steps to upgrade from previous version, please refer to: Release 1.2 Change Summary

openebs - 1.1.0

Published by kmova about 5 years ago

Getting Started

Prerequisite to install

  • Kubernetes 1.12+ is installed
  • Make sure that you run the below installation steps with the cluster admin context. The installation will involve creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on

Install using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.1.0.yaml

Install using helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.1.0

For more details refer to the documentation at: https://docs.openebs.io/

Major Limitations and Notes

For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. Here are some of the major limitations that you have to consider before upgrading to this release.

  • Jiva and Local PVs require that the label kubernetes.io/hostname and the node name be the same. While this is true for many Kubernetes deployments, we have come across some deployments where they are different. Please reach out to us on Slack if you need help deploying in such environments.
  • The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. We have received feedback from the community that the provisioning of cStor Pools is not as simple as Jiva or Local PV, and we are working on improving this experience in the upcoming releases. If you are already using the SPC with automatic block device selection, please watch for the known issues.

Upgrade

A significant improvement in the upgrade process compared to earlier releases is that upgrades can be triggered as Kubernetes Jobs instead of downloading and executing shell scripts. The detailed steps are provided here.

Upgrade to 1.1 is supported only from 1.0 and follows a process similar to earlier releases.

  • Upgrade OpenEBS Control Plane components.
  • Upgrade Jiva PVs to 1.1, one at a time
  • Upgrade cStor Pools to 1.1 and their associated Volumes, one at a time.

For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.

Change Summary

OpenEBS Release 1.1 has been about fixing and documenting the cross platform usability issues reported by users and also laying the foundation for some of the long overdue backlogs. Major features, enhancements and bug fixes in this release include:

  • Support for upgrading OpenEBS storage pools and volumes through a Kubernetes Job. As a user, you no longer have to download scripts to upgrade from 1.0 to 1.1, as in earlier releases. The procedure to upgrade via a Kubernetes Job is provided here. Kubernetes Job based upgrades are a step towards completely automating upgrades in the upcoming releases. We would love to hear your feedback on the proposed design. Note: The upgrade Job makes use of a new container image called quay.io/openebs/m-upgrade:1.1.0 (a skeleton is sketched after this list).
  • Support for an alpha version of the CSI driver with limited functionality for provisioning and deprovisioning of cStor volumes. Once you have OpenEBS 1.1 installed, you can take it for a spin on your development clusters using the instructions provided here. The CSI driver also requires a shift in how storage class parameters are passed on to the drivers. We want to keep this seamless, so please let us know if there is anything you would consider a nice-to-have as we shift towards the CSI driver.
  • Thanks to the support of the OpenEBS user community and contributors, we now support setting up OpenEBS on additional platforms like AWS MarketPlace, OpenShift Operator Hub, Rancher K3OS, and so forth. Special shoutout to @vitobotta for his extensive exploration of OpenEBS on several different cloud providers and platforms. His blog contains some good tips for both existing and new OpenEBS users.
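For orientation only, here is a skeleton of what such an upgrade Job can look like. Only the container image name comes from the note above; the Job name, namespace, service account, and arguments are assumptions and must be taken from the linked upgrade procedure for the resource you are upgrading.

apiVersion: batch/v1
kind: Job
metadata:
  name: openebs-upgrade-example          # placeholder name
  namespace: openebs
spec:
  backoffLimit: 4
  template:
    spec:
      serviceAccountName: openebs-maya-operator   # assumption: the OpenEBS service account
      restartPolicy: OnFailure
      containers:
      - name: upgrade
        image: quay.io/openebs/m-upgrade:1.1.0
        # args: [...]  # fill in exactly as documented in the upgrade steps linked above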

Another highlight of this release is an increased involvement from OpenEBS user community pitching in with GitHub Issues as well as providing contributions. Here are some issues that were raised and fixed within the current release.

  • Fixed an issue where backup and restore of cStor volume using OpenEBS velero-plugin was failing when OpenEBS was installed through helm. @gridworkz
  • Fixed an issue with NDM where the kubernetes.io/hostname for Block Devices on AWS Instances was being set as the nodeName. This was resulting in cStor Pools not being scheduled to the node as there was a mismatch between hostname and nodename in AWS instances. @obeyler
  • Fixed an issue where NDM was seen to crash intermittently on nodes where NVMe devices are attached. There was an issue in the handling of NVMe devices with write cache support, resulting in a segfault. [Private User]
  • Added support to disable the generation of default storage configuration like StorageClasses, in case the administrators would like to run a customized OpenEBS configuration. @nike38rus
  • Fixed an issue where the cStor Target would fail to start when the NDM sparse path is customized. @obeyler
  • Fixed a regression introduced into the cStor Sparse Pool that caused the entire Volume Replica to be recreated upon the restart of a cStor Sparse Pool. The fix ensures that the data is rebuilt from the peer Sparse Pools instead of being recreated. Test cases have been added to the e2e pipeline to catch this behaviour with Sparse Pools. Note that this doesn’t impact cStor Pools created on Block Devices. @vishnuitta
  • For Jiva Volumes, created a utility that can clear the internal snapshots created during replica restart and rebuild. For long-running volumes that have gone through multiple restarts, the number of internal snapshots can hit the maximum supported value of 255, after which the Replica will fail to start. The utility to check and clear the snapshots is available here. @rgembalik @amarshaw
  • Enhanced the velero-plugin to allow users to specify a backupPathPrefix for storing volume snapshots in a custom location. This allows users to save/backup the configuration and the volume snapshot data under the same location rather than in different locations (see the sketch after this list). @amarshaw
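A minimal sketch of where such a setting goes, assuming the cStor snapshot location for the velero-plugin is being configured. Only the backupPathPrefix key comes from the note above; the provider string, bucket, and prefix values are assumptions for illustration.

apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/cstor-blockstore   # assumption: velero-plugin provider name
  config:
    bucket: velero                        # assumption: your object-store bucket
    prefix: cstor                         # assumption: existing snapshot prefix
    backupPathPrefix: cluster-backups     # custom location prefix described above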

For detailed change summary and steps to upgrade from previous version, please refer to: Release 1.1 Change Summary

openebs - 1.0.0

Published by kmova over 5 years ago

Congratulations and thanks to everyone of you from the OpenEBS community for reaching this significant milestone!

Getting Started

Prerequisite to install

  • Kubernetes 1.12+ is installed
  • Make sure that you run the below installation steps with the cluster admin context. The installation will involve creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.0.0.yaml

Using helm stable charts

helm repo update
helm install --namespace openebs --name openebs stable/openebs

For more details refer to the documentation at: https://docs.openebs.io/

Upgrade

Upgrade to 1.0 is supported only from 0.9 and follows an approach similar to earlier releases.

  • Upgrade OpenEBS Control Plane components. This involves a pre-upgrade step.
  • Upgrade Jiva PVs to 1.0, one at a time
  • Upgrade cStor Pools to 1.0 and their associated Volumes, one at a time.

The detailed steps are provided here.

For upgrading from releases prior to 0.9, please refer to the respective release upgrade here.

Change Summary

OpenEBS Release 1.0 has multiple enhancements and bug fixes which include:

  • Major enhancements to Node Device Manager (NDM) to help with managing the lifecycle of block devices attached to the Kubernetes nodes.
  • The first and most widely deployed OpenEBS Data Engine, Jiva, has graduated to stable. Jiva is ideal for use cases where Kubernetes nodes have storage available via hostpaths. Jiva Volumes support thin provisioning as well as backup and restore via Velero.
  • cStor Data Engine continues to be a preferred solution for use cases that require instant snapshot and clone of volumes. This release has some more fixes around the rebuild and backup/restore scenarios.
  • The latest volume type - OpenEBS Local PV has graduated to beta with some users already using it in production. The current release enhances the support of Local PV by tighter integration into NDM and adding the ability to create Local PVs on attached block devices.

Note: If you have automated tools built around OpenEBS cStor Data Engine, please pay closer attention to the following changes:

  • The Storage Devices are now represented using the CR called blockdevice. For a list of blockdevices in your cluster, run kubectl get blockdevices -n <openebs namespace>
  • The StoragePoolClaim (SPC) that is used to set up the cStor Pools will have to be provided with blockdevices in place of disk CRs. For more details and examples, check the documentation; a minimal sketch follows this list.
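As a minimal illustration of the change above, an SPC that names blockdevices explicitly could look like the sketch below. The pool name, pool type, and blockdevice name are placeholders; refer to the documentation for the full schema and for the blockdevice names reported by kubectl get blockdevices.

apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
    - blockdevice-<uid-reported-by-ndm>    # placeholder; use a real blockdevice name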

For detailed change summary and steps to upgrade from previous version, please refer to: Release 1.0 Change Summary

For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.
