Most popular & widely deployed Open Source Container Native Storage platform for Stateful Persistent Applications on Kubernetes.
APACHE-2.0 License
Published by kmova over 3 years ago
OpenEBS v2.6 contains some key enhancements and several fixes for the issues reported by the user community across all 9 types of OpenEBS volumes.
Here are some of the key highlights in this release.
OpenEBS is introducing a new CSI driver for dynamic provisioning of Jiva volumes. This driver is released as alpha and currently supports the following additional features compared to the non-CSI jiva volumes.
For instructions on how to set up and use the Jiva CSI driver, please see https://github.com/openebs/jiva-operator.
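As an illustrative sketch (not from these release notes), a minimal StorageClass for the Jiva CSI driver might look like the following. The provisioner name and parameter keys are assumptions to verify against the jiva-operator repository:

```yaml
# Hypothetical minimal StorageClass for the alpha Jiva CSI driver.
# Verify provisioner name and parameters against
# https://github.com/openebs/jiva-operator before use.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-csi
provisioner: jiva.csi.openebs.io
parameters:
  cas-type: "jiva"
allowVolumeExpansion: true
```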
Kubernetes 1.17 or higher release is recommended as this release contains the following updates that will not be compatible with older Kubernetes releases.
If you are upgrading from an older version of cStor operators to this version, you will need to manually delete the cStor CSI driver object prior to upgrading: `kubectl delete csidriver cstor.csi.openebs.io`. For complete details on how to upgrade your cStor operators, see https://github.com/openebs/upgrade/blob/master/docs/upgrade.md#cspc-pools.
The CRD API version has been updated for the cStor custom resources to v1. If you are upgrading via the helm chart, make sure that the updated CRDs from https://github.com/openebs/cstor-operators/tree/master/deploy/helm/charts/crds are applied.
The e2e pipelines include upgrade testing only from releases 1.5 and higher to 2.6. If you are running a release older than 1.5, OpenEBS recommends you upgrade to the latest version as soon as possible.
Thank you @coboluxx (IDNT) for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md
Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.
MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.
A very special thanks to our first-time contributors to code, tests, and docs: @luizcarlosfaria, @Z0Marlin, @iyashu, @dyasny, @hanieh-m, @si458, @Ab-hishek
kubectl apply -f https://openebs.github.io/charts/2.6.0/openebs-operator.yaml
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.6.0
For more details refer to the documentation at https://docs.openebs.io/
Upgrade to 2.6 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues with setup or upgrade, you can contact:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
Published by kmova almost 4 years ago
A warm and happy new year to all our users, contributors, and supporters. 🎉 🎉 🎉.
Keeping up with our tradition of monthly releases, OpenEBS v2.5 is now GA with some key enhancements and several fixes for the issues reported by the user community. Here are some of the key highlights in this release:
OpenEBS has support for multiple storage engines, and the feedback from users has shown that users tend to only use a few of these engines on any given cluster depending on the workload requirements. As a way to provide more flexibility for users, we are introducing separate helm charts per engine. With OpenEBS 2.5 the following helm charts are supported.
(Special shout out to @sonasingh46, @shubham14bajpai, @prateekpandey14, @xUnholy, @akhilerm for continued efforts in helping to build the above helm charts.)
OpenEBS is introducing a new CSI driver for dynamic provisioning of Kubernetes Local Volumes backed by LVM. This driver is released as alpha and currently supports the following features.
For instructions on how to set up and use the LVM CSI driver, please see https://github.com/openebs/lvm-localpv.
Enhanced the ZFS Local PV scheduler to support spreading the volumes across the nodes based on the capacity of the volumes that are already provisioned. After upgrading to this release, capacity-based spreading will be used by default. In the previous releases, the volumes were spread based on the number of volumes provisioned per node. https://github.com/openebs/zfs-localpv/pull/266.
Added support to configure image pull secrets for the pods launched by the OpenEBS Local PV Provisioner and cStor (CSPC) operators. The image pull secrets (comma-separated strings) can be passed as an environment variable (OPENEBS_IO_IMAGE_PULL_SECRETS) to the deployments that launch these additional pods. The following deployments need to be updated:
- openebs-localpv-provisioner and openebs-ndm-operator
- cspc-operator and cvc-operator
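As a sketch, the environment variable takes a single comma-separated string of secret names; the secret names below are placeholders:

```yaml
# Container env entry for the deployments listed above.
# "regcred-1" and "regcred-2" are hypothetical image pull secret names.
env:
  - name: OPENEBS_IO_IMAGE_PULL_SECRETS
    value: "regcred-1,regcred-2"
```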
Added support to allow users to specify custom node labels for allowedTopologies under the cStor CSI StorageClass. https://github.com/openebs/cstor-csi/pull/135
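A hedged sketch of what such a StorageClass could look like; the label key and values below are placeholders, and the exact schema should be checked against the linked PR:

```yaml
# Hypothetical StorageClass restricting cStor CSI volumes to nodes
# carrying a custom topology label. "openebs.io/rack" and the zone
# values are illustrative placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cstor-csi-topology
provisioner: cstor.csi.openebs.io
allowedTopologies:
  - matchLabelExpressions:
      - key: openebs.io/rack
        values:
          - rack-1
          - rack-2
```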
Fixed an issue where PVCs would remain in a `pending` state on Kubernetes 1.20 and above clusters. K8s 1.20 has deprecated the `SelfLink` option, which causes this failure with the older Jiva and cStor provisioners. https://github.com/openebs/openebs/issues/3314
Kubernetes 1.17 or higher release is recommended as this release contains the following updates that will not be compatible with older Kubernetes releases.
CRD and related API versions have been updated to `v1`. (Thanks to the efforts from @RealHarshThakur, @prateekpandey14, @akhilerm.) If you are upgrading from an older version of cStor Operators to this version, you will need to manually delete the cStor CSI driver object prior to upgrade: `kubectl delete csidriver cstor.csi.openebs.io`. For complete details on how to upgrade your cStor Operators, see https://github.com/openebs/upgrade/blob/master/docs/upgrade.md#cspc-pools.
Thank you @laimison (Renthopper) for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md
Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.
MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.
A very special thanks to our first-time contributors to code, tests, and docs: @allenhaozi, @anandprabhakar0507, @Hoverbear, @kaushikp13, @praveengt
kubectl apply -f https://openebs.github.io/charts/2.5.0/openebs-operator.yaml
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.5.0
For more details refer to the documentation at https://docs.openebs.io/
Upgrade to 2.5 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues with setup or upgrade, you can contact:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
Published by kmova almost 4 years ago
OpenEBS v2.4 is our last monthly release for the year with some key enhancements and several fixes for the issues reported by the user community.
Note: With Kubernetes 1.20, the `SelfLink` option has been removed, which is used by the OpenEBS Jiva and cStor (non-CSI) provisioners. This causes the PVCs to remain in a `pending` state. The workaround and fix for this are being tracked under this issue. A patch release will be made available as soon as the fix has been verified on 1.20 platforms.
Here are some of the key highlights in this release:
ZFS Local PV has now been graduated to stable with all the supported features and upgrade tests automated via e2e testing. ZFS Local PV is best suited for distributed workloads that require resilient local volumes that can sustain local disk failures. You can read more about using the ZFS Local volumes at https://github.com/openebs/zfs-localpv and check out how ZFS Local PVs are used in production at Optoro.
OpenEBS is introducing a new NFS dynamic provisioner to allow the creation and deletion of NFS volumes using Kernel NFS backed by block storage. This provisioner is being actively developed and released as alpha. This new provisioner allows users to provision OpenEBS RWX volumes where each volume gets its own NFS server instance. In the previous releases, OpenEBS RWX volumes were supported via the Kubernetes NFS Ganesha and External Provisioner - where multiple RWX volumes share the same NFS Ganesha Server. You can read more about the new OpenEBS Dynamic Provisioner at https://github.com/openebs/dynamic-nfs-provisioner.
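A sketch of a StorageClass for the new NFS provisioner, following the annotation pattern used by other OpenEBS provisioners; the provisioner name and config keys are assumptions to verify against the dynamic-nfs-provisioner repository:

```yaml
# Hypothetical StorageClass for OpenEBS RWX volumes via the new
# dynamic NFS provisioner. Each PVC gets its own kernel NFS server
# backed by a block-storage StorageClass. Verify names against
# https://github.com/openebs/dynamic-nfs-provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-rwx
  annotations:
    openebs.io/cas-type: nfsrwx
    cas.openebs.io/config: |
      - name: NFSServerType
        value: "kernel"
      - name: BackendStorageClass
        value: "openebs-hostpath"
provisioner: openebs.io/nfsrwx
```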
In previous releases, Local PV used the node label `kubernetes.io/hostname` for setting the PV Node Affinity. Users can now specify a custom label to use for PV Node Affinity. Custom node affinity can be specified in the Local PV storage class as follows:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: NodeAffinityLabel
        value: "openebs.io/custom-node-id"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```
This will help with use cases where `kubernetes.io/hostname` is not unique across the cluster. (Ref: https://github.com/openebs/openebs/issues/2875)
The automatic patching of node affinity on Jiva deployments can now be disabled by setting the following environment variable on the OpenEBS provisioner deployment:

```yaml
- name: OPENEBS_IO_JIVA_PATCH_NODE_AFFINITY
  value: "disabled"
```
Enhanced the cStor Velero plugin to automatically set the target IP on restored volumes via the new configuration option `autoSetTargetIP` in the `VolumeSnapshotLocation`, as follows:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  ...
spec:
  config:
    ...
    autoSetTargetIP: "true"
```
(Huge thanks to @zlymeda for working on this feature, which involved coordinating this fix across multiple repositories.)
- Fixed NDM handling of dm devices that use the `/dev/dm-X` and `/dev/mapper/x` patterns. (Ref: https://github.com/openebs/openebs/issues/3310)
- Fixed an issue where `--dry-run` requests would fail due to admission webhook checks. (Ref: https://github.com/openebs/maya/issues/1771)
- Fixed NDM OS disk detection on `cos` nodes, where the root partition entry was set as `root=/dev/dm-0`. (Ref: https://github.com/openebs/node-disk-manager/pull/516)
- Renamed the Velero restore item action from `velero.io/change-pvc-node` to `velero.io/change-pvc-node-selector`. (Ref: https://github.com/openebs/velero-plugin/pull/139)
- Updated API versions from `v1beta1` to `v1`.
Thank you @FeynmanZhou (KubeSphere) for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md
Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.
MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.
A very special thanks to our first-time contributors to code, tests, and docs: @alexppg, @arne-rusek, @Atharex, @bobek, @Mosibi, @mpartel, @nareshdesh, @rahulkrishnanfs, @ssytnikov18, @survivant
kubectl apply -f https://openebs.github.io/charts/2.4.0/openebs-operator.yaml
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.4.0
For more details refer to the documentation at https://docs.openebs.io/
Upgrade to 2.4 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues with setup or upgrade, you can contact:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
Published by kmova almost 4 years ago
OpenEBS v2.3 is our Hacktoberfest release, with 40+ new contributors added to the project, and ships with ARM64 support for cStor, Jiva, and Dynamic Local PV. Mayastor is seeing higher adoption rates, resulting in further fixes and enhancements.
Here are some of the key highlights in this release:
ARM64 support (declared beta) for OpenEBS Data Engines - cStor, Jiva, Local PV (hostpath and device), ZFS Local PV.
Arch-specific images, following the naming convention `<image name>-amd64:<image-tag>`, are also made available from Docker Hub and Quay to support backward compatibility for users running OpenEBS deployments with arch-specific images. When upgrading, `to-image` can be pointed to the multi-arch image available from Docker Hub or your own copy of it.
Enhanced the cStor Velero Plugin to help with automating the restore from an incremental backup. Restoring an incremental backup involves restoring the full backup (also called the base backup) and subsequent incremental backups up to the desired incremental backup. With this release, the user can set a new parameter (`restoreAllIncrementalSnapshots`) in the `VolumeSnapshotLocation` to automate the restore of the required base and incremental backups. For detailed instructions to try this feature, please refer to this doc.
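A sketch of where the parameter sits; the provider string, resource name, and field placement are assumptions to verify against the linked doc:

```yaml
# Hypothetical VolumeSnapshotLocation enabling automated restore of
# base plus incremental backups for cStor volumes.
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: cstor-incremental
  namespace: velero
spec:
  provider: openebs.io/cstor-blockstore
  config:
    restoreAllIncrementalSnapshots: "true"
```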
OpenEBS Mayastor is seeing tremendous growth in terms of users trying it out and providing feedback. A lot of work in this release has gone into fixing issues, enhancing the e2e tests and control plane, and adding initial support for snapshots. For further details on enhancements and bug fixes in Mayastor, please refer to Mayastor.
Enhanced NDM to discover and create Block Device resources for dm-based devices such as `loopback` devices, `luks` encrypted devices, and `LVM` devices. Prior to this release, if users had to use dm devices, they would have to manually create the corresponding Block Device CRs.
Fixed an issue where the upgrade job would default to pulling images from `quay.io/openebs` when it didn't specify the registry location. The upgrade job will now fall back to the registry that is already configured on the existing pods.
The `dynamic-localpv-provisioner` has been moved from openebs/maya to its own repository as openebs/dynamic-localpv-provisioner. This refactoring of the source code will also help with a simplified build and faster release process per data engine.
Updated API versions from `v1beta1` to `v1`.
Thank you @shock0572 (ExactLab), @yydzhou (ByteDance), @kuja53, and @darioneto for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md
Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.
MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.
A very special thanks to our first-time contributors to code, tests, and docs: @filip-lebiecki, @hack3r-0m, @mtzaurus, @niladrih, @Akshay-Nagle, @Aman1440, @AshishMhrzn10, @Hard-Coder05, @ItsJulian, @KaranSinghBisht, @Naveenkhasyap, @Nelias, @Shivam7-1, @ShyamGit01, @Sumindar, @Taranzz25, @archit041198, @aryanrawlani28, @codegagan, @de-sh, @harikrishnajiju, @heygroot, @hnifmaghfur, @iTechsTR, @iamrajiv, @infiniteoverflow, @invidian, @kichloo, @lambda2, @lucasqueiroz, @prakhargurunani, @prakharshreyash15, @rafael-rosseto, @sabbatum, @salonigoyal2309, @sparkingdark, @sudhinm, @trishitapingolia, @vijay5158, @vmr1532.
kubectl apply -f https://openebs.github.io/charts/2.3.0/openebs-operator.yaml
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.3.0
For more details refer to the documentation at https://docs.openebs.io/
Upgrade to 2.3 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues with setup or upgrade, you can contact:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
Published by kmova about 4 years ago
OpenEBS v2.2 comes with a critical fix to NDM and several enhancements to cStor, ZFS Local PV and Mayastor. Here are some of the key highlights in this release:
OpenEBS ZFS Local PV adds support for Incremental Backup and Restore by enhancing the OpenEBS Velero Plugin. For detailed instructions to try this feature, please refer to this doc.
OpenEBS Mayastor instances now expose a gRPC API which is used to enumerate the block disk devices attached to the host node, to aid in identifying suitable candidates for inclusion within storage pools during configuration. This functionality is also accessible within the `mayastor-client` diagnostic utility. For further details on enhancements and bug fixes in Mayastor, please refer to the Mayastor release notes.
Enhanced the Velero Plugin to restore ZFS Local PV into a different cluster or a different node in the cluster. This feature depends on the Velero `velero.io/change-pvc-node` RestoreItemAction feature. https://github.com/openebs/velero-plugin/pull/118.
The Kubernetes custom resources for managing cStor Backup and Restore have been promoted to v1. This change is backward compatible with earlier resources and transparent to users. When the SPC resources are migrated to CSPC, the related Backup/Restore resources on older volumes are also upgraded to v1. https://github.com/openebs/upgrade/pull/59.
Enhanced the SPC to CSPC migration feature with multiple usability fixes, such as support for migrating multiple volumes in parallel (https://github.com/openebs/upgrade/pull/52) and the ability to detect changes in the underlying virtual disk resources (BDs) and automatically update them in CSPC (https://github.com/openebs/upgrade/pull/53). Prior to this release, when migrating to CSPC, the user needed to manually update the BDs.
Enhanced the Velero Plugin to use custom certificates for S3 object storage. https://github.com/openebs/velero-plugin/pull/115.
Enhanced cStor Operators to allow users to specify the name of the new node for a previously configured cStor Pool. This helps with scenarios where a Kubernetes node is replaced with a new node that is attached to the block devices from the old node containing the cStor Pool and the volume data. https://github.com/openebs/cstor-operators/pull/167.
Enhanced the NDM OS discovery logic for nodes that use `/dev/root` as the root filesystem. https://github.com/openebs/node-disk-manager/pull/492.
Enhanced NDM OS discovery logic to support excluding multiple devices that could be mounted as host filesystem directories. https://github.com/openebs/node-disk-manager/issues/224.
MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.
A very special thanks to our first-time contributors to code, tests, and docs: @didier-durand, @zlymeda, @avats-dev, and many more contributing via Hacktoberfest.
Thank you @danielsand for becoming a public reference and supporter of OpenEBS by sharing their use case on ADOPTERS.md
Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.
kubectl apply -f https://openebs.github.io/charts/2.2.0/openebs-operator.yaml
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.2.0
For more details refer to the documentation at https://docs.openebs.io/
Upgrade to 2.2 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues with setup or upgrade, you can contact:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
Published by kmova about 4 years ago
OpenEBS v2.1 is a developer release focused on code, tests and build refactoring along with some critical bug fixes and user enhancements. This release also introduces support for remote Backup and Restore of ZFS Local PV using OpenEBS Velero plugin.
Here are some of the key highlights in this release:
MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.
A very special thanks to our first-time contributors to code, tests, and docs: @rohansadale, @AJEETRAI707, @smijolovic, @jlcox1970
Thanks also to @sonasingh46 for being the 2.1 release coordinator.
Thank you @SeMeKh (Hamravesh), @tobg(TOBG Services Ltd) for becoming a public reference and supporter of OpenEBS by sharing their use case on ADOPTERS.md
Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.
kubectl apply -f https://openebs.github.io/charts/2.1.0/openebs-operator.yaml
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.1.0
For more details refer to the documentation at https://docs.openebs.io/
Upgrade to 2.1 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues with setup or upgrade, you can contact:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
Published by kmova about 4 years ago
OpenEBS has reached a significant milestone with v2.0 with support for cStor CSI drivers graduating to beta, improved NDM capabilities to manage virtual and partitioned block devices, and much more.
OpenEBS v2.0 includes the following Storage Engines that are currently deployed in production by various organizations:
OpenEBS v2.0 also includes the following Storage Engines, going through alpha testing at a few organizations. Please get in touch with us if you would like to participate in the alpha testing of these engines.
For a change summary since v1.12, please refer to Release 2.0 Change Summary.
Here are some of the key highlights in this release:
MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.
A very special thanks to our first-time contributors to code, tests, and docs: @silentred, @whoan, @sonicaj, @dhoard, @akin-ozer, @alexppg, @FestivalBobcats
Thanks also to @akhilerm for being the 2.0 release coordinator.
Thank you @nd2014-public(D-Rating), @baskinsy(Stratus5), @evertmulder(KPN) for becoming a public reference and supporter of OpenEBS by sharing their use case on ADOPTERS.md
A very special thanks to @yhrenlee for sharing the story in DoK Community, about how OpenEBS helped Arista with migrating their services to Kubernetes.
Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.
kubectl apply -f https://openebs.github.io/charts/2.0.0/openebs-operator.yaml
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs --version 2.0.0
For more details refer to the documentation at https://docs.openebs.io/
Upgrade to 2.0 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues with setup or upgrade, you can contact:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
Published by kmova over 4 years ago
The theme for OpenEBS v1.12 continues to be about polishing the OpenEBS storage engines Mayastor and the cStor CSI Driver, and preparing them for Beta. A lot of effort from the contributors went into evaluating more CI/CD and testing frameworks.
For a detailed change summary, please refer to Release 1.12 Change Summary.
Before getting into the release summary,
Important Announcement: OpenEBS Community Slack channels were migrated to the Kubernetes Slack Workspace by Jun 22nd
The OpenEBS channels on Kubernetes Slack are:
More details about this migration can be found here.
Here are some of the key highlights in this release:
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs
Added support to configure the admission webhook failure policy via an environment variable (`ADMISSION_WEBHOOK_FAILURE_POLICY`) on the admission server deployment. maya#1726 (@prateekpandey14)
MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.
A very special thanks to our first-time contributors to code, tests, and docs: @mikroskeem, @stuartpb, @utkudarilmaz
Thanks also to @mittachaitu for being the 1.12 release coordinator.
With gratitude and joy, we welcome the following members to the OpenEBS organization as reviewers for their continued contributions and commitment to help the OpenEBS project and community.
Check out our full list of maintainers and reviewers here. Our Governance policy is here.
Thank you @dstathos and @mikroskeem for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md
Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.
kubectl apply -f https://openebs.github.io/charts/1.12.0/openebs-operator.yaml
helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.12.1
For more details refer to the documentation at https://docs.openebs.io/
Upgrade to 1.12 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues with setup or upgrade, you can contact:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
Published by kmova over 4 years ago
The theme for OpenEBS v1.11 has been about polishing OpenEBS Storage engines Mayastor, ZFS Local PV, cStor CSI Driver, and preparing them for Beta. This release also includes several supportability enhancements and fixes for the existing engines.
For a detailed change summary, please refer to Release 1.11 Change Summary.
Before getting into the release details,
Important Announcement: OpenEBS Community Slack channels will be migrated to Kubernetes Slack Workspace by Jun 22nd
In the interest of neutral governance, OpenEBS community support via Slack is being migrated from the `openebs-community` Slack workspace (a free version of Slack with limited support for message retention) to the following OpenEBS channels on the Kubernetes Slack owned by CNCF.
The `#openebs-users` channel will be marked as read-only by June 22nd.
More details about this migration can be found here.
Given that the `openebs-community` Slack has been a neutral home for many vendors offering free and commercial support/products on top of OpenEBS, the workspace will continue to live on. These vendors are requested to create their own public channels; information about those channels can be communicated to users via the OpenEBS website by raising an issue/PR to https://github.com/openebs/website.
Here are some of the key highlights in this release:
- Made the NDM `filterconfigs.state` value configurable in the helm chart. charts#107 (@fukuta-tatsuya-intec)
- Excluded `rbd` devices from being used for creating Block Devices. charts#111 (@GTB3NW)
MANY THANKS to our existing contributors and to everyone helping keep the OpenEBS Community going.
A very special thanks to our first-time contributors to code, tests, and docs: @cruwe, @sgielen, @ShubhamB99, @GTB3NW, @Icedroid, @fukuta-tatsuya-intec, @mtmn, @nrusinko, @radicand, @zadunn, @xUnholy.
We also are delighted to have @harshthakur9030, @semekh, @vaniisgh contributing to OpenEBS via the CNCF Community Bridge Program.
Thanks also to @shubham14bajpai for being the 1.11 release coordinator.
Thank you @zadunn (Optoro), @meyskens, @stevefan1999-personal, @darias1986(DISID) for becoming a public reference and supporter of OpenEBS by sharing your use case on ADOPTERS.md
Are you using or evaluating OpenEBS? You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.
kubectl apply -f https://openebs.github.io/charts/1.11.0/openebs-operator.yaml
helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.11.0
For more details refer to the documentation at https://docs.openebs.io/
Upgrade to 1.11 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues with setup or upgrade, you can contact:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
Published by kmova over 4 years ago
The theme for OpenEBS v1.10 has been about polishing the new OpenEBS Storage engines Mayastor, ZFS Local PV, and preparing them for Beta. This release also includes several supportability enhancements and fixes for the existing engines.
For a detailed change summary, please refer to Release 1.10 Change Summary.
Here are some of the key highlights in this release:
MANY THANKS to everyone helping keep the OpenEBS Community Slack going, and very special thanks to the following people who joined us on GitHub for this release:
You can help OpenEBS in its journey towards becoming a CNCF Incubation project by sharing your OpenEBS story and joining the league of OpenEBS Adopters.
Thank you @aretakisv and @alexjmbarton for adding your OpenEBS usage stories to ADOPTERS.md
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.10.0.yaml
helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.10.0
For more details refer to the documentation at https://docs.openebs.io/
Upgrade to 1.10 is supported only from 1.0 or higher and follows a similar process as earlier releases. Detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues with setup or upgrade, you can contact us via:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
Published by kmova over 4 years ago
OpenEBS v1.9 includes several enhancements and bug fixes to the underlying cStor, Jiva and Local PV data engines. The improvements to the cStor backup and restore can help with a significant reduction in storage capacity consumption and also in enabling faster CI/CD pipelines. Some long-requested features like support for bulk volume upgrades and ARM containers for all OpenEBS containers are also included in this release.
For a detailed change summary, please refer to Release 1.9 Change Summary.
Special thanks to our first-time contributor @stevefan1999-personal in this release and also many thanks to @jemershaw, @lfillmore, @pkavajin, @SVronskiy, @zzzuzik, @davidkarlsen for reporting issues and testing the fixes.
Here are some of the key highlights in this release:
Active development is underway on the following alpha features:
Some notable changes are:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using cStor Storage Engine, please review the following before upgrading to this release.
The steps to recover a cStor pool after a node restart with ephemeral devices involve changing the status of the corresponding CSP to Init.
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.9.0.yaml
helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.9.0
For more details refer to the documentation at https://docs.openebs.io/
Upgrade to 1.9 is supported only from 1.0 or higher and follows a similar process to earlier releases. The detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues in setting up or upgrading, you can contact us via:
Published by kmova over 4 years ago
For a detailed change summary, please refer to Release 1.8 Change Summary.
Special thanks to our first-time contributors in this release. @Pensu @novasharper @nerdeveloper @nicklasfrahm
OpenEBS v1.8 includes a critical fix (#2956) for Jiva volumes that are running in versions 1.6 and 1.7. You must use these pre-upgrade steps to check if your Jiva volumes are impacted. If they are, please reach out to us on OpenEBS Slack or the Kubernetes Slack #openebs channel for help with the upgrade.
Here are some of the key highlights in this release:
disk custom resource in example YAMLs. (@nerdeveloper)
Active development is underway on the following alpha features:
Some notable changes are:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
The steps to recover a cStor pool after a node restart with ephemeral devices involve changing the status of the corresponding CSP to Init.
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.8.0.yaml
helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.8.0
For more details refer to the documentation at https://docs.openebs.io/
Upgrade to 1.8 is supported only from 1.0 or higher and follows a similar process to earlier releases. The detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues in setting up or upgrading, you can contact us via:
Published by kmova over 4 years ago
For a detailed change summary, please refer to: Release 1.7 Change Summary. The OpenEBS v1.7 release includes stability fixes and enhancements to the resiliency tests in the e2e pipelines. CSI Drivers for OpenEBS Jiva, cStor and ZFS Local PV are available for alpha testing.
Active development is underway on the following alpha features:
Some notable changes are:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
The steps to recover a cStor pool after a node restart with ephemeral devices involve changing the status of the corresponding CSP to Init.
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.7.0.yaml
helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.7.0
For more details refer to the documentation at: https://docs.openebs.io/
Upgrade to 1.7 is supported only from 1.0 or higher and follows a similar process to earlier releases. The detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues in setting up or upgrading, you can contact us via:
Published by kmova almost 5 years ago
For a detailed change summary, please refer to: Release 1.6 Change Summary. Here are some key highlights:
Active development is underway on the following alpha features:
Some notable changes are:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
The steps to recover a cStor pool after a node restart with ephemeral devices involve changing the status of the corresponding CSP to Init.
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.6.0.yaml
helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.6.0
For more details refer to the documentation at: https://docs.openebs.io/
Upgrade to 1.6 is supported only from 1.0 or higher and follows a similar process to earlier releases. The detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues in setting up or upgrading, you can contact us via:
Published by kmova almost 5 years ago
This is the last planned release of 2019, with some bug fixes and several improvements to the alpha features. For a detailed change summary, please refer to: Release 1.5 Change Summary. Here are some key highlights:
ourserver:port/openebs/theimagename:thetag (@davidkarlsen, @shubham14bajpai)
Active development is underway on the following alpha features:
Some notable changes are:
Special shout out to our new contributors @jjjighg, @rahulchheda for working on some significant enhancements.
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
The steps to recover a cStor pool after a node restart with ephemeral devices involve changing the status of the corresponding CSP to Init.
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.5.0.yaml
helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.5.0
For more details refer to the documentation at: https://docs.openebs.io/
Upgrade to 1.5 is supported only from 1.0 or higher and follows a similar process to earlier releases. The detailed steps are provided here.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues in setting up or upgrading, you can contact us via:
Published by kmova almost 5 years ago
For a detailed change summary, please refer to: Release 1.4 Change Summary. Here are some key highlights:
Huge thanks to everyone who participated in Hacktoberfest 2019, adding another 20+ new contributors to the OpenEBS project.
Two new cStor volume replica statuses have been added: NewReplicaDegraded and ReconstructingNewReplica. NewReplicaDegraded means the cStor volume replica is newly created and has successfully connected with its target pod. ReconstructingNewReplica means the cStor volume replica is newly created and has started reconstructing the entire data from another healthy replica. https://github.com/openebs/maya/pull/1514
user. This issue impacts any clusters that had OpenEBS installed using kubectl with the default openebs-operator.yaml. Thanks to @surajssd for detecting and resolving this issue. https://github.com/openebs/openebs/pull/2816
Also, starting with this release, the SPARSE block devices will not be created by default. If you need to create them, please update the SPARSE_FILE_COUNT env variable in the operator or the helm value ndm.sparse.count.
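As a sketch of the env var change described above, the setting would land in the Node Disk Manager DaemonSet spec. The DaemonSet and container names below are illustrative assumptions; check the names in your installed operator manifest:

```yaml
# Fragment of the NDM DaemonSet spec (names are illustrative).
# Setting SPARSE_FILE_COUNT to a non-zero value re-enables creation of
# sparse block devices, which are disabled by default from this release.
spec:
  template:
    spec:
      containers:
        - name: node-disk-manager
          env:
            - name: SPARSE_FILE_COUNT
              value: "1"
```

With helm, the equivalent would be passing the ndm.sparse.count value mentioned above, e.g. --set ndm.sparse.count=1, during install or upgrade.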
Active development is underway on the following alpha features:
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
The steps to recover a cStor pool after a node restart with ephemeral devices involve changing the status of the corresponding CSP to Init.
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.4.0.yaml
helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.4.0
For more details refer to the documentation at: https://docs.openebs.io/
A significant improvement in the upgrade process compared to earlier releases is that upgrades can be triggered as Kubernetes jobs instead of downloading and executing shell scripts. The detailed steps are provided here.
Upgrade to 1.4 is supported only from 1.0, 1.1, 1.2, or 1.3 and follows a similar process to earlier releases.
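The Kubernetes-job-based upgrade mentioned above can be sketched roughly as follows. This is an illustrative manifest only: the Job name, service account, container arguments, and the PV name are assumptions following the quay.io/openebs/m-upgrade image referenced elsewhere in these notes; the authoritative arguments and RBAC setup are in the detailed upgrade steps linked above:

```yaml
# Hypothetical upgrade Job sketch; verify image tag, args, and RBAC
# against the official upgrade documentation before running.
apiVersion: batch/v1
kind: Job
metadata:
  name: upgrade-cstor-volume        # illustrative name
  namespace: openebs
spec:
  backoffLimit: 4
  template:
    spec:
      serviceAccountName: openebs-maya-operator   # assumed service account
      containers:
        - name: upgrade
          image: quay.io/openebs/m-upgrade:1.4.0  # tag assumed to match target release
          args:
            - "cstor-volume"                      # illustrative resource kind
            - "--from-version=1.3.0"
            - "--to-version=1.4.0"
            - "pvc-1234abcd"                      # hypothetical PV name
      restartPolicy: OnFailure
```

Running the upgrade in-cluster this way avoids pulling shell scripts onto an operator workstation and leaves an auditable Job record in the openebs namespace.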
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
If you are having issues in setting up or upgrading, you can contact us via:
Published by kmova about 5 years ago
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.3.0.yaml
helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.3.0
For more details refer to the documentation at: https://docs.openebs.io/
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacity is not recommended.
When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community at https://slack.openebs.io.
If you are using cStor pools with ephemeral devices, starting with 1.2, upon node restart the cStor Pool will not be automatically re-created on the new devices. This check has been put in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, and involve changing the status of the corresponding CSP to Init.
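The recovery step above amounts to setting the CSP status back to Init. A minimal dry-run sketch, assuming a hypothetical CSP named cstor-disk-abc1 (take the real name from kubectl get csp), might assemble the patch like this:

```shell
# Sketch only: build the patch command for resetting a CSP to Init.
# "cstor-disk-abc1" is a hypothetical CSP name; list yours with:
#   kubectl get csp
CSP_NAME="cstor-disk-abc1"
PATCH='{"status":{"phase":"Init"}}'

# The command that would be run against the cluster (shown as a dry
# run; execute it only after confirming the CSP name and reviewing
# the documented recovery steps):
CMD="kubectl patch csp ${CSP_NAME} --type merge -p '${PATCH}'"
echo "${CMD}"
```

Whether a merge patch is sufficient, or the CSP must be edited directly with kubectl edit, depends on the installed CRD; the linked recovery steps are authoritative.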
We have received feedback from the community that the provisioning of cStor Pools is not as simple as Jiva or Local PV and we are working on improving this experience in the upcoming releases. The proposed enhancements are being supported via new CRDs, so that the existing cStor deployments are not impacted. Once the proposed changes are complete, a seamless migration from older CRs to new will be supported. To track the progress of the proposed changes, please refer to this design proposal.
A significant improvement in the upgrade process compared to earlier releases is that upgrades can be triggered as Kubernetes jobs instead of downloading and executing shell scripts. The detailed steps are provided here.
Upgrade to 1.3 is supported only from 1.0, 1.1, or 1.2 and follows a similar process to earlier releases.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
OpenEBS Release 1.3 is primarily intended for users upgrading to Kubernetes 1.16.
This release fixes a known issue where OpenEBS fails to provision volumes after upgrading to Kubernetes 1.16, because some of the Kubernetes APIs used for deploying volume pods were deprecated. OpenEBS 1.3 moves all of the deployments and daemonsets to the apps/v1 API.
In addition, this release also saw some great progress in terms of enhancements, design and feasibility for the following features:
Special shout out to Marius (from OpenEBS Slack), @obeyler, @steved, @davidkarlsen for helping with reporting issues/enhancements with OpenEBS Helm charts and fixing/reviewing them.
Also, a huge thanks to @Wangzihao18 for helping with refactoring the build scripts to support ARM builds.
For detailed change summary, alpha features under development and steps to upgrade from previous version, please refer to: Release 1.3 Change Summary
Published by kmova about 5 years ago
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.2.0.yaml
helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.2.0
For more details refer to the documentation at: https://docs.openebs.io/
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
The steps to recover a cStor pool after a node restart with ephemeral devices involve changing the status of the corresponding CSP to Init.
A significant improvement in the upgrade process compared to earlier releases is that upgrades can be triggered as Kubernetes jobs instead of downloading and executing shell scripts. The detailed steps are provided here.
Upgrade to 1.2 is supported only from 1.0 or 1.1 and follows a similar process to earlier releases.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
OpenEBS Release 1.2 has been about fixing user-reported issues and continuing the development of alpha features. Major bugfixes in this release include:
kubernetes.io/hostname and the node name are different. @obeyler
For detailed change summary, alpha features under development and steps to upgrade from previous version, please refer to: Release 1.2 Change Summary
Published by kmova about 5 years ago
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.1.0.yaml
helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.1.0
For more details refer to the documentation at: https://docs.openebs.io/
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. Here are some of the major limitations that you have to consider before upgrading to this release.
kubernetes.io/hostname and the node name be the same. While this is true for many Kubernetes deployments, we have come across some deployments where they are different. Please reach out to us on Slack if you need help in deploying on such environments.
A significant improvement in the upgrade process compared to earlier releases is that upgrades can be triggered as Kubernetes jobs instead of downloading and executing shell scripts. The detailed steps are provided here.
Upgrade to 1.1 is supported only from 1.0 and follows a similar process to earlier releases.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
OpenEBS Release 1.1 has been about fixing and documenting cross-platform usability issues reported by users, and laying the foundation for some long-overdue backlogs. Major features, enhancements, and bug fixes in this release include:
quay.io/openebs/m-upgrade:1.1.0.
Another highlight of this release is increased involvement from the OpenEBS user community, pitching in with GitHub Issues as well as providing contributions. Here are some issues that were raised and fixed within the current release.
OpenEBS was installed through helm. @gridworkz
kubernetes.io/hostname for Block Devices on AWS instances was being set as the nodeName. This was resulting in cStor Pools not being scheduled to the node, as there was a mismatch between hostname and nodename on AWS instances. @obeyler
backupPathPrefix for storing the volume snapshots in a custom location. This allows users to save/backup configuration and volume snapshot data under the same location, rather than saving the configuration and data in different locations. @amarshaw
For detailed change summary and steps to upgrade from previous version, please refer to: Release 1.1 Change Summary
Published by kmova over 5 years ago
Congratulations and thanks to everyone of you from the OpenEBS community for reaching this significant milestone!
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.0.0.yaml
helm repo update
helm install --namespace openebs --name openebs stable/openebs
For more details refer to the documentation at: https://docs.openebs.io/
Upgrade to 1.0 is supported only from 0.9 and follows a similar approach to earlier releases.
The detailed steps are provided here.
For upgrading from releases prior to 0.9, please refer to the respective release upgrade here.
OpenEBS Release 1.0 has multiple enhancements and bug fixes which include:
Note: If you have automated tools built around OpenEBS cStor Data Engine, please pay closer attention to the following changes:
blockdevice. For a list of blockdevices in your cluster, run: kubectl get blockdevices -n <openebs namespace>. Use blockdevices in place of disk CRs. For more details and examples, check the documentation.
For detailed change summary and steps to upgrade from previous version, please refer to: Release 1.0 Change Summary
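As a sketch of consuming blockdevices instead of disk CRs, a StoragePoolClaim can list them explicitly (the recommended approach noted in the cStor review above). The pool name and blockdevice names below are hypothetical placeholders; substitute names reported by kubectl get blockdevices -n <openebs namespace>:

```yaml
# Illustrative StoragePoolClaim using an explicit list of blockdevices.
# All names here are placeholders; take real blockdevice names from:
#   kubectl get blockdevices -n <openebs namespace>
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
      - blockdevice-0123456789abcdef0123456789abcdef
      - blockdevice-fedcba9876543210fedcba9876543210
```

Listing devices explicitly avoids the limitations of automatic block device selection called out earlier in these notes.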
For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.