openebs

Most popular & widely deployed Open Source Container Native Storage platform for Stateful Persistent Applications on Kubernetes.

APACHE-2.0 License


openebs - 0.9

Published by kmova over 5 years ago

Getting Started

Prerequisite to install

  • Kubernetes 1.12+ is installed
  • Make sure that you run the installation steps below with a cluster-admin context. The installation involves creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure an iSCSI initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from being discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS (a sample filter configuration is sketched after this list).
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant it access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
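
The NDM ConfigMap carries filter entries that control which device paths are excluded from discovery. The snippet below is only an illustrative sketch of the path filter: the ConfigMap name, config key, and default exclude list shown here are assumptions based on the operator YAML and may differ between OpenEBS versions, so cross-check against the YAML you are installing.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: openebs-ndm-config          # name assumed from the operator YAML
      namespace: openebs
    data:
      node-disk-manager.config: |
        filterconfigs:
          - key: path-filter
            name: path filter
            state: true
            include: ""
            exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-"   # append device paths to exclude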

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.9.0.yaml

Using helm stable charts

helm repo update
helm install  --namespace openebs --name openebs stable/openebs

For more details refer to the documentation at: https://docs.openebs.io/

Change Summary

OpenEBS Release 0.9 has multiple enhancements and bug fixes which include:

  • Support for Dynamic Provisioning of Local PV (using hostpath); an illustrative StorageClass is sketched after this list
  • Support for Backup/Restore of cStor Volumes using the Velero OpenEBS Plugin
  • Support for efficiently distributing/scheduling cStor Volume Replicas for StatefulSets
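
Local PV provisioning is driven by a StorageClass that selects the hostpath storage type. The snippet below is a minimal sketch only: the class name, BasePath value, and binding mode are illustrative assumptions, so refer to the 0.9 documentation for the exact fields supported by this release.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-hostpath               # illustrative name
      annotations:
        openebs.io/cas-type: local
        cas.openebs.io/config: |
          - name: StorageType
            value: hostpath
          - name: BasePath
            value: /var/openebs/local      # node directory used for PV data (assumption)
    provisioner: openebs.io/local
    volumeBindingMode: WaitForFirstConsumer
    reclaimPolicy: Delete

A PVC that references such a class is then provisioned as a hostpath-backed Local PV on the node where the consuming pod is scheduled.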

For a detailed change summary and steps to upgrade from the previous version, please refer to: Release 0.9 Change Summary

For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.

openebs - 0.9.0-RC2

Published by kmova over 5 years ago

Getting Started

Prerequisite to install

  • Kubernetes 1.12+ is installed
  • Make sure that you run the installation steps below with a cluster-admin context. The installation involves creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure an iSCSI initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from being discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant it access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.9.0-RC2.yaml

For change summary refer to: Release 0.9 Change Summary

openebs - 0.8.2

Published by kmova over 5 years ago

Getting Started

Prerequisite to install

  • Kubernetes 1.9.7+ is installed
  • Make sure that you run the installation steps below with a cluster-admin context. The installation involves creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure an iSCSI initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from being discovered, update the filters on NDM to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant it access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately.

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.8.2.yaml

Using helm stable charts

helm repo update
helm install  --namespace openebs --name openebs stable/openebs

For more details refer to the documentation at: https://docs.openebs.io/

Change Summary

This is a patch release with limited scope.

Major Bugs Fixed

  • Fixed an issue where a newly added cStor Volume Replica may not be successfully registered with the cStor target, if the cStor target tries to connect to the Replica before the Replica is completely initialized. (https://github.com/openebs/cstor/pull/214)
  • Fixed an issue with Jiva Volumes where the target could mark a Replica as timed out on IO, even when the Replica was actually processing a Sync IO. Fixed by increasing the timeout check to be greater than the timeout set on the Sync IO. (https://github.com/openebs/jiva/pull/194)
  • Fixed an issue with Jiva Volumes that would not allow Replicas to reconnect with the Target if the initial registration failed to successfully process the handshake request. (https://github.com/openebs/jiva/pull/195)
  • Fixed an issue with Jiva Volumes that would cause the Target to restart when a Send Diagnostic command was received from the client. (https://github.com/openebs/jiva/pull/197). Many thanks to @rgembalik for help with debugging the issue and validating the RC build.
  • Fixed an issue causing a PVC to be stuck in the pending state when more than one PVC was associated with an Application Pod. (https://github.com/openebs/maya/pull/1045)
  • Fixed an issue causing cStor Volume Replica CRs to be stuck when the OpenEBS namespace was being deleted. (https://github.com/openebs/maya/pull/955)

New Capabilities

  • Support for new Storage Policies:
    • Pool Tolerations (Applicable to cStor Pools) (https://github.com/openebs/maya/pull/1007).
      The Pool Tolerations policy can be used to allow scheduling of cStor Pool Pods on nodes with taints. The Tolerations can be specified in the cStor SPC as follows, where t1 and t2 represent the taints and the conditions as expected by Kubernetes:
        apiVersion: openebs.io/v1alpha1
        kind: StoragePoolClaim
        metadata:
          name: cstor-sparse-pool
          annotations:
            cas.openebs.io/config: |
              - name: Tolerations
                value: |-
                  t1:
                    effect: NoSchedule
                    key: nodeA
                    operator: Equal
                  t2:
                    effect: NoSchedule
                    key: app
                    operator: Equal
                    value: storage
      

Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs

For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.

Additional details and notes on upgrade and uninstall are available on the Project Tracker Wiki.

openebs - 0.8.1

Published by kmova over 5 years ago

Getting Started

Prerequisite to install

  • Kubernetes 1.9.7+ is installed
  • Make sure that you run the installation steps below with a cluster-admin context. The installation involves creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure an iSCSI initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from being discovered, update the filters on NDM to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant it access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately.

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.8.1.yaml

Using helm stable charts

helm repo update
helm install  --namespace openebs --name openebs stable/openebs

Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs

For more details refer to the documentation at: https://docs.openebs.io/

Change Summary

New Capabilities

  • Support for new Storage Policies:
    • TargetTolerations (Applicable to both cStor and Jiva volumes) (openebs/maya#921). The TargetTolerations policy can be used to allow scheduling of cStor or Jiva Target Pods on nodes with taints. The TargetTolerations can be specified in the StorageClass as follows, where t1 and t2 represent the taints and the conditions as expected by Kubernetes:
      annotations:
        cas.openebs.io/config: |
          - name: TargetTolerations
            value: |
              t1:
                key: "key1"
                operator: "Equal"
                value: "value1"
                effect: "NoSchedule"
              t2:
                key: "key1"
                operator: "Equal"
                value: "value1"
                effect: "NoExecute"    
      
    • ReplicaTolerations (Applicable to Jiva volumes) (openebs/maya#921). The ReplicaTolerations policy can be used to allow scheduling of Jiva Replica Pods on nodes with taints. The ReplicaTolerations can be specified in the StorageClass as follows, where t1 and t2 represent the taints and the conditions as expected by Kubernetes:
      annotations:
        cas.openebs.io/config: |
          - name: ReplicaTolerations
            value: |
              t1:
                key: "key1"
                operator: "Equal"
                value: "value1"
                effect: "NoSchedule"
              t2:
                key: "key1"
                operator: "Equal"
                value: "value1"
                effect: "NoExecute"    
      
    • The TargetNodeSelector policy can be used with cStor volumes as well (openebs/maya#914). The TargetNodeSelector policy can be specified in the Storage Class as follows, to pin the targets to a certain set of Kubernetes nodes labelled node: storage-node:
      annotations:
         openebs.io/cas-type: cstor
         cas.openebs.io/config: |
          - name: TargetNodeSelector
            value: |-
              node: storage-node
      
    • ScrubImage (Applicable to Jiva Volumes) (openebs/maya#936). After the Jiva volumes are deleted, a Scrub job is scheduled to clear the data. The container used to complete the scrub job is available at quay.io/openebs/openebs-tools:3.8. For deployments where the images can't be downloaded from the Internet, this image can be hosted locally and the location specified using the ScrubImage policy in the StorageClass as follows:
      annotations:
        cas.openebs.io/config: |
          - name: ScrubImage
            value: localrepo/openebs-tools:latest
      

Enhancements

  • Enhanced the documentation for better readability and revamped the guides for cStor Pool and Volume provisioning.
  • Enhanced the quorum handling logic in cStor volumes to reach quorum more quickly, by optimizing the retries and timeouts required to establish the quorum. (openebs/zfs#182)
  • Enhanced the cStor Pool Pods to include a Liveness check to fail fast if the underlying disks have been detached. (openebs/maya#894)
  • Enhanced the cStor Volume Target Pods to get rescheduled faster in the event of a node failure. (openebs/maya#894)
  • Enhanced the cStor Volume Replica placement to distribute Replicas randomly between the available Pools. Prior to this fix, the Replicas were always placed on the first available Pool; in case the Volumes were launched with a Replica count of 1, all the replicas were scheduled onto the first Pool only. (openebs/maya#910)
  • Enhanced the Jiva and cStor provisioner to set the upgrade strategy in the respective deployments to Recreate. (openebs/maya#923)
  • Enhanced the node-disk-manager to fetch additional details about the underlying disks via openSeaChest. The details will be fetched for devices that support them. (openebs/node-disk-manager#185)
    A new section called “Stats” is added that will include information like:
    Stats:
      Device Utilization Rate:  0
      Disk Temperature:
        Current Temperature:   0
        Highest Temperature:   0
        Lowest Temperature:    0
      Percent Endurance Used:  0
      Total Bytes Read:        0
      Total Bytes Written:     0
    
  • Enhanced the node-disk-manager to add additional information to the Disk CRs, such as whether the disk is partitioned or has a filesystem on it. (openebs/node-disk-manager#197)
    FileSystem and Partition details will be included in the Disk CR as follows:
    partitionDetails:
      - fileSystemType: None
        partitionType: "0x83"
      - fileSystemType: None
        partitionType: "0x8e"
    
    If the disk is formatted as a whole with a filesystem:
    fileSystem: ext4
    
  • Enhanced the Disk CRs to include a property called managed, which indicates whether node-disk-manager should modify the Disk CRs. When managed is set to false, node-disk-manager will not update the status of the Disk CRs. This is helpful in cases where administrators would like to create Disk CRs for devices that are not yet supported by node-disk-manager, like partitioned disks or NVMe devices. A sample YAML for specifying a custom disk can be found here. (openebs/node-disk-manager#192)
  • Enhanced the debuggability of cStor and Jiva volumes by adding additional details about the IOs in the CLI commands and logs. For more details check:
    • openebs/zfs#165
    • openebs/zfs#190
    • openebs/zfs#184
    • openebs/jiva#186
    • openebs/maya#891
  • Enhanced uZFS to include the zpool clear command (openebs/zfs#186) (openebs/zfs#187)
  • Enhanced the OpenEBS CRDs to include custom columns to be displayed in kubectl get .. output of the CR. This feature requires K8s 1.11 or higher. (openebs/maya#925)
    The sample output looks as follows:
    $ kubectl get csp -n openebs
    NAME                     ALLOCATED   FREE    CAPACITY    STATUS    TYPE       AGE
    sparse-claim-auto-lja7   125K        9.94G   9.94G       Healthy   striped    1h
    
    $ kubectl get cvr -n openebs
    NAME                                                              USED  ALLOCATED  STATUS    AGE
    pvc-9ca83170-01e3-11e9-812f-54e1ad0c1ccc-sparse-claim-auto-lja7   6K    6K         Healthy   1h
    
    $ kubectl get cstorvolume -n openebs
    NAME                                        STATUS    AGE
    pvc-9ca83170-01e3-11e9-812f-54e1ad0c1ccc    Healthy   4h
    
    $ kubectl get disk
    NAME                                      SIZE          STATUS   AGE
    sparse-5a92ced3e2ee21eac7b930f670b5eab5   10737418240   Active   10m
    

Major Bugs Fixed

  • Fixed an issue where cStor target was not rebuilding the data onto replicas, after the underlying cStor Pool was recreated with new disks. This scenario occurs in Cloud Provider or Private Cloud with Virtual Machine deployments, where ephemeral disks are used to create cstor pools and after a node reboot, the node comes up with new ephemeral disks. (openebs/zfs#164) (openebs/maya#899)
  • Fixed an issue where a cStor volume caused a timeout for the iSCSI discovery command, which could potentially trigger a K8s vulnerability that can bring down a node with high RAM usage. (openebs/istgt#231)
  • Fixed an issue where the cStor iSCSI target was not conforming to the iSCSI protocol w.r.t. the immediate bit during the discovery phase. (openebs/istgt#231)
  • Fixed an issue where some of the internal snapshots and clones created for cStor Volume rebuild purposes were not cleaned up. (openebs/zfs#200)
  • Fixed an issue where Jiva Replica Pods continued to show as running even when the underlying disks were detached. The error is now handled and the Pod is restarted. (openebs/openebs#1387)
  • Fixed an issue where Jiva Replicas that have a large number of sparse extent files would time out in connecting to their Jiva Targets. This causes data unavailability if the Target can't establish quorum with the already connected replicas. (openebs/jiva#184)
  • Fixed an issue where node-disk-manager pods were not getting upgraded to latest version, even after the image version was changed. (openebs/node-disk-manager#200)
  • Fixed an issue where ndm would get into CrashLoopBackoff when run in unprivileged mode (openebs/node-disk-manager#198)
  • Fixed an issue with mayactl in displaying cStor volume details, when cStor target is deployed in its PVC namespace. (openebs/maya#891)

Backwards Incompatibilities

  • From 0.8.0:
    • mayactl snapshot commands are deprecated in favor of the kubectl approach of taking snapshots.
  • For previous releases, please refer to the respective release notes and upgrade steps.

Upgrade

Upgrade to 0.8.1 is supported only from 0.8.0 and follows a similar approach like earlier releases.

  • Upgrade OpenEBS Control Plane components
  • Upgrade Jiva PVs to 0.8.1, one at a time
  • Upgrade cStor Pools to 0.8.1 and its associated Volumes, one at a time.

Note that the upgrade uses node labels to pin the Jiva replicas to the nodes where they are present. On node restart, these labels will disappear, which can cause the replica not to be scheduled.

The scripts and detailed instructions for upgrade are available here.

Uninstall

  • The recommended steps to uninstall are:
    • delete all the OpenEBS PVCs that were created
    • delete all the SPCs (in case of cStor)
    • ensure that no volume or pool pods are stuck in a terminating state: kubectl get pods -n <openebs namespace>
    • ensure that no openebs custom resources are present: kubectl get cvr -n <openebs namespace>
    • delete OpenEBS either via helm purge or kubectl delete
  • Uninstalling OpenEBS doesn't automatically delete the CRDs that were created. If you would like to completely remove the CRDs and the associated objects, run the following commands:
    kubectl delete crd castemplates.openebs.io
    kubectl delete crd cstorpools.openebs.io
    kubectl delete crd cstorvolumereplicas.openebs.io
    kubectl delete crd cstorvolumes.openebs.io
    kubectl delete crd runtasks.openebs.io
    kubectl delete crd storagepoolclaims.openebs.io
    kubectl delete crd storagepools.openebs.io
    kubectl delete crd volumesnapshotdatas.volumesnapshot.external-storage.k8s.io
    kubectl delete crd volumesnapshots.volumesnapshot.external-storage.k8s.io
    
  • As part of deleting the Jiva Volumes - OpenEBS launches scrub jobs for clearing the data from the nodes. The completed jobs need to be cleared using the following command:
    kubectl delete jobs -l openebs.io/cas-type=jiva -n <namespace>

Limitations / Known Issues

  • The current version of OpenEBS volumes is not optimized for performance-sensitive applications
  • cStor Target or Pool pods can at times be stuck in a Terminating state. They will need to be manually cleaned up using kubectl delete with a 0 sec grace period. Example: kubectl delete deploy <volume-target-deploy> -n openebs --force --grace-period=0
  • cStor Pool pods can consume more Memory when there is continuous load. This can cross the memory limit and cause pod evictions. It is recommended that you create the cStor pools by setting the Memory limits and requests.
  • Jiva Volumes are not recommended if your use case requires snapshot and clone capabilities.
  • Jiva Replicas use a sparse file to store the data. When the application causes too many fragments (extents) to be created on the sparse file, a replica restart can cause the replica to take a longer time to get attached to the target. This issue was seen when there were 31K fragments created.
  • Volume Snapshots are dependent on the functionality provided by Kubernetes. The support is currently alpha. The only operations supported are Create Snapshot, Delete Snapshot and Clone from a Snapshot.
  • Creation of the Snapshot uses a reconciliation loop, which means that a Create Snapshot operation will be retried on failure until the Snapshot has been successfully created. This may not be a desirable option in cases where point-in-time snapshots are expected.
  • If you are using a K8s version earlier than 1.12, in certain cases it will be observed that when the node running the target pod is offline, the target pod can take more than 120 seconds to get rescheduled. This is because target pods are configured with Tolerations based on the Node Condition, and TaintNodesByCondition is available only from K8s 1.12. If running an earlier version, you may have to enable the alpha gate for TaintNodesByCondition. If there is active load on the volume when the target pod goes offline, the volume will be marked as read-only.
  • If you are using K8s version 1.13 or later, which includes the checks on ephemeral storage limits on the Pods, there is a chance that OpenEBS cStor and Jiva pods can get evicted because there are no ephemeral requests specified. To avoid this issue, you can specify the ephemeral storage requests in the storage class or storage pool claim. (openebs/openebs#2294)
  • When disks used by a cStor Pool are detached and reattached, the cStor Pool may fail to detect this event in certain scenarios. A manual intervention may be required to bring the cStor Pool online. (openebs/openebs#2363)
  • When the underlying disks used by cStor or Jiva volumes are under disk pressure due to heavy IO load, and the Replicas take longer than 60 seconds to process the IO, the Volumes will go into a read-only state. In 0.8.1, logs have been added to the cStor and Jiva replicas to indicate if IO has longer latency. (openebs/openebs#2337)

For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.

Additional details and notes on upgrade and uninstall are available on the Project Tracker Wiki.

openebs - 0.8

Published by kmova almost 6 years ago

Getting Started

Prerequisite to install

  • Kubernetes 1.9.7+ is installed
  • Make sure that you run the installation steps below with a cluster-admin context. The installation involves creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure an iSCSI initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from being discovered, update the filters on NDM to exclude those paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access device information. Please make the necessary changes to grant it access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately.

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.8.0.yaml

Using helm stable charts

helm repo update
helm install  --namespace openebs --name openebs stable/openebs

Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs

For more details refer to the documentation at: https://docs.openebs.io/

Change Summary

New Capabilities

  • Support for creating instant Snapshots on cStor volumes that can be used for both data protection and data warm-up use cases. The snapshots can be taken using kubectl by providing a VolumeSnapshot YAML as shown below:

    apiVersion: volumesnapshot.external-storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:  
      name: <snapshot-name>
      namespace: default
    spec:
      persistentVolumeClaimName: <cstor-pvc-name>
    

    cStor Volume Snapshot creation is controlled by the cStor Target Pod by flushing all the pending IOs to the Replicas and requesting each of the Replicas to take a snapshot. A cStor Volume Snapshot can be taken as long as two Replicas are Healthy.

    Since snapshots can be managed via kubectl, you can set up your own K8s cron job to take periodic snapshots.

  • Support for creating clone PVs from a previously taken snapshot. Clones can be used both for recovering data from a previously taken snapshot and for optimizing application startup time. Application startup time can be reduced in use cases where an application pod requires some kind of seed data to be available. With clone support, users can set up a seed volume, fill it with data, and create a snapshot of the seed data. When the applications are launched, the PVCs can be set up to create cloned PVs from the seed data snapshot. cStor Clones are also optimized to minimize the capacity overhead: cStor Clones are reference based and need capacity only for new or modified data.

    Clones can be created using the PVC YAML as shown below:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <cstor-clone-pvc-name>
      namespace: default
      annotations:
        snapshot.alpha.kubernetes.io/snapshot: <snapshot-name>
    spec:
      storageClassName: openebs-snapshot-promoter
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
    

    Note that the requested storage specified (like 5Gi in the above example) should match the requested storage on the <cstor-pvc-name>. <cstor-pvc-name> is the PVC on which the snapshot was taken.

  • Support for providing the runtime status of the cStor Volumes and Pools via kubectl describe commands. For example:

    • cStor Volume Status can be obtained by kubectl describe cstorvolume <cstor-pv-name> -n <openebs-namespace>. The status reported will contain information as shown below and more:
      Status:
        Phase:  Healthy
        Replica Statuses:
          Replica Id:           15041373535437538769
          Status:               Healthy
          Up Time:              14036
          Replica Id:           6608387984068912137
          Status:               Healthy
          Up Time:              14035
          Replica Id:           17623616871400753550
          Status:               Healthy
          Up Time:              14034
      
    • Similarly, each cStor Pool status can also be fetched by a kubectl describe csp <pool-name> -n <openebs-namespace>. The details shown are:
      status:
        capacity:
          free: 9.62G
          total: 9.94G
          used: 322M
        phase: Healthy
      
    • The above describe command for a cStor Volume can also show details like the number of IOs currently in flight to the Replicas and details of how the Volume status has changed via Kubernetes events.
    • The interval between updating the status can be configured via the Storage Policy - ResyncInterval. The default sync interval is 30s.
  • The status of a Volume or Pool can be one of: Init, Healthy, Degraded, Offline, Error.
  • Support for new Storage Policies for cStor Volumes such as:

    • Target Affinity: (Applicable to both Jiva and cStor Volumes) The stateful workloads access OpenEBS Storage by connecting to the Volume Target Pod. This policy can be used to co-locate the volume target pod on the same node as the workload to avoid conditions like:

      • network disconnects between the workload node and target node
      • shutting down of the node on which volume target pod is scheduled for maintenance.
        In the above cases, if the restoration of network, pod or node takes more than 120 seconds, the workload loses connectivity to the storage.
        This feature makes use of the Kubernetes Pod Affinity feature that is dependent on the Pod labels. User will need to add the following label to both Application and PVC.
      labels:
        openebs.io/target-affinity: <application-unique-label>
      

      Example of using this policy can be found here.

      Note that this Policy only applies to Deployments or StatefulSets with a single workload instance.

    • Target Namespace: (Applicable to only cStor volumes). By default the cStor target pods are scheduled in a dedicated openebs namespace. The target pod is also provided with the openebs service account so that it can access the Kubernetes Custom Resource called CStorVolume and Events.
      This policy allows the cluster administrator to specify whether the Volume Target pods should be deployed in the namespace of the workloads itself. This can help with setting the limits on the resources of the target pods, based on the namespace in which they are deployed.
      To use this policy, the Cluster administrator could either use the existing openebs service account or create a new service account with limited access and provide it in the StorageClass as follows:

        annotations:
          cas.openebs.io/config: |
            - name: PVCServiceAccountName
              value: "user-service-account"
      

      The sample service account can be found here

  • Support for sending anonymous analytics to a Google Analytics server. This feature can be disabled by setting the maya-apiserver environment flag OPENEBS_IO_ENABLE_ANALYTICS to false; a sketch of how this flag can be set is shown below. Very minimal information, like the K8s version and the type of volumes being deployed, is collected. No sensitive information like names or IP addresses is collected. The details collected can be viewed here.
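
As a rough sketch only, the flag is an environment variable on the maya-apiserver Deployment; the container name and fragment below are assumptions, so match them against your installed operator YAML.

    spec:
      containers:
        - name: maya-apiserver              # container name assumed
          env:
            - name: OPENEBS_IO_ENABLE_ANALYTICS
              value: "false"                # disables anonymous analytics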

Enhancements

  • Enhance the metrics reported from cStor and jiva Volumes to include target and replica status.
  • Enhance the volume metrics exporter to provide metrics in json format.
  • Enhance the maya-apiserver API to include a stats api that will pass through the request to the respective volume exporter metrics API.
  • Enhance cStor storage engine to be resilient against replica failures and cover several corner cases associated with rebuild replica.
  • Enhance jiva storage engine to clear up the space occupied by temporary snapshots taken by the replicas during replica rebuild.
  • Enhance jiva storage engine to support sync and unmap IOs.
  • Enhance CAS Templates to allow invoking REST API calls to non-Kubernetes services.
  • Enhance CAS Templates to support an option to disable a Run Task.
  • Enhance CAS Templates to include .CAST object which will be available for Run Tasks. .CAST contains information like openebs and kubernetes versions.
  • Enhance the maya-apiserver installer code to remove the dependency on the config map and to determine and load the CAS Templates based on maya-apiserver version. When maya-apiserver is upgraded from 0.7 to 0.8 - a new set of default CAS Templates will be available for 0.8.
  • Enhance mayactl (CLI) to include bash or zsh completion support. Users need to run: source <(mayactl completion bash).
  • Enhance the volume provisioning to add Prometheus annotations for scrape and port to the volume target pods.
  • Enhance the build scripts to push commit tagged images and also add support for GitLab based CI.
  • Enhance the CI scripts in each of the repos to cover new features.
  • 250+ PRs merged from the community fixing the documentation, code style/lint and add missing unit tests across various repositories.

Major Bugs Fixed

  • Fixed an issue where cStor pool can become inaccessible if two pool pods attempt to access the same disks. This can happen during pool pod termination/eviction, followed by immediately scheduling a new pod on the same node.
  • Fixed an issue where a cStor pool can restart if one of the cStor volume target pods is restarted.
  • Fixed an issue with auto-creation of cStor Pools using SPC and type as mirrored. The type was being ignored during the pool creation.
  • Fixed an issue with recreating the cStor Pool by automatically selecting Disks. A check has been added to only pick the Active Disks on the node.
  • Fixed an issue with provisioning of cStor Volumes by creating the Replica even if the Pool is offline during provisioning. After the Pool comes back online, the Replica will be created on the Pool.

Backwards Incompatibilities

  • None from 0.7.0
  • For previous releases, please refer to the respective release notes and upgrade steps. Upgrade to 0.8.0 is supported only from 0.7.0.

Limitations / Known Issues

  • The current version of OpenEBS volumes is not optimized for performance-sensitive applications
  • cStor Target or Pool pods can at times be stuck in a Terminating state. They will need to be manually cleaned up using kubectl delete with 0 sec grace period. Example:
    kubectl delete deploy <volume-target-deploy> -n openebs --force --grace-period=0
  • cStor Pool pods can consume more Memory when there is continuous load. This can cross the memory limit and cause pod evictions. It is recommended that you create the cStor pools by setting the Memory limits and requests.
  • Jiva Volumes are not recommended if your use case requires snapshot and clone capabilities.
  • Jiva Replicas use a sparse file to store the data. When the application causes too many fragments (extents) to be created on the sparse file, a replica restart can cause the replica to take a longer time to get attached to the target. This issue was seen when there were 31K fragments created.
  • Volume Snapshots are dependent on the functionality provided by Kubernetes. The support is currently alpha. The only operations supported are:
    • Create Snapshot, Delete Snapshot and Clone from a Snapshot
    • Creation of the Snapshot uses a reconciliation loop, which means that a Create Snapshot operation will be retried on failure until the Snapshot has been successfully created. This may not be a desirable option in cases where point-in-time snapshots are expected.
  • If you are using a K8s version earlier than 1.12, in certain cases it will be observed that when the node running the target pod is offline, the target pod can take more than 120 seconds to get rescheduled. This is because target pods are configured with Tolerations based on the Node Condition, and TaintNodesByCondition is available only from K8s 1.12. If running an earlier version, you may have to enable the alpha gate for TaintNodesByCondition. If there is active load on the volume when the target pod goes offline, the volume will be marked as read-only.

For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.

Additional details and notes on upgrade and uninstall are available on the Project Tracker Wiki.

openebs - 0.7.2

Published by kmova almost 6 years ago

Getting Started

Prerequisite to install

  • Kubernetes 1.9.7+ is installed
  • Make sure that you run the installation steps below with a cluster-admin context. The installation involves creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure an iSCSI initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from being discovered, update the filters on NDM to exclude those paths before installing OpenEBS.

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.2.yaml

Using helm stable charts

helm repo update
helm install  --namespace openebs --name openebs stable/openebs

Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs

For more details refer to the documentation at: https://docs.openebs.io/

Change Summary

Minor enhancements

  • Support for clearing the space used by a Jiva Replica after the Jiva Volume is deleted. The space reclaim is done by scheduling a Kubernetes Job. Note: If your cluster setup requires lots of OpenEBS volumes to be created and deleted, you will need to set up a cron job to clear the completed jobs. This is required until the Kubernetes TTL Mechanism for Finished Jobs feature is supported.
  • Support for a storage policy that can disable Jiva Volume space reclaim. Please add the following to your StorageClasses if you would like to disable Jiva Volume space reclaim (in other words, retain the volume data post PVC deletion).
      cas.openebs.io/config: |
        - name: RetainReplicaData
          enabled: true  
    
  • Support for a storage policy to allow scheduling the Jiva Target Pods on the same node as the Application Pod. This feature makes use of the Kubernetes Pod Affinity feature that is dependent on Pod labels. You will need to add the following label to both the Application and the PVC (a fuller illustrative example follows this list).
    labels:
      openebs.io/target-affinity: <application-unique-label>
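
    A fuller sketch of the policy is shown below, with the same label applied to both the PVC and the application Pod; the object names, label value, and StorageClass used here are illustrative assumptions.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-vol-claim
      labels:
        openebs.io/target-affinity: demo-app   # must match the label on the application Pod
    spec:
      storageClassName: openebs-jiva-default   # illustrative; any Jiva StorageClass works
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-app
      labels:
        openebs.io/target-affinity: demo-app   # same unique label as on the PVC
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sleep", "3600"]
          volumeMounts:
            - mountPath: /data
              name: demo-vol
      volumes:
        - name: demo-vol
          persistentVolumeClaim:
            claimName: demo-vol-claim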
    

Bug Fixes

  • Fixed an issue where internal snapshots created for rebuilding were getting deleted in case of errors in opening or holding the snapshots.
  • Fixed an issue in sending cStor volume metrics to Prometheus, such as uptime, read/write block count, and read/write time (ns)

Detailed release notes are maintained in Project Tracker Wiki.

Limitations

  • Jiva target to Replica message protocol has been enhanced to handle the write errors. This change in the data exchanges causes the older replicas to be incompatible with the newer target and vice versa. The upgrade involves shutting down all the replicas before launching them with the new version. Since the volume requires the target and at least 2 replicas to be online, chances of volumes getting into the read-only state during upgrade are higher. A manual intervention will be required to recover the volume.
  • For OpenEBS volumes configured with more than 1 replica, at least more than half of the replicas should be online for the Volume to allow Read and Write. In the upcoming releases, with cStor data engine, Volumes can be allowed to Read/Write when there is at least one replica in the ready state.
  • This release contains a preview support for cloning an OpenEBS Volume from a snapshot. This feature only supports single replica for a cloned volume, which is intended to be used for temporarily spinning up a new application pod for recovering lost data from the previous snapshot.
  • While testing for different platforms, with a three-node/replica OpenEBS volume and shutting down one of the three nodes, there was an intermittent case where one of the 2 remaining replicas also had to be restarted.
  • The OpenEBS target (controller) pod depends on the Kubernetes node tolerations to reschedule the pod in the event of node failure. For this feature to work, the TaintNodesByCondition alpha feature must be enabled in Kubernetes. In a scenario where the OpenEBS target (controller) is not rescheduled or brought back to running within 120 seconds, the volume gets into a read-only state and a manual intervention is required to make the volume read-write again.
  • The current version of OpenEBS volumes is not optimized for performance-sensitive applications.

For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.

openebs - 0.7.1

Published by kmova almost 6 years ago

Getting Started

Prerequisite to install

  • Kubernetes 1.9.7+ is installed
  • Make sure that you run the installation steps below with a cluster-admin context. The installation involves creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure an iSCSI initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from being discovered, update the filters on NDM to exclude those paths before installing OpenEBS.

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.1.yaml

Using helm stable charts

helm install  --namespace openebs --name openebs stable/openebs

Using OpenEBS Helm Charts (will be deprecated in the coming releases)

helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs

Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs

For more details refer to the documentation at: https://docs.openebs.io/

Change Summary

Minor enhancements

  • Support for using OpenEBS PVs as Block Devices for Application Pods

Bug Fixes

  • Fixed an issue with PVs not getting created when capacity had "i" suffix
  • Fixed an issue with cStor Target Pod stuck in terminating state due to shared hostPath
  • Fixed an issue with FSType from StorageClass not being configured on PV
  • Fixed an issue with NDM discovering capacity of disks via CDB16
  • Fixed an issue with PV name generation exceeding 64 characters. PVC UUID will be used as PV Name.
  • Fixed an issue with cStor Pool Pod terminating when there is an abrupt connection break
  • Fixed an issue with cStor Volume clean-up failure blocking new volumes from being created.

Detailed release notes are maintained in Project Tracker Wiki.

Limitations

  • Jiva target to Replica message protocol has been enhanced to handle the write errors. This change in the data exchanges causes the older replicas to be incompatible with the newer target and vice versa. The upgrade involves shutting down all the replicas before launching them with the new version. Since the volume requires the target and at least 2 replicas to be online, chances of volumes getting into the read-only state during upgrade are higher. A manual intervention will be required to recover the volume.
  • For OpenEBS volumes configured with more than 1 replica, at least more than half of the replicas should be online for the Volume to allow Read and Write. In the upcoming releases, with cStor data engine, Volumes can be allowed to Read/Write when there is at least one replica in the ready state.
  • This release contains a preview support for cloning an OpenEBS Volume from a snapshot. This feature only supports single replica for a cloned volume, which is intended to be used for temporarily spinning up a new application pod for recovering lost data from the previous snapshot.
  • While testing for different platforms, with a three-node/replica OpenEBS volume and shutting down one of the three nodes, there was an intermittent case where one of the 2 remaining replicas also had to be restarted.
  • The OpenEBS target (controller) pod depends on the Kubernetes node tolerations to reschedule the pod in the event of node failure. For this feature to work, the TaintNodesByCondition alpha feature must be enabled in Kubernetes. In a scenario where the OpenEBS target (controller) is not rescheduled or brought back to running within 120 seconds, the volume gets into a read-only state and a manual intervention is required to make the volume read-write again.
  • The current version of OpenEBS volumes is not optimized for performance-sensitive applications.

For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.

openebs - v0.7

Published by kmova about 6 years ago

Getting Started

Prerequisite to install

  • Kubernetes 1.9.7+ is installed
  • Make sure that you run the installation steps below with a cluster-admin context. The installation involves creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure an iSCSI initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from being discovered, update the filters on NDM to exclude those paths before installing OpenEBS.

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.0.yaml

Using OpenEBS Helm Charts (will be deprecated in the coming releases)

helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs

For more details refer to the documentation at: https://docs.openebs.io/

Note: Kubernetes stable/openebs helm chart and other charts still point to 0.6 and efforts are underway to update them to 0.7.

Quick Summary of changes

  • Node Disk Manager that helps with discovering block devices attached to nodes
  • Alpha support for cStor Storage Engines
  • Updated CRDs for supporting cStor as well as pluggable storage control plane
  • Jiva Storage Pool called default and StorageClass called openebs-jiva-default
  • cStor Storage Pool Claim called cstor-sparse-pool and StorageClass called openebs-cstor-sparse
  • There has been a change in the way volume storage policies can be specified, with the addition of new policies (an illustrative StorageClass is sketched after this list) like:
    • Number of Data copies to be made
    • Specify the nodes on which the Data copies should be persisted
    • Specify the CPU or Memory Limits per PV
    • Choice of Storage Engine : cStor or Jiva
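
The new policies are expressed as cas.openebs.io/config entries on a StorageClass. The snippet below is an illustrative sketch only: the policy names ReplicaCount and StoragePool, the class name, and the values shown are assumptions following the convention used by the bundled openebs-jiva-default class, so check the Sample YAMLs referenced below for the exact 0.7 syntax.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-jiva-2-replica           # illustrative name
      annotations:
        openebs.io/cas-type: jiva
        cas.openebs.io/config: |
          - name: ReplicaCount               # number of data copies (assumption)
            value: "2"
          - name: StoragePool                # pool where the data copies are persisted (assumption)
            value: default
    provisioner: openebs.io/provisioner-iscsi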

Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs

Detailed release notes are maintained in Project Tracker Wiki.

Limitations

  • cStor Target or Pool pods can at times be stuck in a Terminating state. They will need to be manually cleaned up using kubectl delete with 0 sec grace period.
  • Jiva target to Replica message protocol has been enhanced to handle the write errors. This change in the data exchanges causes the older replicas to be incompatible with the newer target and vice versa. The upgrade involves shutting down all the replicas before launching them with the new version. Since the volume requires the target and at least 2 replicas to be online, chances of volumes getting into the read-only state during upgrade are higher. A manual intervention will be required to recover the volume.
  • For OpenEBS volumes configured with more than 1 replica, at least more than half of the replicas should be online for the Volume to allow Read and Write. In the upcoming releases, with cStor data engine, Volumes can be allowed to Read/Write when there is at least one replica in the ready state.
  • This release contains a preview support for cloning an OpenEBS Volume from a snapshot. This feature only supports single replica for a cloned volume, which is intended to be used for temporarily spinning up a new application pod for recovering lost data from the previous snapshot.
  • While testing for different platforms, with a three-node/replica OpenEBS volume and shutting down one of the three nodes, there was an intermittent case where one of the 2 remaining replicas also had to be restarted.
  • The OpenEBS target (controller) pod depends on the Kubernetes node tolerations to reschedule the pod in the event of node failure. For this feature to work, the TaintNodesByCondition alpha feature must be enabled in Kubernetes. In a scenario where the OpenEBS target (controller) is not rescheduled or brought back to running within 120 seconds, the volume gets into a read-only state and a manual intervention is required to make the volume read-write again.
  • The current version of OpenEBS volumes is not optimized for performance-sensitive applications.

For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.

openebs - v0.7-RC2

Published by kmova about 6 years ago

Please Note: This is a release candidate build. If you are looking at deploying from a stable release, please follow the instructions at Quick Start Guide.

Getting Started with OpenEBS v0.7-RC2

Prerequisite to install

  • Kubernetes 1.9.7+ is installed
  • Make sure you run the following kubectl command with a cluster-admin context. The installation involves creating a new Service Account and assigning it to the OpenEBS components.

Install and Setup

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.0-RC2.yaml

The above command will install OpenEBS Control Plane components and all the required Kubernetes CRDs. With 0.7, the following new services will be installed:

  • Node Disk Manager that helps with discovering block devices attached to nodes
  • Configuration Files required for supporting both Jiva and cStor Storage Engines
  • A default Jiva Storage Pool and a StorageClass called openebs-standard
  • A default cStor Storage Pool and a StorageClass called openebs-cstor-sparse

You are all set!

You can now install your Stateful applications that make use of either of the above StorageClasses or you can create a completely new StorageClass that can be configured with Storage Policies like:

  • Number of Data copies to be made
  • Specify the nodes on which the Data copies should be persisted
  • Specify the CPU or Memory Limits per PV
  • Choice of Storage Engine : cStor or Jiva

Some of the sample Storage Class and PVC configurations can be found here: Sample YAMLs

Additional details and release notes are available on Project Tracker Wiki.

openebs - v0.7-RC1

Published by kmova about 6 years ago

Getting Started

Prerequisite to install

Make sure that the user is assigned the cluster-admin ClusterRole to run the install steps provided below.

Using kubectl

Install the 0.7.0-RC1 OpenEBS with CAS Templates.

kubectl apply -f https://raw.githubusercontent.com/openebs/store/master/docs/openebs-operator-0.7.0-RC1.yaml
kubectl apply -f https://raw.githubusercontent.com/openebs/store/master/docs/openebs-pre-release-features-0.7.0-RC1.yaml

Download the following file, update the disks and apply to create cStor Pools.

wget https://raw.githubusercontent.com/openebs/store/master/docs/openebs-config-0.7.0-RC1.yaml
kubectl apply -f openebs-config-0.7.0-RC1.yaml

openebs - v0.6

Published by kmova about 6 years ago

Getting Started

Using kubectl

kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.6/k8s/openebs-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.6/k8s/openebs-storageclasses.yaml

Using Kubernetes Stable Helm charts

helm install  --namespace openebs --name openebs  -f https://openebs.github.io/charts/helm-values-0.6.0.yaml stable/openebs
kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.6/k8s/openebs-storageclasses.yaml

Using OpenEBS Helm Charts (will be deprecated in the coming releases)

helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs

For more details refer to the documentation at: https://docs.openebs.io/

New Capabilities / Enhancements

  • Integrate the Volume Snapshot capabilities with Kubernetes Snapshot controller
  • Enhance maya-apiserver to use CAS Templates for orchestrating new Storage Engines
  • Enhance mayactl to provide additional details about volumes, such as the replica status and the nodes where the replicas are running.
  • Enhance maya-apiserver to schedule Replica Pods on specific nodes using nodeSelector
  • Enhance provisioner and maya-apiserver to allow specifying cross-AZ scheduling of Replica Pods.
  • Support for deploying OpenEBS via Kubernetes stable Helm Charts
  • openebs-operator.yaml is modified to run OpenEBS pods in their own namespace, openebs
  • Enhance e2e tests to simulate chaos at different layers such as - CPU, RAM, Disk, Network, and Node

Major Issues Fixed

  • Fixed an issue where intermittent connectivity errors between controller and replica caused iSCSI initiator to mark the volume as read-only. openebs/gotgt#15
  • Fixed an issue where intermittent connectivity errors were causing the controller to silently drop the replicas and mark the Volumes as read-only. The replicas dropped in this way were not getting re-added to the Controller. openebs/jiva#45
  • Fixed an issue where volume would be marked as read-only if one of the three replicas returned an error to IO. openebs/jiva#56
  • Fixed an issue where replica fails to register back with the controller if the attempt to register occurred before the controller cleared the replica's previous state. openebs/jiva#56
  • Fixed an issue where a volume with a single replica would get stuck in the read-only state once the replica was restarted. openebs/jiva#45

Upgrade from older releases

Since 0.6 has made changes to the way controller and replica pods communicate with each other, the older volumes need to be upgraded with scheduled downtime for applications.

Limitations

  • For OpenEBS volumes configured with more than 1 replica, at least more than half of the replicas should be online for the Volume to allow Read and Write. In the upcoming releases, with cStor data engine, Volumes can be allowed to Read/Write when there is at least one replica in the ready state.
  • This release contains a preview support for cloning an OpenEBS Volume from a snapshot. This feature only supports single replica for a cloned volume, which is intended to be used for temporarily spinning up a new application pod for recovering lost data from the previous snapshot.
  • While testing for different platforms, with a three-node/replica OpenEBS volume and shutting down one of the three nodes, there was an intermittent case where one of the 2 remaining replicas also had to be restarted.
  • The OpenEBS target (controller) pod depends on the Kubernetes node tolerations to reschedule the pod in the event of node failure. For this feature to work, the TaintNodesByCondition alpha feature must be enabled in Kubernetes. In a scenario where the OpenEBS target (controller) is not rescheduled or brought back to running within 120 seconds, the volume gets into a read-only state and a manual intervention is required to make the volume read-write again.
  • The current version of OpenEBS volumes is not optimized for performance-sensitive applications.
  • For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.

Additional details are available on Project Tracker Wiki.

openebs - v0.5.4

Published by ksatchit over 6 years ago

Issues Fixed in v0.5.4

  • Provision to specify filesystems other than ext4 (default) in the OpenEBS provisioner spec (#1454)
  • Support for the xfs filesystem format for a MongoDB StatefulSet using an OpenEBS Persistent Volume (#1446)

Known Issues in v0.5.4

For a complete list of known issues, go to v0.5.4 known issues

  • xfs formatted volumes are not remounted post snapshot reverts/forced restarts (bugs)
  • Requires Kubernetes 1.7.5+
  • Requires iSCSI initiator to be installed in the Kubernetes nodes or kubelet container
  • Not recommended for mission critical workloads
  • Not recommended for performance sensitive workloads. Ongoing efforts intended to improve performance

Enhancements

Installation

Using kubectl

kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.5.4/k8s/openebs-operator.yaml

Using helm

helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs

Alternatively, refer to: https://docs.openebs.io/docs/next/installation.html#install-openebs-using-helm-charts

openebs - v0.5.3

Published by kmova over 6 years ago

Issues Fixed in v0.5.3

  • Fixed an issue with the usage of StoragePool when RBAC settings are applied (1189).
  • Made the hardcoded maya-apiserver-service name configurable, as it resulted in conflicts with other services running on the same cluster (1227).
  • Fixed an issue where the OpenEBS iSCSI volume showed a progressive increase in the memory consumed by the controller pod (1298).

Known Issues in v0.5.3

For a complete list of known issues, go to v0.5.3 known issues.

  • Requires Kubernetes 1.7.5+
  • Requires iSCSI initiator to be installed in the Kubernetes nodes or kubelet container
  • Not recommended for mission critical workloads
  • Not recommended for performance sensitive workloads. Ongoing efforts intended to improve performance

Enhancement to Documentation

The OpenEBS documentation is now available at https://docs.openebs.io/. You can provide your feedback comments by clicking the Feedback button provided on every page.

Installation

Using kubectl

kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.5.3/k8s/openebs-operator.yaml

Using helm

helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs

openebs - v0.5.2

Published by kmova over 6 years ago

This is a single-fix release on top of v0.5.1 to allow maya-apiserver and openebs-provisioner to work with a non-SSL Kubernetes configuration.

Issue Fixed:

  • #1184: You can set the non-SSL Kubernetes endpoints to use by specifying the ENV variables OPENEBS_IO_KUBE_CONFIG and OPENEBS_IO_K8S_MASTER on maya-apiserver and openebs-provisioner (an illustrative snippet follows).
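
For illustration only, these variables are set on the maya-apiserver (and openebs-provisioner) Deployment containers; the endpoint and kubeconfig path values below are placeholder assumptions.

    env:
      - name: OPENEBS_IO_K8S_MASTER           # non-SSL Kubernetes master endpoint (placeholder)
        value: "http://10.0.0.10:8080"
      - name: OPENEBS_IO_KUBE_CONFIG          # kubeconfig path available to the container (placeholder)
        value: "/home/ubuntu/.kube/config"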

To use the above ENV variables, the following image versions have to be used:

  • openebs/m-apiserver:0.5.2: OpenEBS Maya API Server along with the latest maya cli.
  • openebs/openebs-k8s-provisioner:0.5.2: Dynamic OpenEBS Volume Provisioner for Kubernetes.

openebs - v0.5.1

Published by kmova almost 7 years ago

This is an incremental release on top of v0.5. This release fixes bugs and adds support for running OpenEBS on CentOS-based Kubernetes clusters, including OpenShift 3.7+.

Issues Fixed in v0.5.1

  • Fix the inter-operability issues of connecting to OpenEBS Volume from CentOS iSCSI Initiator (#1087)
  • Fix openebs-k8s-provisioner to be launched in non-default namespace (#1055)
  • Update the documentation with steps to use OpenEBS on OpenShift Kubernetes Cluster (#1102) and Kubernetes on CentOS (#1104)
  • Update helm charts to use OpenEBS 0.5.1 (#1100)

Known Limitations

  • Requires Kubernetes 1.7.5+
  • Requires iSCSI initiator to be installed in the Kubernetes nodes or kubelet container
  • Not recommended for mission critical workloads
  • Not recommended for performance sensitive workloads. Ongoing efforts intended to improve performance

Installation

Using kubectl

kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.5.1/k8s/openebs-operator.yaml

Using helm

helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs

Images

  • openebs/jiva:0.5.1 : Containerized Storage Controller
  • openebs/m-apiserver:0.5.1 : OpenEBS Maya API Server along with the latest maya cli.
  • openebs/openebs-k8s-provisioner:0.5.1 : Dynamic OpenEBS Volume Provisioner for Kubernetes.
  • openebs/m-exporter:0.5.1 : OpenEBS Volume metrics exporter.

Setup OpenEBS Volume Monitoring

If you are running your own Prometheus, please update it with the following job configuration:

    - job_name: 'openebs-volumes'
      scheme: http
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_monitoring]
        regex: volume_exporter_prometheus
        action: keep
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
      - source_labels: [__meta_kubernetes_pod_label_vsm]
        action: replace
        target_label: openebs_pv
      - source_labels: [__meta_kubernetes_pod_container_port_number]
        action: drop
        regex: '(.*)9501'
      - source_labels: [__meta_kubernetes_pod_container_port_number]
        action: drop
        regex: '(.*)3260'

If you don't have Prometheus running, you can use the following YAML file to run Prometheus and Grafana.

kubectl apply -f  https://raw.githubusercontent.com/openebs/openebs/v0.5.0/k8s/openebs-monitoring-pg.yaml

You can import the following grafana-dashboard file to view the OpenEBS Volume metrics.

openebs - v0.5.0

Published by kmova almost 7 years ago

This release marks a significant milestone for OpenEBS. We are excited about the new capabilities, like policy-based Volume Provisioning and Customizations, that finally give DevOps teams the missing tools to automate Storage Operations. We are even more excited about the contributions that poured in from 50+ new community members who made this release possible.

Changelog

  • Storage Policy Enforcement Framework that allows DevOps teams to deploy customized storage (an illustrative StorageClass sketch follows this list). Some of the supported policies are:
    • Storage Policy - for using a custom Storage Engine like Jiva
    • Storage Policy - for exposing volume metrics in Prometheus format using a side-car to the volume controller
    • Storage Policy - for defining Capacity
    • Storage Policy - for defining the persistent storage location, such as /var/openebs (default) or a directory mounted on EBS or GPD
  • Extend OpenEBS API Server to expose volume snapshot api
  • Support for deploying OpenEBS via helm charts
  • Sample Prometheus configuration for collecting OpenEBS Volume Metrics
  • Sample Grafana OpenEBS Volume Dashboard - using the prometheus Metrics
  • Sample Deployment YAMLs and corresponding Storage Classes for different types of applications (see Project Tracker Wiki for detailed list)
  • Sample Deployment YAMLs for launching Kubernetes Dashboard for a preview of the changes done by OpenEBS Team to Kubernetes Dashboard (see Project Tracker Wiki for the PRs raised and merged)
  • Sample Deployment YAMLs for Prometheus and Grafana - in case they are not already part of your deployment.
  • Several Documentation and Code Re-factoring Changes for improving code quality
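
As a hedged sketch of how these policies surface to users, the StorageClass below maps one parameter to each policy category listed above; the parameter keys are illustrative placeholders, so take the exact names from the v0.5.0 documentation rather than from this example.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-percona
    provisioner: openebs.io/provisioner-iscsi
    parameters:
      openebs.io/storage-engine-type: "jiva"   # policy: custom storage engine (illustrative key)
      openebs.io/volume-monitor: "true"        # policy: metrics exporter side-car (illustrative key)
      openebs.io/capacity: "5G"                # policy: capacity (illustrative key)
      openebs.io/storage-pool: "default"       # policy: persistent storage location (illustrative key)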

Additional details are available on Project Tracker Wiki.

Changes from earlier releases to v0.5.0

  • Some of the ENV variables for customizing default options have changed (openebs/openebs #927); an illustrative env excerpt follows this list:
    • DEFAULT_CONTROLLER_IMAGE -> OPENEBS_IO_JIVA_CONTROLLER_IMAGE
    • DEFAULT_REPLICA_IMAGE -> OPENEBS_IO_JIVA_REPLICA_IMAGE
    • DEFAULT_REPLICA_COUNT -> OPENEBS_IO_JIVA_REPLICA_COUNT
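
For example, a maya-apiserver Deployment that previously set DEFAULT_REPLICA_COUNT would now carry the renamed variables; only the variable names below come from the rename, while the values and surrounding spec are illustrative.

    # env section of the maya-apiserver container (illustrative values)
    env:
    - name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE   # was DEFAULT_CONTROLLER_IMAGE
      value: "openebs/jiva:0.5.0"
    - name: OPENEBS_IO_JIVA_REPLICA_IMAGE      # was DEFAULT_REPLICA_IMAGE
      value: "openebs/jiva:0.5.0"
    - name: OPENEBS_IO_JIVA_REPLICA_COUNT      # was DEFAULT_REPLICA_COUNT
      value: "3"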

Known Limitations

  • Requires Kubernetes 1.7.5+
  • Requires iSCSI initiator to be installed in the Kubernetes nodes or kubelet container
  • Has been tested primarily with enabling OpenEBS and its volumes (PVCs) in the default namespace
  • Not recommended for mission critical workloads
  • Not recommended for performance sensitive workloads. Ongoing efforts intended to improve performance

Installation

Using kubectl

kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.5.0/k8s/openebs-operator.yaml

Using helm

helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs

Images

  • openebs/jiva:0.5.0 : Containerized Storage Controller
  • openebs/m-apiserver:0.5.0 : OpenEBS Maya API Server along with the latest maya cli.
  • openebs/openebs-k8s-provisioner:0.5.0 : Dynamic OpenEBS Volume Provisioner for Kubernetes.
  • openebs/m-exporter:0.5.0 : OpenEBS Volume metrics exporter.

Setup OpenEBS Volume Monitoring

If you are running your own Prometheus, please update it with the following job configuration:

    - job_name: 'openebs-volumes'
      scheme: http
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_monitoring]
        regex: volume_exporter_prometheus
        action: keep
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
      - source_labels: [__meta_kubernetes_pod_label_vsm]
        action: replace
        target_label: openebs_pv
      - source_labels: [__meta_kubernetes_pod_container_port_number]
        action: drop
        regex: '(.*)9501'
      - source_labels: [__meta_kubernetes_pod_container_port_number]
        action: drop
        regex: '(.*)3260'

If you don't have Prometheus running, you can use the following YAML file to run Prometheus and Grafana.

kubectl apply -f  https://raw.githubusercontent.com/openebs/openebs/v0.5.0/k8s/openebs-monitoring-pg.yaml

You can import the following grafana-dashboard file to view the OpenEBS Volume metrics.

openebs - v0.4.0

Published by kmova about 7 years ago

Please try out the latest OpenEBS on your Kubernetes Cluster using the following quick start guide: https://docs.openebs.io/docs/runOpenEBSoperator.html

The following OpenEBS v0.4.0 containers are available from Docker Hub:

  • openebs/jiva:0.4.0 : Storage Controller
  • openebs/m-apiserver:0.4.0 : OpenEBS Maya API Server along with the latest maya cli.
  • openebs/openebs-k8s-provisioner:0.4.0 : Dynamic OpenEBS Volume Provisioner for Kubernetes.

New v0.4.0 features

  • Maya CLI Support for managing snapshots for OpenEBS Volumes
  • Maya CLI Support for obtaining the capacity usage statistics from OpenEBS Volumes
  • OpenEBS Volume - Dynamic Provisioner is merged into kubernetes-incubator/external-storage project.
  • OpenEBS Maya API Server uses the Kubernetes scheduler logic to place OpenEBS Volume Replicas on different nodes.
  • OpenEBS Maya API Server can be customized by providing ENV options through K8s YAML file for default replica count and jiva image to be used.
  • OpenEBS User Documentation is made available at https://docs.openebs.io/
  • OpenEBS now supports deployment on AWS, along with previously supported Google Cloud and On-premise setups
  • OpenEBS Vagrant boxes are upgraded to support Kubernetes version 1.7.5
  • OpenEBS can now be deployed within a minikube setup

Notable Issues Fixed in v0.4.0

CI Updates with v0.4.0

  • Support for on-premise Jenkins CI for performing e2e tests
  • iSCSI compliance tests are run as part of the CI
  • CI can now be extended using a framework developed for running storage benchmark tests with vdbench or fio.
  • CI has been extended to run Percona Benchmarking tests on Kubernetes.

Deprecated with v0.4.0

  • The maya cli options (setup-omm, setup-osh, omm-status, osh-status) to set up and manage a dedicated OpenEBS setup have been removed. Starting with v0.4.0, only hyperconverged mode with Kubernetes is supported.

Notes for Contributors

  • OpenEBS user documentation is being moved into openebs/openebs/documentation
  • OpenEBS developer documentation is being added to openebs/openebs/contribute
  • The deployment and e2e functionality will continue to be located in openebs/k8s and openebs/e2e respectively.
  • openebs/maya will act as a single repository for hosting different OpenEBS Storage Control plane (orchestration) components.
  • New /metrics handlers are getting added to OpenEBS components to allow integration into tools like Prometheus.
  • openebs/maya/cmd/maya-agent, which will be deployed as a daemon-set running alongside kubelet, is being developed. maya-agent will augment the kubelet with storage management functionality.

openebs - v0.3

Published by kmova over 7 years ago

It is simple to use OpenEBS. Try it!!

New in v0.3

  • OpenEBS hyper-converged with Kubernetes Minion Nodes.
  • Enable OpenEBS via the openebs-operator.yaml
  • OpenEBS Volumes created using the Kubernetes Concepts - Services, Deployments, Pods and PVs
  • Supports creation of OpenEBS volumes using Dynamic Provisioner (using the storage-incubator/provisionercontroller)
  • Storage functionality is delivered as container images on DockerHub
    • openebs/jiva:0.3-RC2
  • Storage Orchestration/Management functionality is delivered as container images on DockerHub
    • openebs/m-apiserver:0.3-RC3
    • openebs/openebs-k8s-provisioner:0.3-RC2
    • maya cli is packaged with m-apiserver.
  • Storage Orchestration/Management functionality is also available as binaries, under the respective repositories.
  • Ansible Playbooks are used (e2e/ansible) for automating the installation/configuration and also to run persistent workloads on K8s and OpenEBS.

openebs - v0.2

Published by kmova over 7 years ago

Download and Try It!!

New in v0.2

Integrated into Kubernetes

  • OpenEBS FlexVolume Driver
  • Dynamically Provision OpenEBS Volumes

Easy to Install Vagrant Boxes

  • Kubernetes 1.5.5 vagrant box
  • OpenEBS 0.2 vagrant box

Maya API Server

Provides a new AWS EBS-like API for provisioning Block Storage

  • Hyper Converged with Nomad Scheduler
  • Specify the VSM Network and VSM Storage via Configuration files

Storage Tests Framework

  • openebs/tests-vdbench
  • openebs/tests-fio

Changes since v0.1

Maya CLI

  • New CLI options (maya vsm-stop, maya vsm-stats)

VSM / Jiva

  • Backend Containers auto registration with Frontend Containers
  • Backup/Restore Data from Amazon S3
  • Node Failure Resiliency Fixes