rook

Storage Orchestration for Kubernetes

Apache-2.0 License

Stars: 12K
Committers: 547


rook - v1.2.5

Published by travisn over 4 years ago

Improvements

Rook v1.2.5 is a patch release limited in scope and focusing on bug fixes.

Ceph

  • Set recommended Ceph version v14.2.7 (#4898, @travisn)
  • Allow mons from external cluster in the toolbox (#4922, @travisn)
  • Set successful EC pool creation CR status on the pool CR (#4885, @travisn)
  • Populate CSI configmap for external cluster mons (#4816, @leseb)
  • CSI settings are configurable in the operator via a ConfigMap (#3239, @umangachapagain)
  • Enabling balancer module with older clients (#4842, @leseb)
  • Helm chart fix for deploying the CSI 2.0 driver (#4839, @rwd5213)
  • Make replication setting optional for EC pools (#4750, @travisn)
  • Docs: Set Ceph version for the PVC based example (#4869, @galexrt)
rook - v1.2.4

Published by travisn over 4 years ago

Improvements

Rook v1.2.4 is a patch release limited in scope and focusing on bug fixes.

Ceph

  • Stop garbage collector from deleting the CSI driver unexpectedly (#4820, @travisn)
  • Upgrade legacy OSDs created by Rook on partitions (#4799, @leseb)
  • Ability to set the pool target_size_ratio (#4803, @leseb)
  • Improve detection of drain canaries and log significant node drain scheduling events (#4679, @rohantmp)
  • Sort flexvolume docs and update for kubespray (#4747, @ftab)
  • Add OpenShift common issues documentation (#4764, @umangachapagain)
  • Improved integration test when cleaning devices (#4796, @leseb)
rook - v1.2.3

Published by travisn over 4 years ago

Improvements

Rook v1.2.3 is a patch release limited in scope and focusing on bug fixes.

Ceph

  • Fix garbage collection for the disappearing CSI driver (#4590, @travisn)
  • Allow enabling the Ceph-CSI v2.0.0 driver, with the default remaining Ceph-CSI v1.2.2 (#4763, @Madhu-1)
  • Allow for empty "all" annotations (#4351, @travisn)
  • Ability to enable the balancer module (#4784, @leseb)
  • Fix TuneSlowDeviceClass spec (#4777, @leseb)
  • Skip the upgrade checks in integration tests (#4710, @travisn)
  • Add an external cluster functional test (#4742, @leseb)
  • Remove unnecessary mon canary pods after an aborted mon failover (#3764, @jmolmo)
  • Ensure MDS pod placement adheres to zone topology if specified (#4641, @jmolmo)
  • Add nil check for ceph version in crash controller (#4762, @Madhu-1)
  • Add ceph version to cluster status (#4357, @egafford)
  • Fix the external cluster repeatedly saving mon endpoints to the config (#4717, @eliaswimmer)
  • Fix resource limits for the OSD prepare job (#4713, @leseb)
  • Ensure OSD location is consistent to avoid unnecessary OSD restarts (#4729, @ashangit)
  • Fix external cluster config override configmap creation (#4686, @leseb)
  • Improved ceph-volume logging in the OSD prepare job (#3888, @leseb)
  • Integration test fix for the client CR (#4509, @mateuszlos)
rook - v1.2.2

Published by travisn almost 5 years ago

Improvements

Rook v1.2.2 is a patch release limited in scope and focusing on bug fixes.

Ceph

  • Allow multiple clusters to set useAllDevices (#4692, @leseb)
  • Operator starts all mons before checking quorum if they are all down (#4531, @ashishranjan738)
  • Ability to disable the crash controller (#4533, @leseb)
  • Document monitoring options for the cluster CR (#4698, @umangachapagain)
  • Apply node topology labels to PV-backed OSDs in upgrade from v1.1 (#4616, @rohan47)
  • Update examples to Ceph version v14.2.6 (#4653, @leseb)
  • Allow integration tests in minimal config to run on multiple K8s versions (#4674, @travisn)
  • Fix wrong pod name and hostname shown in the CephMonHighNumberOfLeaderChanges alert (#4665, @anmolsachan)
  • Set hostname properly in the CRUSH map for non-portable OSDs on PVCs (#4658, @travisn)
  • Update OpenShift example manifest to watch all namespaces for clusters (#4668, @likid0)
  • Use min_size defaults set by Ceph instead of overriding with Rook's defaults (#4638, @leseb)
  • CSI driver handling of upgrade from OCP 4.2 to OCP 4.3 (#4650, @Madhu-1)
  • Add support for the k8s 1.17 failure domain labels (#4626, @BlaineEXE)
  • Add option to the cluster CR to continue upgrade even with unclean PGs (#4617, @leseb)
  • Add K8s 1.11 back to the integration tests as the minimum version (#4673, @travisn)

YugabyteDB

  • Fixed replication factor flag and the master addresses (#4625, @Arnav15)
rook - v1.1.9

Published by travisn almost 5 years ago

Improvements

Rook v1.1.9 is a patch release limited in scope and focusing on bug fixes.

Ceph

  • CSI driver handling of upgrade from OCP 4.2 to OCP 4.3 (#4650, @Madhu-1)
  • Fix object bucket provisioner when RGW is not on port 80 (#4049, @bsperduto)
  • Only perform upgrade checks when the Ceph image changes (#4379, @travisn)
rook - v1.2.1

Published by leseb almost 5 years ago

Improvements

Rook v1.2.1 is a patch release limited in scope and focusing on bug fixes.

Ceph

  • Add missing env var ROOK_CEPH_MON_HOST for OSDs (#4589, @leseb)
  • Avoid logging sensitive info when debug logging is enabled (#4568, @jmolmo)
  • Add missing volume mount for encrypted OSDs (#4583, @leseb)
  • Bump the ceph-operator memory limit to 256Mi (#4561, @billimek)
  • Fix object bucket provisioner when RGW is not on port 80 (#4508, @bsperduto)
rook - v1.2.0

Published by BlaineEXE almost 5 years ago

Major Themes

  • A security audit completed by Trail of Bits found no major concerns
  • Ceph: Added a new "crash collector" daemon to send crash telemetry to the Ceph dashboard, support for priority classes, and a new CephClient resource to create user credentials
  • EdgeFS: Added more flexible key-value backends (e.g., Samsung KV-SSDs), "Instant eventual snapshots", and the ability to send large data chunks to AWS S3.

Action Required

If you are running a previous Rook version, please see the corresponding storage provider upgrade guide:

Notable Features

  • The minimum version of Kubernetes supported by Rook changed from 1.11 to 1.12.
  • Discover daemon started by Ceph and EdgeFS storage providers:
    • When the storage operator is deleted, the discover daemon and its ConfigMap are also deleted
    • Device filtering is now configurable by the user via an environment variable (see the sketch after this list)
      • The new environment variable DISCOVER_DAEMON_UDEV_BLACKLIST lets the user blacklist devices
      • If no devices are specified, the default blacklist values are used
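A minimal sketch of how this variable might be set on the operator Deployment (the deployment and namespace names follow the standard example manifests; the regex value shown is illustrative, not a recommended default):

```yaml
# Illustrative only: add DISCOVER_DAEMON_UDEV_BLACKLIST to the operator's env.
# The regex list below is an example; adjust it to the devices you want excluded.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-operator
  namespace: rook-ceph
spec:
  selector:
    matchLabels:
      app: rook-ceph-operator
  template:
    metadata:
      labels:
        app: rook-ceph-operator
    spec:
      containers:
        - name: rook-ceph-operator
          image: rook/ceph:v1.2.0
          env:
            - name: DISCOVER_DAEMON_UDEV_BLACKLIST
              value: "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
```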

Ceph

  • The job for detecting the Ceph version can be started with node affinity or tolerations according to the same settings in the Cluster CR as the mons.
  • A new CR property skipUpgradeChecks has been added, which allows you to force an upgrade by skipping daemon checks (see the sketch after this list). Use this at YOUR OWN RISK, only if you know what you're doing. To understand Rook's upgrade process for Ceph, read the upgrade doc.
  • Mon Quorum Disaster Recovery guide has been updated to work with the latest Rook and Ceph release.
  • A new CRD property PreservePoolsOnDelete has been added to the Filesystem (fs) and Object Store (os) resources in order to increase protection against data loss. If it is set to true, the associated pools won't be deleted when the main resource (fs/os) is deleted. Recreating the deleted fs/os with the same name will reuse the preserved pools.
  • A new ceph-crashcollector controller has been added for Ceph v14+ that will collect crash telemetry and send it to the Ceph dashboard. These new deployments will run on any node where a Ceph pod is running. Read more about this in the doc
  • PriorityClassNames can now be added to the Rook/Ceph components to influence the scheduler's pod preemption.
  • Rook is now able to create and manage Ceph clients via the new client CRD.
  • The Status.Phase property has been introduced for Rook-Ceph CRDs. The current possible values are Processing, Ready, and Failed. While the operator is performing a task related to a Ceph CR, that CR's status is Processing; it changes to Failed if the operator fails a task related to the CR, and to Ready once the Rook-Ceph operator finishes all tasks for the CR.
  • A new setting is available in the operator: ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS (5 seconds by default). It represents the time to wait before the node controller moves Rook pods to other nodes after detecting an unreachable node.
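A rough sketch of the skipUpgradeChecks and PreservePoolsOnDelete properties above, using the cluster and filesystem names from the standard example manifests (values are placeholders, not recommendations):

```yaml
# Sketch: force an upgrade by skipping daemon checks (use at your own risk).
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.5
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: true
---
# Sketch: keep the filesystem's pools when the CephFilesystem CR is deleted.
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - replicated:
        size: 3
  preservePoolsOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
```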

OSDs

  • After the upgrade to v1.2, when the operator is updated to a new release, the OSD pods won't be restarted unless they are running on PVCs.
  • Add a new CRD property devicePathFilter to support device filtering with path names, e.g. /dev/disk/by-path/pci-.*-sas-.* (see the sketch after this list).
  • Ceph OSD's admin socket is now placed under Ceph's default system location /run/ceph.
  • The on-host log directory for OSDs is updated to be <dataDirHostPath>/log/<namespace>, the same as other Ceph daemons.
  • Do not generate a config (during pod init) for directory-based or legacy filestore OSDs
  • Support PersistentVolume backed by LVM Logical Volume for "OSD on PVC".
  • When running an OSD on a PVC and the device is on a slow device class, Rook can adapt to that by tuning the OSD. This can be enabled by the CR setting tuneSlowDeviceClass.
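A rough sketch of the devicePathFilter property in the cluster CR (the path pattern is the example from the bullet above; the other storage settings are placeholders):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    useAllNodes: true
    useAllDevices: false
    # regular expression matched against device path names such as /dev/disk/by-path/...
    devicePathFilter: "^/dev/disk/by-path/pci-.*-sas-.*"
```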

ObjectStore / RGWs

  • Ceph Object Gateways are automatically configured not to run on the same host if hostNetwork is enabled

EdgeFS

  • Rook EdgeFS operator adds support for single node, single device deployments. This is to enable embedded and remote developer use cases.
  • Support for the new EdgeFS backend, rtkvs, enables operation on top of any key-value capable interface. The initial integration adds support for Samsung KV-SSD devices.
  • Enhanced support for running EdgeFS in the AWS cloud. It is now possible to store data payload chunks directly in AWS S3 buckets, greatly reducing storage billing costs. Metadata chunks will still be stored in AWS EBS, providing low latency and high performance.
  • It is now possible to configure ISGW Full-Mesh functionality without the need to create multiple ISGW services. Please read more about ISGW Full-Mesh functionality here.
  • EdgeFS is now capable of creating instant snapshots of S3 buckets, supporting use cases with billions of objects per bucket. A snapshot's metadata gets distributed among all the connected EdgeFS segments, so cloning or accessing snapshotted objects can be done on demand, without the need for full-delta transfers.

Breaking Changes

Ceph

  • The topology setting has been removed from the CephCluster CR. To configure the OSD topology, node labels must be applied.
    See the OSD topology topic. This setting only affects OSDs when they are first created, thus OSDs will not be impacted during upgrade.
    The topology settings only apply to bluestore OSDs on raw devices. The topology labels are not applied to directory-based OSDs.

Deprecations

Ceph

  • Creation of new Filestore OSDs on disks is now deprecated. Filestore is in sustaining mode in Ceph.
    • The storeType storage config setting is now ignored
      • New OSDs created in directories are always Filestore type
      • New OSDs created on disks are always Bluestore type
    • Preexisting disks provisioned as Filestore OSDs will remain as Filestore OSDs
  • Rook will no longer automatically remove OSDs if nodes are removed from the cluster CR to avoid the risk of destroying OSDs unintentionally. To remove OSDs manually, see the new doc on OSD Management
rook - v1.1.8

Published by travisn almost 5 years ago

Improvements

Rook v1.1.8 is a patch release limited in scope and focusing on bug fixes.

Ceph

  • Continue orchestration on osd update errors to avoid retrying forever (#4418, @travisn)
  • Operator crashing on concurrent map read and map write (#4350, @rohantmp)
  • Ensure filesystem and object store are upgraded with new Ceph version (#4403, @leseb)
  • Clarify log message for Ceph upgrades (#4360, @travisn)
  • Ability to disable snapshotter from ceph-csi rbd (#4401, @Madhu-1)
  • Update kubernetes CSI sidecar images (#4335, @Madhu-1)
  • Update ceph-csi from v1.2.1 to v1.2.2 (#4352, @Madhu-1)
  • Add delay between drain switches for managed PDBs (#4346, @rohantmp)
rook - v1.1.7

Published by travisn almost 5 years ago

Improvements

Rook v1.1.7 is a patch release limited in scope and focusing on bug fixes.

Ceph

  • Skip osd prepare job creation if osd daemon exists for the pvc (#4277, @sp98)
  • Stop osd process more quickly during pod shutdown to reduce IO unresponsiveness (#4328, @travisn)
  • Add osd anti-affinity to the example of OSDs on PVCs (#4326, @travisn)
  • Properly set app name on the cmdreporter (#4323, @BlaineEXE)
  • Ensure disruption draining state is set and checked correctly (#4319, @rohantmp)
  • Update LVM filter for OSDs on PVCs (#4312, @leseb)
  • Fix topology logic for disruption drains (#4221, @rohantmp)
  • Skip restorecon during ceph-volume configuration (#4260, @leseb)
  • Added a note around snapshot CRD cleanup (#4302, @mohahmed13)
  • Storage utilization alert threshold and timing updated (#4286, @anmolsachan)
  • Silence disruption errors if necessary and add missing errors (#4288, @leseb)
  • Create csi keys and secrets for external cluster (#4276, @leseb)
  • Add retry to ObjectUser creation (#4149, @umangachapagain)
rook - v1.1.6

Published by travisn almost 5 years ago

Improvements

Rook v1.1.6 is a patch release limited in scope and focusing on bug fixes. Note that the v1.1.5 release was skipped.

Ceph

  • Flex driver should not allow attach before detach on a different node (#3580, @travisn)
  • Properly set the ceph-mgr annotations (#4195, @d-luu)
  • Only trigger an orchestration if the cluster CR changed (#4252, @nizamial09)
  • Fix setting rbdGrpcMetricsPort in the helm chart (#4202, @galexrt)
  • Document all helm chart settings (#4202, @galexrt)
  • Support all layers of CRUSH map with node labels (#4236, @travisn)
  • Skip orchestration restart on device config map update for osd on pvc (#4124, @sp98)
  • Deduplicate tolerations collected for the drain canary pods (#4220, @rohantmp)
  • Role bindings are missing for pod security policies (#3851, @jmolmo)
  • Continue with orchestration if a single mon pod fails to start (#4146, @travisn)
  • OSDs cannot call 'restorecon' when selinux is enabled (#4214, @leseb)
  • Use the rook image for drain canary pods (#4213, @rohantmp)
  • Allow setting of osd prepare resource limits (#4182, @leseb)
  • Documentation for object bucket provisioning (#3882, @thotz)

NFS

  • Set correct owner reference for garbage collection (#4142, @leseb)
rook - v1.1.4

Published by travisn almost 5 years ago

Improvements

Rook v1.1.4 is a patch release limited in scope and focusing on bug fixes. Note that release v1.1.3 was skipped.

Ceph

  • OSD config overrides were ignored for some upgraded OSDs (#4161, @leseb)
  • Enable restoring a cluster after disaster recovery (#4021, @travisn)
  • Enable upgrade of OSDs configured on PVCs (#3996, @sp98)
  • Automatically removing OSDs requires setting removeOSDsIfOutAndSafeToRemove (#4116, @rohgupta)
  • Rework csi keys and secrets to use minimal privileges (#4086, @leseb)
  • Expose OSD prepare pod resource limits (#4083, @leseb)
  • Minimum K8s version for running OSDs on PVCs is 1.13 (#4009, @sp98)
  • Add 'rgw.buckets.non-ec' to list of RGW metadataPools (#4087, @OwenTuz)
  • Hide wrong error for clusterdisruption controller (#4094, @leseb)
  • Multiple integration test fixes to improve CI stability (#4098, @travisn)
  • Detect mount fstype more accurately in the flex driver (#4109, @leseb)
  • Do not override mgr annotations (#4110, @leseb)
  • Add OSDs to proper buckets in crush hierarchy with topology awareness (#4099, @mateuszlos)
  • More robust removal of cluster finalizer (#4090, @travisn)
  • Take activeStandby into account for the CephFileSystem disruption budget (#4075, @rohantmp)
  • Update the CSI CephFS registration directory name (#4070, @Madhu-1)
  • Fix incorrect Ceph CSI doc links (#4081, @binoue)
  • Remove decimal places for osdMemoryTargetValue monitoring setting (#4046, @eknudtson)
  • Relax pre-requisites for external cluster to allow connections to Luminous (#4025, @leseb)
  • Avoid nodes getting stuck in OrchestrationStatusStarting during OSD config (#3817, @mantissahz)
  • Make metrics and liveness port configurable (#4005, @Madhu-1)
  • Correct system namespace for CSI driver settings during upgrade (#4040, @Madhu-1)

Other

  • Proper owner references for garbage collection for yugabytedb, minio, & cockroachdb (#4090, @travisn)
rook - v1.1.2

Published by travisn about 5 years ago

Improvements

Rook v1.1.2 is a patch release limited in scope and focusing on bug fixes.

Ceph

  • Ceph config overrides were ignored for OSDs (#3926, @leseb)
  • Resolved topologyAware error for clusterrole issue in helm chart (#3993, @rohan47)
  • Fix encrypted osd startup (#3846, @mvollman)
  • Various osd fixes and improvements with lvm and ceph-volume (#3955, @leseb)
  • Reset OSD 'run dir' to default location (#3966, @leseb)
  • Affinity on the ceph version job will use the affinity for mons (#4001, @travisn)
  • Add check for csi volumes before cluster delete (#3967, @Madhu-1)
  • Add Toleration and NodeAffinity to CSI provisioners (#3942, @Madhu-1)
  • Use v1.2.1 cephcsi release (#3982, @Madhu-1)
  • Update CSI service if already present (#3981, @Madhu-1)
  • Fix umount issue when rbd-nodeplugin restarts (#3923, @Madhu-1)
  • Set default filesystem to ext4 on RBD device (#3971, @humblec)
  • Allow default Ceph CSI images to be configurable in the build (#4018, @BlaineEXE)
  • Remove finalizer even if flex is disabled (#3912, @krig)
  • Disable Flexdriver in helm chart and operator-openshift by default (#3945, @Madhu-1)
  • Added validation for cephBlockPool, cephfs, and cephnfs (#3480, @rajatsingh25aug)
  • Set default dashboard ssl to true (#4016, @jtlayton)

YugabyteDB

  • Incorrect hostname in YugaByte example (#3816, @ybnelson)
rook - v1.1.1

Published by travisn about 5 years ago

Improvements

Rook v1.1.1 is a patch release limited in scope and focusing on bug fixes.

Ceph

  • Disable the flex driver by default in new clusters (#3724, @leseb)
  • MDB controller to use namespace for checking ceph status (#3879, @ashishranjan738)
  • CSI liveness container socket file (#3886, @Madhu-1)
  • Add list of unusable directories paths (#3569, @galexrt)
  • Remove helm incompatible chars from values.yaml (#3863, @k0da)
  • Fail nfs ganesha if CephFS is not configured (#3835, @leseb)
  • Make lifecycle hook chown less verbose for OSDs (#3848, @leseb)
  • Configure LVM settings for rhel8 base image (#3933, @travisn)
  • Make kubelet path configurable in operator for csi (#3927, @Madhu-1)
  • OSD pods should always use hostname for node selector (#3924, @travisn)
  • Deactivate device from lvm when OSD pods are shutting down (#3755, @sp98)
  • Add CephNFS to OLM's CSV (#3826, @rohantmp)
  • Tolerations for drain detection canaries (#3813, @rohantmp)
  • Enable ceph-volume debug logs (#3907, @leseb)
  • Add documentation for CSI upgrades from v1.0 (#3868, @Madhu-1)
  • Add a new skipUpgradeChecks property to allow forcing upgrades (#3872, @leseb)
  • Include CSI image in helm chart values (#3855, @Madhu-1)
  • Use HTTP port if SSL is disabled (#3810, @krig)
  • Enable SSL for dashboard by default (#3626, @krig)
  • Enable msgr2 properly during upgrades (#3870, @leseb)
  • Nautilus v14.2.4 is the default Ceph image (#3889, @leseb)
  • Ensure the ceph-csi secret exists on upgrade (#3874, @travisn)
  • Disable the min PG warning if the pg_autoscaler is enabled (#3847, @leseb)
  • Disable the warning for bluestore warn on legacy statfs (#3847, @leseb)

NFS

  • Added an example to consume the nfs export created by nfs operator (#3837, @rohan47)
rook - v1.1.0

Published by BlaineEXE about 5 years ago

Major Themes

  • Ceph: CSI driver is ready for production, use PVCs for mon and OSD storage, connect to an external Ceph cluster, ability to enable Ceph manager modules, and much more!
  • EdgeFS: Operator CRDs graduated to v1 stable.
  • YugabyteDB: This high-performance, distributed SQL database was added as a storage backend.

Action Required

If you are running a previous Rook version, please see the corresponding storage provider upgrade guide:

Notable Features

  • The minimum version of Kubernetes supported by Rook changed from 1.10 to 1.11.
  • Kubernetes version 1.15 is now supported.
  • OwnerReferences are created with the fully qualified apiVersion
  • YugabyteDB is now supported by Rook with a new operator. You can deploy, configure and manage instances of this high-performance distributed SQL database. Create an instance of the new ybcluster.yugabytedb.rook.io custom resource to easily deploy a YugabyteDB cluster. Check out its user guide to get started with YugabyteDB.
  • Rook now supports Multi-Homed networking. This feature significantly improves performance and security by isolating backend traffic on a separate network. Read more about Multi-Homed networking with Rook EdgeFS in this presentation from KubeCon China 2019.

Ceph

  • The Ceph CSI driver is enabled by default and preferred over the flex driver
    • The flex driver can be disabled in operator.yaml by setting ROOK_ENABLE_FLEX_DRIVER=false
    • The CSI drivers can be disabled by setting ROOK_CSI_ENABLE_CEPHFS=false and ROOK_CSI_ENABLE_RBD=false
  • FlexVolume plugin now supports dynamic PVC expansion.
  • Rook can now connect to an external cluster. For more info about external clusters, read the design as well as the Ceph external cluster documentation
  • The device discovery daemon can be disabled in operator.yaml by setting ROOK_ENABLE_DISCOVERY_DAEMON=false
  • The Rook Agent pods are now started when the CephCluster is created rather than immediately when the operator is started.
  • The Cluster CRD now provides an option to enable Prometheus-based monitoring, with the prerequisite that Prometheus is already installed.
  • During upgrades, Rook intelligently checks each daemon's state before and after upgrading. To learn more about the upgrade workflow, see Ceph Upgrades
  • The Rook Operator now supports 2 new environment variables: AGENT_TOLERATIONS and DISCOVER_TOLERATIONS. Each accepts a list of tolerations for the agent and discover pods respectively.
  • Ceph daemons now run as the 'ceph' user and no longer as 'root' (monitor or OSD stores already owned by 'root' will keep running under 'root'). Containers are still initialized as the root user, however.
  • NodeAffinity can be applied to rook-ceph-agent DaemonSet with AGENT_NODE_AFFINITY environment variable.
  • NodeAffinity can be applied to rook-discover DaemonSet with DISCOVER_AGENT_NODE_AFFINITY environment variable.
  • Rook can now manage PodDisruptionBudgets for the following daemons: OSD, mon, RGW, MDS. OSD budgets are dynamically managed as documented in the design. This can be enabled with the managePodBudgets flag in the cluster CR. When enabled, drains on OSDs will be blocked by default and dynamically unblocked in a safe manner, one failureDomain at a time. When a failure domain is draining, it will be marked noout for longer than the default DOWN/OUT interval.
  • Rook can now manage OpenShift MachineDisruptionBudgets for the OSDs. MachineDisruptionBudgets for OSDs are dynamically managed as documented in the disruptionManagement section of the CephCluster CR. This can be enabled with the manageMachineDisruptionBudgets flag in the cluster CR.
  • Creation of storage pools through the custom resource definitions (CRDs) now allows users to optionally specify the deviceClass property to restrict
    data distribution to the specified device class. See the Ceph Block Pool CRD for example usage, and the sketch after this list.
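A rough sketch of a pool restricted to a single device class (the pool name and class value are placeholders; ssd, hdd, and nvme are the classes Ceph assigns automatically):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
  # distribute this pool's data only across OSDs with the "ssd" CRUSH device class
  deviceClass: ssd
```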

Mons

  • Ceph monitor placement will now take failure zones into account; see the
    documentation for more information.
  • The cluster CRD option to allow multiple monitors to be scheduled on the same
    node, spec.Mon.AllowMultiplePerNode, is now active when a cluster is first
    created. Previously, it was ignored when a cluster was first installed.
  • Ceph monitors have initial support for running on PVC storage. See docs on
    monitor settings for more detail.

Mgr

  • Rook now has a new mgr config section in the cluster CRD to enable Ceph manager modules (see the sketch below)
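A minimal sketch, assuming the manager modules are listed under spec.mgr.modules in the cluster CR (the module name shown is an example):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mgr:
    modules:
      # enable an additional Ceph manager module by name
      - name: pg_autoscaler
        enabled: true
```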

OSDs

  • Linear disk devices can now be used for Ceph OSDs.
  • Allow metadataDevice to be set per OSD device in the device specific config section.
  • Added deviceClass to the per OSD device specific config section for setting custom crush device class per OSD.
  • Use --db-devices with Ceph 14.2.1 and newer clusters to explicitly set metadataDevice per OSD.
  • Ceph OSDs can be created by using StorageClassDeviceSet. See docs on Storage Class Device Sets.
  • The Rook-enforced minimum memory for OSD pods has been reduced from 4096M to 2048M
  • Provisioning will fail if the user specifies a metadataDevice but that device is not used as a metadata device by Ceph.
  • Rook can now be configured to read "region" and "zone" labels on Kubernetes nodes and use that information as part of the CRUSH location for the OSDs.
  • Added a new property in storageClassDeviceSets named portable (see the sketch after this list):
    • If true, the OSDs will be allowed to move between nodes during failover. This requires a storage class that supports portability (e.g. aws-ebs, but not the local storage provisioner).
    • If false, the OSDs will be assigned to a node permanently. Rook will configure Ceph's CRUSH map to support the portability.
  • Rook does not create an initial CRUSH map anymore and lets Ceph do it normally
  • Ceph CRUSH tunables are no longer forced to "firefly"; Ceph picks the right tunables for its own version. To read more about tunables, see the Ceph documentation
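A rough sketch of a storageClassDeviceSets entry using the new portable property (the set name, count, storage class, and size are placeholders):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    storageClassDeviceSets:
      - name: set1
        count: 3
        # portable: true lets the OSDs move between nodes during failover,
        # which requires a storage class that supports portability (e.g. aws-ebs)
        portable: true
        volumeClaimTemplates:
          - metadata:
              name: data
            spec:
              resources:
                requests:
                  storage: 100Gi
              storageClassName: gp2
              accessModes:
                - ReadWriteOnce
              volumeMode: Block
```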

Object / RGW

  • Buckets can be provisioned by defining an object bucket claim. This follows the standard Kubernetes pattern for PVCs, except now it's available for object storage as well! The admin just needs to create a storage class for object storage to make this available to consumers in the cluster (see the sketch after this list).
  • RGW pods now have a liveness probe enabled
  • Each RGW instance has its own cephx key and is thus properly reflected in the Ceph status
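A minimal sketch of an object bucket claim, assuming the admin has already created an object-store StorageClass named rook-ceph-bucket (all names here are placeholders):

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket
  namespace: default
spec:
  # a bucket name is generated with this prefix, similar to generateName for pods
  generateBucketName: my-bucket
  storageClassName: rook-ceph-bucket
```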

EdgeFS

  • The minimum version supported by Rook is now EdgeFS v1.2.64.
  • Graduate CRDs to stable v1 #3702
  • Added support for useHostLocalTime option to synchronize time in service pods to host #3627
  • Added support for Multi-homing networking to provide better storage backend security isolation #3576
  • Allow users to define ServiceType and NodePort via the service CRD spec #3516
  • Added mgr pod liveness probes #3492
  • Ability to add/remove nodes via EdgeFS cluster CRD #3462
  • Support for full device path name spec, i.e. /dev/disk/by-id/NAME #3374
  • Rolling Upgrade support #2990
  • Prevent deployment of multiple targets on the same node #3181
  • Enhance S3 compatibility support for S3X pods #3169
  • Add K8S_NAMESPACE env to EdgeFS containers #3097
  • Improved support for configuring ISGW dynamicFetch #3070
  • OLM integration #3017
  • Flexible Metadata Offload page size setting support #3776

YugabyteDB

  • Rook now supports YugabyteDB as a storage provider. YugabyteDB is a high-performance, cloud-native distributed SQL database which can tolerate disk, node, zone and region failures automatically. You can find more information about YugabyteDB here
  • Newly added Rook operator for YugabyteDB lets you easily create a YugabyteDB cluster.
  • Please follow Rook YugabyteDB operator quickstart guide to create a simple YugabyteDB cluster.

Breaking Changes

Ceph

  • The minimum version supported by Rook is Ceph Mimic v13.2.4. Before upgrading to v1.1 it is required to update the version of Ceph to at least this version.
  • The CSI driver is enabled by default. Documentation has been changed significantly for block and filesystem to use the CSI driver instead of flex.
    While the flex driver is still supported, it is anticipated to be deprecated soon.
  • The Mon.PreferredCount setting has been removed.
  • The imagePullSecrets option was added to the Helm chart

EdgeFS

  • With Rook EdgeFS operator CRDs graduated to v1, please follow the upgrade procedure to convert existing CRDs and running setups.
  • EdgeFS versions greater than v1.2.62 require full cluster restart.

Deprecations

Ceph

  • For RGW, when deploying an object store with object.yaml, using allNodes is no longer supported, and a transition path has been implemented in the code to migrate automatically without user intervention.
    If you were using allNodes: true, Rook will gradually replace each DaemonSet with a Deployment (a one-for-one replacement). This operation will be triggered on an update or when a new version of the operator is deployed. Once complete, it is expected that you edit your object store CR with kubectl -n rook-ceph edit cephobjectstore.ceph.rook.io/my-store and set allNodes: false and instances to the current number of RGW instances (a sketch of the resulting gateway section follows).
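A rough sketch of the gateway section after that migration (the store name and instance count are placeholders; other required spec fields are omitted for brevity):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  gateway:
    port: 80
    # after the DaemonSet-to-Deployment migration, pin an explicit replica count
    instances: 3
    allNodes: false
```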
rook - v1.0.6

Published by travisn about 5 years ago

Rook v1.0.6 is a patch release limited in scope and focusing on bug fixes.

Improvements

Ceph

  • Set public-addr flag for MGR (#3136, @galexrt)
  • Remove the 20GB default for OSD db size and allow ceph-volume to use all available space (#3448, @travisn)
  • Correctly set osd mem target for init-ed clusters (#3638, @odinuge)
  • Properly propagate errors when deleting mds deployment (#3641, @odinuge)
rook - v1.0.5

Published by travisn about 5 years ago

Rook v1.0.5 is a patch release limited in scope and focusing on bug fixes.

Improvements

Ceph

  • Set owner references properly to avoid unexpected K8s garbage collection (#3575, @renekalff)
  • Add prometheus annotations properly to mgr deployment (#3204, @k0da)
rook - v1.0.4

Published by travisn over 5 years ago

Rook v1.0.4 is a patch release limited in scope and focusing on bug fixes.

Improvements

Ceph

  • RGW: Set proper port syntax for beast in nautilus deployments (#3411, @leseb)
  • Stop creating initial crushmap to avoid incorrect crush map warning (#3138, @leseb)
  • Use correct rounding of PV size for binding of PVCs (for example G or Gi) (#2922 #3391, @noahdesu)
rook - v1.0.3

Published by travisn over 5 years ago

Rook v1.0.3 is a patch release limited in scope and focusing on bug fixes.

Improvements

Ceph

  • Fix OSD startup on SDN for error "Cannot assign requested address" (#3140, @leseb)
  • Change default frontend on nautilus to beast (#2707, @leseb)
  • RGW daemon updates: (#3245 #2474 #2957, @leseb)
    • Remove support for AllNodes where we would deploy one rgw per node on all the nodes
    • Each rgw deployed has its own cephx key
    • Upgrades will automatically transition these changes to the rgw daemons
  • Correct --ms-learn-addr-from-peer=false argument for ceph-osd (@leseb)
  • Fix operator panic when updating the CephCluster CR to run unsupported Octopus (#3137, @leseb)
  • Add metrics for the flexvolume driver (#1659, @nabokihms)
  • Set the fully qualified apiVersion on the OwnerReferences for cleanup on OpenShift (#2944, @travisn)
  • Stop enforcing crush tunables for octopus warning (#3138, @leseb)
  • Apply the osd nautilus flag for upgrade (#2960, @leseb)

EdgeFS

  • Support for rook device full path (#3374, @dyusupov)
rook - v0.9.3

Published by travisn over 5 years ago

Rook v0.9.3 is a patch release limited in scope and focusing on bug fixes.

Improvements

Cassandra

  • Fix the mount point for the PVs (#2443, @yanniszark)

Ceph

  • Improve mon failover cleanup and operator restart during failover (#2262 #2570, @travisn)
  • Enable host ipc for osd encryption (#923, @noahdesu)
  • Add missing "host path requires privileged" setting to the helm chart (#2735, @galexrt)
rook - v1.0.2

Published by travisn over 5 years ago

Rook v1.0.2 is a patch release limited in scope and focusing on bug fixes.

Improvements

Ceph

  • Handle false positives for triggering orchestration after hotplugging devices (#3185 #3131 #3059, @noahdesu)
  • Set fsgroup only on the top level of the cephfs mount (#2254, @travisn)
  • Improved diff checking when the CR changes (#3166, @d-luu)
  • Retry checking the ceph version if failed (#3227, @leseb)
  • Resource limits: Document the minimum limits, add limits for rbd mirror daemons, and fix the memory check (@leseb)
  • Deserialization of pg dump for nautilus when removing OSDs (#3178, @rohan47)
  • Start OSDs only for correct ceph cluster when there are multiple ceph clusters on the same nodes (#2696, @sp98)
  • Clarify documentation for creating OSDs (#3242 #3243, @travisn)
  • Update the operator base image to v14.2.1 (#3120, @rohan47)
  • Add separator to scc yaml example (#3096, @umangachapagain)
  • Check before copying binaries in osd pods (#3099, @kshlm)

EdgeFS

  • Proper use of metadataOnly property (#3273, @dyusupov)
  • Prevent multiple targets deployment on the same node (#3182, @sabbot)