rook

Storage Orchestration for Kubernetes

APACHE-2.0 License


rook - v1.0.1

Published by travisn over 5 years ago

Rook v1.0.1 is a patch release limited in scope and focusing on bug fixes.

Improvements

Ceph

  • Support for metadataDevice for configuring OSDs (#3108, @mvollman)
  • Upgrades with host networking will be supported from v0.9 (#3111, @travisn)
  • Upgrade documentation to enable msgr2 protocol, which requires failing over the mons to the new default port (#3104, @travisn)
  • Teardown documentation updated with the new common.yaml and related changes (#3148, @galexrt)

EdgeFS

  • Fixed S3X not selecting the correct EdgeFS version and not starting the proxy (#3174, @sabbot)
rook - v1.0.0

Published by travisn over 5 years ago

Major Themes

  • Ceph: Nautilus is supported, improved automation for Ceph upgrades, experimental CSI driver, NFS, and much more!
  • EdgeFS: CRDs declared beta, upgrade guide, new storage protocols, a new management experience, and much more!
  • Minio: Responsive operator reconciliation loop and added health checks
  • NFS: Dynamic provisioning of volumes

Action Required

If you are running a previous Rook version, please see the corresponding storage provider upgrade guide.

Notable Features

  • The minimum version of Kubernetes supported by Rook changed from 1.8 to 1.10.
  • K8s client packages updated from version 1.11.3 to 1.14.0
  • Rook Operator switches from Extensions v1beta1 to use Apps v1 API for DaemonSet and Deployment.

Ceph

  • Ceph Nautilus (v14) is now supported by Rook and is the default version deployed by the examples.
  • The Ceph-CSI driver is available in experimental mode.
  • An operator restart is no longer needed for applying changes to the cluster in the following scenarios:
    • When a node is added to the cluster, OSDs will be automatically configured as needed.
    • When a device is attached to a storage node, OSDs will be automatically configured as needed.
    • Any change to the CephCluster CR will trigger updates to the cluster.
    • Upgrading the Ceph version will update all Ceph daemons (in v0.9, mds and rgw daemons were skipped)
  • Ceph status is surfaced in the CephCluster CR and periodically updated by the operator (default is 60s). The interval can be configured with the ROOK_CEPH_STATUS_CHECK_INTERVAL env var.
  • A CephNFS CRD will start NFS daemon(s) for exporting CephFS volumes or RGW buckets. See the NFS documentation.
  • The flex driver can be configured to properly disable SELinux relabeling and FSGroup with the settings in operator.yaml.
  • The number of mons can be increased automatically when new nodes come online. See the preferredCount setting in the cluster CRD documentation.
  • New Kubernetes nodes, or nodes that no longer have the NoSchedule taint, are added automatically to the existing Rook cluster if useAllNodes is set.
  • Pod logs can be written to the filesystem on demand as of Ceph Nautilus 14.2.1 (see common issues)
  • rook-version and ceph-version labels are now applied to Ceph daemon Deployments, DaemonSets,
    Jobs, and StatefulSets. These identify the Rook version which last modified the resource and the
    Ceph version which Rook has detected in the pod(s) being run by the resource.
  • OSDs provisioned by ceph-volume now support the metadataDevice and databaseSizeMB options (see the sketch after this list).
  • The operator will no longer remove OSDs from specified nodes when the node is tainted with
    automatic Kubernetes taints. OSDs can still be removed by more explicit methods. See the "Node
    Settings" section of the Ceph Cluster CRD documentation for full details.
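
For reference, a minimal sketch of how the metadataDevice and databaseSizeMB options fit into a CephCluster CR. The node and device names are placeholders, not values taken from these notes.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.1       # Nautilus (v14), the default in the v1.0 examples
  dataDirHostPath: /var/lib/rook
  storage:
    useAllNodes: false
    nodes:
      - name: node-a               # placeholder node name
        devices:
          - name: sdb              # placeholder data device
        config:
          metadataDevice: nvme0n1  # place the OSD metadata on a faster device
          databaseSizeMB: "1024"   # size of the OSD metadata database
```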

EdgeFS

  • All EdgeFS CRDs are declared Beta v1; all users are recommended to follow the documented migration procedure
  • Automatic host validation and preparation of sysctl settings
  • Support for OpenStack/SWIFT CRD
  • Support for S3 bucket as DNS subdomain
  • Support for Block (iSCSI) CSI Provisioner
  • Support for Prometheus Dashboard and REST APIs
  • Support for Management GUI with automated CRD wizards
  • Support for failure domains and zone provisioning
  • Support for Multi-Namespace clusters with single operator instance
  • Support for embedded mode and low-resource deployments with as little as 1GB of memory and 2 CPU cores
  • Many bug fixes and usability improvements

Breaking Changes

  • Rook no longer supports Kubernetes 1.8 and 1.9.
  • The build process no longer publishes the alpha, beta, and stable channels. The only channels published are master and release.
  • The stability of storage providers is determined by the CRD versions rather than the overall product build, thus the channels were renamed to match this expectation.

Ceph

  • Rook no longer supports running more than one monitor on the same node when hostNetwork and allowMultiplePerNode are true.
  • The example operator and CRD yaml files have been refactored to simplify configuration. See the examples help topic for more details.
    • The common resources are now factored into common.yaml from operator.yaml and cluster.yaml.
    • common.yaml: Creates the namespace, RBAC, CRD definitions, and other common operator and cluster resources
    • operator.yaml: Only contains the operator deployment
    • cluster.yaml: Only contains the cluster CRD
    • Multiple examples of the operator and CRDs are provided for common usage scenarios.
    • By default, a single namespace (rook-ceph) is configured instead of two namespaces (rook-ceph-system and rook-ceph). New and upgraded clusters can still be configured with the operator and cluster in two separate namespaces. Existing clusters will maintain their namespaces on upgrade.
  • Rook will no longer create a directory-based OSD in the dataDirHostPath if no directories or devices are specified or if there are no disks on the host.
  • Containers in the mon, mgr, mds, rgw, and rbd-mirror pods have been removed or renamed.
  • Config paths in mon, mgr, mds and rgw containers are now always under
    /etc/ceph or /var/lib/ceph and as close to Ceph's default path as possible regardless of the
    dataDirHostPath setting.
  • The rbd-mirror pod labels now read rbd-mirror instead of rbdmirror for consistency.

Known Issues

Ceph

  • An object store created with Rook v1.0 will be configured incorrectly when running Ceph Luminous or Mimic. Users upgrading from v0.9 are recommended to either create the object store before upgrading, or update to Nautilus before creating an object store.
rook - v0.9.2

Published by travisn over 5 years ago

Rook v0.9.2 is a patch release limited in scope and focusing on bug fixes.

Improvements

Ceph

  • Correctly capture and log the stderr output from child processes (#2479 #2536, @noahdesu)
  • Allow disabling the fsGroup setting when mounting a volume (#2254, @travisn); see the sketch after this list
  • Allow configuration of SELinux relabeling (#2417, @allen13)
  • Correctly set the secretKey used for cephfs mounts (#2484, @galexrt)
  • Set ceph-mgr privileges to prevent the dashboard from failing on rbd mirroring settings (#2404, @travisn)
  • Correctly configure the ssl certificate for the RGW service (#2435, @syncroswitch)
  • Allow configuration of the dashboard port (#2412, @noahdesu)
  • Allow disabling of ssl on the dashboard (#2433, @noahdesu)
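
For reference, a sketch of the corresponding operator.yaml settings. The environment variable names ROOK_ENABLE_FSGROUP and ROOK_ENABLE_SELINUX_RELABELING are assumptions based on later Rook releases; they are not quoted in these notes.

```yaml
# Illustrative env entries on the Rook operator deployment.
env:
  - name: ROOK_ENABLE_FSGROUP
    value: "false"   # disable setting fsGroup when mounting a volume
  - name: ROOK_ENABLE_SELINUX_RELABELING
    value: "false"   # disable SELinux relabeling of volumes
```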

General

  • Increase timeout for starting GCE instances (#2549, @Defilan)
  • Fix the links to Slack (#2459, @bassam)
rook - v0.9.1

Published by travisn almost 6 years ago

Rook v0.9.1 is a patch release limited in scope and focusing on bug fixes.

Improvements

Ceph

  • Build with arch-specific Ceph base image to fix the arm64 build (#2406, @travisn)
  • Detect the correct version of the Ceph image when the crd is edited (#2353, @travisn)
  • Correct the name of the dataBlockPool parameter for the storage class of an erasure-coded pool (#2370, @galexrt)
  • Retry generating the self signed cert if the dashboard module is not ready (#2298, @travisn)
  • Set the server_addr on the prometheus and dashboard modules to avoid health errors (#2335, @travisn)
  • Documentation: Add the missing mon count to a cluster CRD example (@multi-io) and add the stable channel to the helm chart docs (@jbw976)

EdgeFS

  • Correct device discovery (#2359, @dyusupov)
  • Clarifications for EdgeFS documentation (@dyusupov)
rook - v0.9.0

Published by travisn almost 6 years ago

Major Themes

  • Storage Providers for Cassandra, EdgeFS, and NFS were added
  • Ceph CRDs have been declared stable V1.
  • Ceph versioning is decoupled from the Rook version. Luminous and Mimic can be run in production, or Nautilus in experimental mode.
  • Ceph upgrades are greatly simplified

Action Required

  • Existing clusters that are running previous versions of Rook will need to be migrated to be compatible with the v0.9 operator and to begin using the new ceph.rook.io/v1 CRD types. Please follow the instructions in the upgrade user guide to successfully migrate your existing Rook cluster to the new release.

Notable Features

  • The minimum version of Kubernetes supported by Rook changed from 1.7 to 1.8.
  • K8s client-go updated from version 1.8.2 to 1.11.3

Ceph

  • The Ceph CRDs are now v1. The operator will automatically convert the CRDs from v1beta1 to v1.
  • Different versions of Ceph can be orchestrated by Rook. Both Luminous and Mimic are now supported, with Nautilus coming soon.
    The version of Ceph is specified in the cluster CRD with the cephVersion.image property. For example, to run Mimic you could use the image ceph/ceph:v13.2.2-20181023
    or any other image found on the Ceph DockerHub (see the sketch after this list).
  • The fsType default in the StorageClass examples is now XFS, bringing it in line with Ceph recommendations.
  • Rook Ceph block storage provisioner can now correctly create erasure coded block images. See Advanced Example: Erasure Coded Block Storage for an example usage.
  • Service account (rook-ceph-mgr) added for the mgr daemon to grant the mgr orchestrator modules access to the K8s APIs.
  • The reclaimPolicy parameter of the StorageClass definition is now supported.
  • The toolbox manifest now creates a deployment based on the rook/ceph image instead of creating a pod on a specialized rook/ceph-toolbox image.
  • The frequency of discovering devices on a node is reduced to 60 minutes by default, and is configurable with the setting ROOK_DISCOVER_DEVICES_INTERVAL in operator.yaml.
  • The number of mons can be changed by updating the mon.count in the cluster CRD.
  • RBD Mirroring is enabled by Rook. By setting the number of rbd mirroring workers, the daemon(s) will be started by rook. To configure the pools or images to be mirrored, use the Rook toolbox to run the rbd mirror configuration tool.
  • Object Store User creation via CRD for Ceph clusters.
  • Ceph MON, OSD, MGR, MDS, and RGW deployments (or DaemonSets) will be updated/upgraded automatically with updates to the Rook operator.
  • Ceph OSDs are created with the ceph-volume tool when configuring devices, adding support for multiple OSDs per device. See the OSD configuration settings.
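
As a minimal sketch of the two settings called out above (cephVersion.image and mon.count), using the Mimic image from the example in these notes; all other fields are omitted.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v13.2.2-20181023  # Mimic; any image from the Ceph DockerHub can be used
  mon:
    count: 3                           # change this value to grow or shrink the mon quorum
  dataDirHostPath: /var/lib/rook
```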

NFS

  • Network File System (NFS) is now supported by Rook with a new operator to deploy and manage this widely used server. NFS servers can be automatically deployed by creating an instance of the new nfsservers.nfs.rook.io custom resource. See the NFS server user guide to get started with NFS.
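
A sketch of the new nfsservers.nfs.rook.io resource. The spec fields shown (replicas, exports) follow the shape of the NFS operator's CRD but are not spelled out in these notes; the names and the backing PVC are placeholders.

```yaml
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
    - name: share1
      persistentVolumeClaim:
        claimName: nfs-default-claim   # placeholder PVC that backs the export
```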

Cassandra

  • Cassandra and Scylla are now supported by Rook with the rook-cassandra operator. Users can now deploy, configure and manage Cassandra or Scylla clusters, by creating an instance of the clusters.cassandra.rook.io custom resource. See the user guide to get started.

EdgeFS Geo-Transparent Storage

  • EdgeFS is now supported by a Rook operator, providing a high-performance, low-latency object storage system with geo-transparent data access via standard protocols. See the user guide to get started.

Breaking Changes

  • The Rook container images are no longer published to quay.io, they are published only to Docker Hub. All manifests have referenced Docker Hub for multiple releases now, so we do not expect any directly affected users from this change.
  • Rook no longer supports Kubernetes 1.7. Users running Kubernetes 1.7 on their clusters are recommended to upgrade to Kubernetes 1.8 or higher. If you are using kubeadm, you can follow this guide to upgrade from Kubernetes 1.7 to 1.8. If you are using kops or kubespray to manage your Kubernetes cluster, follow the respective project's upgrade guide.

Ceph

  • The Ceph CRDs are now v1. With the version change, the kind has been renamed for the following Ceph CRDs (a sketch follows this list):
    • Cluster --> CephCluster
    • Pool --> CephBlockPool
    • Filesystem --> CephFilesystem
    • ObjectStore --> CephObjectStore
    • ObjectStoreUser --> CephObjectStoreUser
  • The rook-ceph-cluster service account was renamed to rook-ceph-osd as this service account only applies to OSDs.
    • On upgrade from v0.8, the rook-ceph-osd service account must be created before starting the operator on v0.9.
    • The serviceAccount property has been removed from the cluster CRD.
  • Ceph mons are named consistently with other daemons with the letters a, b, c, etc.
  • Ceph mons are now created with Deployments instead of ReplicaSets to improve the upgrade implementation.
  • Ceph mon, osd, mgr, mds, and rgw container names in pods have changed with the refactors to initialize the daemon environments via pod InitContainers and run the Ceph daemons directly from the container entrypoint.
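
A sketch of what the rename means for one of the CRDs, using a hypothetical pool name; only the apiVersion and kind change.

```yaml
# Before (v0.8, ceph.rook.io/v1beta1)
apiVersion: ceph.rook.io/v1beta1
kind: Pool
metadata:
  name: replicapool        # placeholder name
  namespace: rook-ceph
---
# After (v0.9, ceph.rook.io/v1)
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
```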

Minio

  • Minio no longer exposes a configurable port for each distributed server instance to use. This was an internal-only port that should not need to be configured by the user. All connections from users and clients are expected to come in through the configurable Service instance.

Known Issues

Ceph

  • Upgrades to Nautilus are not supported. Specifically, OSDs configured before the upgrade (without ceph-volume) will fail to start on Nautilus. Nautilus is not officially supported until its release, but it is otherwise expected to work in test clusters.
rook - v0.8.3

Published by travisn about 6 years ago

Rook v0.8.3 is a patch release limited in scope and focusing on bug fixes.

Improvements

rook - v0.8.2

Published by travisn about 6 years ago

Rook v0.8.2 is a patch release limited in scope and focusing on bug fixes.

Improvements

  • The operator will handle large numbers of OSDs more gracefully by refreshing status and throttling the watch on the osd status configmap when starting the OSDs (@rootfs)
  • More reliable mounting of PVCs due to #2072 (@ganeshmaharaj)
  • Rook's flex drivers will be loaded more slowly to work around a k8s race condition (@travisn)
  • Logging has been added to help troubleshoot #1553 around reformatting of a device. (@travisn)
rook - v0.8.1

Published by travisn about 6 years ago

Rook v0.8.1 is a patch release limited in scope and focusing on bug fixes.

Improvements

  • An upgrade guide has been authored for upgrading from the v0.8.0 release to this v0.8.1 patch release. Please refer to this new guide when upgrading to v0.8.1. (@travisn)
  • Ceph is updated to Luminous 12.2.7. (@travisn)
  • Ceph OSDs will be automatically updated by the operator when there is a change to the operator version or when the OSD configuration changes. See the OSD upgrade notes. (@travisn)
  • Ceph erasure-coded pools have the min_size set to the number of data chunks. (@galexrt)
  • Ceph OSDs will refresh their config at each startup with an init container. (@travisn)
  • Ceph OSDs will respect the placement specified in the cluster CRD. (@rootfs)
  • Ceph OSDs will use the update strategy of recreate to avoid resource contention at restart. (@galexrt)
  • Pod names for Ceph OSDs are truncated in environments with long host names (@galexrt)
  • The documentation for Rook flexvolume configuration was improved to reduce confusion and address all known scenarios and environments (@galexrt)
rook - v0.8.0

Published by jbw976 over 6 years ago

Major Themes

  • Framework and architecture to support general cloud-native storage orchestration, with new support for CockroachDB and Minio. More storage providers will be integrated in the near future.
  • Ceph support has been graduated to Beta maturity
  • The security model has been improved; the cluster admin now has full control over the permissions granted to Rook, and the privileges required to run the operator(s) are now much more restricted.
  • OpenShift is now a supported environment.

Action Required

  • Existing clusters that are running previous versions of Rook will need to be upgraded/migrated to be compatible with the v0.8 operator and to begin using the new rook.io/v1alpha2 and ceph.rook.io/v1beta1 CRD types. Please follow the instructions in the upgrade user guide to successfully migrate your existing Rook cluster to the new release, as it has been updated with specific steps to help you upgrade to v0.8.

Notable Features

  • Rook is now architected to be a general cloud-native storage orchestrator, and can now support multiple types of storage and providers beyond Ceph.
    • CockroachDB is now supported by Rook with a new operator to deploy, configure and manage instances of this popular and resilient SQL database. Databases can be automatically deployed by creating an instance of the new cluster.cockroachdb.rook.io custom resource. See the CockroachDB user guide to get started with CockroachDB.
    • Minio is also supported now with an operator to deploy and manage this popular high performance distributed object storage server. To get started with Minio using the new objectstore.minio.rook.io custom resource, follow the steps in the Minio user guide.
  • The status of Rook is no longer published for the project as a whole. Going forward, status will be published per storage provider or API group. Full details can be found in the project status section of the README.
    • Ceph support has graduated to Beta.
  • The rook/ceph image is now based on the ceph-container project's 'daemon-base' image so that Rook no longer has to manage installs of Ceph in the image. This image is based on CentOS 7.
  • One OSD will run per pod to increase the reliability and maintainability of the OSDs. No longer will restarting an OSD pod mean that all OSDs on that node will go down. See the design doc.
  • Ceph tools can be run from any rook pod.
  • Output from stderr will be included in error messages returned from the exec of external tools.
  • The Rook operator no longer creates the CRD or TPR resources at runtime. Instead, those resources are provisioned during deployment via helm or kubectl.
  • IPv6 environments are now supported.
  • Rook CRD code generation is now working with BSD (Mac) and GNU sed.
  • The Ceph dashboard can be enabled by the cluster CRD.
  • monCount has been renamed to count and has been moved into the mon spec. Additionally, the default when unspecified or 0 is now 3.
  • You can now toggle whether multiple Ceph mons may be placed on one node with the allowMultiplePerNode option (default false) in the mon spec (see the sketch after this list).
  • Added nodeSelector to Rook Ceph operator Helm chart.
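
A sketch of the relocated mon settings in a v0.8 cluster CR; other fields are omitted.

```yaml
apiVersion: ceph.rook.io/v1beta1
kind: Cluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mon:
    count: 3                     # was monCount; now defaults to 3 if unspecified or 0
    allowMultiplePerNode: false  # default; set true to allow multiple mons on one node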

Breaking Changes

  • It is recommended to only use official releases of Rook, as unreleased versions from the master branch are subject to changes and incompatibilities that will not be supported in the official releases.
  • Removed support for Kubernetes 1.6, including the legacy Third Party Resources (TPRs).
  • Various paths and resources have changed to accommodate multiple storage providers:
    • Examples: The yaml files for creating a Ceph cluster can be found in cluster/examples/kubernetes/ceph. The yaml files that are provider-independent will still be found in the cluster/examples/kubernetes folder.
    • CRDs: The apiVersion of the Rook CRDs is now provider-specific, such as ceph.rook.io/v1beta1 instead of rook.io/v1alpha1.
    • Cluster CRD: The Ceph cluster CRD has had several properties restructured for consistency with other storage provider CRDs. Rook will automatically upgrade the previous Ceph CRD versions to the new versions with all the compatible properties. When creating the cluster CRD based on the new ceph.rook.io apiVersion you will need to take note of the new settings structure.
    • Container images: The container images for Ceph and the toolbox are now rook/ceph and rook/ceph-toolbox. The steps in the upgrade user guide will automatically start using these new images for your cluster.
    • Namespaces: The example namespaces are now provider-specific. Instead of rook-system and rook, you will see rook-ceph-system and rook-ceph.
    • Volume plugins: The dynamic provisioner and flex driver are now based on ceph.rook.io instead of rook.io
  • Minimal privileges are configured with a new cluster role for the operator and Ceph daemons, following the new security design.
    • A role binding must be defined for each cluster to be managed by the operator.
  • OSD pods are started by a deployment, instead of a daemonset or a replicaset. The new OSD pods will crash loop until the old daemonset or replicasets are removed.

Removal of the API service and rookctl tool

The REST API service has been removed. All cluster configuration is now accomplished through the
CRDs or with the Ceph tools in the toolbox.

The tool rookctl has been removed from the toolbox pod. Cluster status and configuration can be queried and changed with the Ceph tools.
Here are some sample commands to help with your transition.

  • rookctl status: replaced by ceph status. Query the status of the storage components.
  • rookctl block: see the Block storage and direct Block config. Create, configure, mount, or unmount a block image.
  • rookctl filesystem: see the Filesystem and direct File config. Create, configure, mount, or unmount a file system.
  • rookctl object: see the Object storage config. Create and configure object stores and object users.

Deprecations

  • Legacy CRD types in the rook.io/v1alpha1 API group have been deprecated. The types from
    rook.io/v1alpha2 should now be used instead.
  • The legacy command flag public-ipv4 in the ceph components has been deprecated; public-ip should now be used instead.
  • The legacy command flag private-ipv4 in the ceph components has been deprecated; private-ip should now be used instead.
rook - v0.7.1

Published by jbw976 over 6 years ago

Rook v0.7.1 is a patch release limited in scope and focusing on bug fixes.

Improvements

  • The version of Ceph has been updated to Luminous 12.2.4 (@bassam)
  • When a Ceph monitor is failed over, it will be assigned an appropriate IP address when host networking is being used (@galexrt)
  • The upgrade user guide has been updated to include steps for upgrading from v0.6.x to the v0.7 releases (@travisn)
  • An issue was fixed that prevented the Helm charts from being correctly published to https://charts.rook.io/ (@bassam)
  • In environments where the Kubernetes cluster does not have a version set, the Helm charts will now appropriately proceed (@TimJones)
rook - v0.7.0

Published by jbw976 over 6 years ago

Notable Features

  • The Cluster CRD can now be edited/updated to add and remove storage nodes. Note that only adding/removing entire nodes is currently supported, but adding individual disks/directories will also be supported soon.
  • The rook/rook image now uses the official Ceph packages instead of compiling from source. This ensures that Rook always ships the latest stable and supported Ceph version and reduces the developer burden for maintenance and building.
  • Resource limits are now supported for all pod types. You can constrain the CPU and memory usage for all Rook pods by setting resource limits in the Cluster CRD.
  • Monitoring is now done through the Ceph MGR service for Ceph storage.
  • The CRUSH root can be specified for pools with the crushRoot property, rather than always using the default root. Configuration of the CRUSH hierarchy is necessary with the ceph osd crush commands in the toolbox.
  • A full client API has been generated for all Kubernetes resource types defined in Rook. This allows you to programmatically interact with Rook deployments using golang.
  • The full list of resolved issues can be found in the 0.7 milestone page

Operator Settings

  • AGENT_TOLERATION: Toleration can be added to the Rook agent, such as to run on the master node.
  • FLEXVOLUME_DIR_PATH: Flex volume directory can be overridden on the Rook agent.
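
A sketch of how the two settings above might be set on the Rook operator deployment; the container name and the values shown are illustrative.

```yaml
containers:
  - name: rook-operator                # illustrative container name
    image: rook/rook:v0.7.0
    env:
      - name: AGENT_TOLERATION
        value: "NoSchedule"            # e.g. tolerate the master node's NoSchedule taint
      - name: FLEXVOLUME_DIR_PATH
        value: "/var/lib/kubelet/volumeplugins"  # override the flex volume directory
```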

Breaking Changes

  • The armhf build of Rook has been removed. Ceph is not supported or tested on armhf. arm64 support continues.

Cluster CRD

  • Removed the versionTag property. The container version to launch in all pods will be the same as the version of the operator container.
  • Added the cluster.rook.io finalizer. When a cluster is deleted, the operator will cleanup resources and remove the finalizer, which then allows K8s to delete the CRD.

Operator

  • Removed the ROOK_REPO_PREFIX env var. All containers will be launched with the same image as the operator

Deprecations

rook - v0.6.2

Published by travisn almost 7 years ago

Rook v0.6.2 is a patch release limited in scope with one bug fix.

Improvements

  • Allow Rook to run when RBAC is disabled in the Kubernetes cluster #1221 (@kokhang)
rook - v0.6.1

Published by jbw976 almost 7 years ago

Rook v0.6.1 is a patch release limited in scope and focusing on bug fixes.

Improvements

  • Non-default locations in Kubernetes 1.8 for the volume plugin directory will now correctly be discovered by the rook agents #1162 (@jbw976)
  • Fixed an issue where the rook-api pod would over time begin to consume 100% CPU #1195 (@jbw976)
  • Deletion of an object store will now be scoped to just the specified object store #1198 (@travisn)
  • Volume attachment locks will be cleaned up appropriately when a volume fails to attach for a pod and then the pod is deleted and recreated #1214 (@kokhang)
  • The dynamic volume provisioner in the Rook Operator no longer encounters a panic when a volume from before v0.6.0 is encountered #1252 (@Ulexus)
rook - v0.6.0

Published by jbw976 almost 7 years ago

Major Themes

Rook v0.6 is a release focused on making progress towards our goal of running Rook everywhere Kubernetes runs. There is a new Rook volume plugin and Rook agent that integrate into Kubernetes to provide on-demand block and shared filesystem storage for pods with a streamlined and seamless experience. We are keeping the Alpha status for v0.6 due to these two new components. The next release (v0.7) will be our first beta quality release.

Rook has continued its effort for deeper integration with Kubernetes by defining Custom Resource Definitions (CRDs) for shared filesystems and object storage as well, allowing management of all storage types natively via kubectl.

Reliability has been further improved with a focus on self-healing functionality to recover the cluster to a healthy state when key components have been detected as unhealthy. Investment in the automated test pipelines has been made to increase automated scenario and environment coverage across multiple versions of Kubernetes and cloud providers.

Finally, the groundwork has been laid for automated software upgrades in future releases by both completing the design and publishing a manual upgrade user guide in this release.

Notable Features

  • Rook Flexvolume (@kokhang)
    • Introduces a new Rook plugin based on Flexvolume
    • Integrates with Kubernetes' Volume Controller framework
    • Handles all volume attachment requests such as attaching, mounting and formatting volumes on behalf of Kubernetes
    • Simplifies the deployment by not requiring the ceph-tools package to be installed
    • Allows block devices and filesystems to be consumed without any user secret management
    • Improves experience with fencing and volume locking
  • Rook-Agents (@kokhang and @jbw976)
    • Configured and deployed via Daemonset by Rook-operator
    • Installs the Rook Flexvolume plugin
    • Handles all storage operations required on the node, such as attaching devices, mounting volumes and formatting filesystems.
    • Performs node cleanups during Rook cluster teardown
  • File system (@travisn)
    • File systems are defined by a CRD and handled by the Operator
    • Multiple file systems can be created, although still experimental in Ceph
    • Multiple data pools can be created
    • Multiple MDS instances can be created per file system
    • An MDS is started in standby mode for each active MDS
    • Shared filesystems are now supported by the Rook Flexvolume plugin
      • Improved and streamlined experience, now there are no manual steps to copy monitor information or secrets
      • Multiple shared filesystems can be created and consumed within the cluster
      • More information can be found in the shared filesystems user guides
  • Object Store (@travisn)
    • Object Stores are defined by a CRD and handled by the Operator
    • Multiple object stores supported through Ceph realms
  • OSDs (@jbw976, @paha, and @phsiao)
    • Bluestore is now the default backend store for OSDs when creating a new Rook cluster.
    • Bluestore can now be used on directories in addition to raw block devices that were already supported.
    • If an OSD loses its metadata and config but still has its data devices, the OSD will automatically regenerate the lost metadata to make the data available again.
    • If an OSD is observed to be in the down status for an extended period of time then it will automatically be restarted.
    • OSDs can now run with lower privileges when devices are not being used.
  • Pools (@travisn)
    • The failure domain for the CRUSH map can be specified on pools with the failureDomain property (see the sketch after this list)
    • Pools created by file systems or object stores are configurable with all options defined in the pool CRD
  • Upgrade (@jbw976)
    • A manual upgrade user guide has been published to help guide users through the recommended upgrade process.
  • API (@galexrt)
    • The API pod will continuously refresh its list of monitor endpoints in order to always be able to service requests even after monitor topology has changed
  • Test framework (@dangula)
    • Test pipelines cover Kubernetes 1.6-1.8 and cloud providers AWS and GCE
    • Long haul testing pipeline will run for multiple days to simulate longer term workloads
    • Raw block devices in addition to local filesystems (directories) are now tested
    • Helm chart installation is now tested
  • HostNetwork (@galexrt)
    • It is now possible to launch a Rook cluster with host networking support. See further details in the Cluster CRD documentation.
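
A minimal sketch of a pool using the new failureDomain property. The apiVersion, pool name, and replica size are illustrative, and the field names follow the pre-v0.8 pool CRD (replication had not yet been renamed to replicated).

```yaml
apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  name: replicapool        # placeholder name
  namespace: rook
spec:
  failureDomain: host      # spread replicas across hosts rather than individual OSDs
  replication:
    size: 2
```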

Breaking Changes

  • Rook Standalone
    • Standalone mode has been disabled and is no longer supported. A Kubernetes environment is required to run Rook.
  • Rook-operator now deploys in rook-system namespace
    • If using the example manifest of rook-operator.yaml, the rook-operator deployment is now changed to deploy in the rook-system namespace.
  • Rook Flexvolume
    • Persistent volumes from previous releases were created using the RBD plugin. These should be deleted and recreated in order to use the new Flexvolume plugin.
  • Pool CRD
    • replication renamed to replicated
    • erasureCode renamed to erasureCoded
  • OSDs
    • OSD pods now require RBAC permissions to create/get/update/delete/list config maps.
      An upgraded operator will create the necessary service account, role, and role bindings to enable this.
  • API
    • The API pod now uses RBAC permissions that are scoped only to the namespace it is running in.
      An upgraded operator will create the necessary service account, role, and role bindings to enable this.
  • Filesystem
    • The Rook Flexvolume uses the mds_namespace option to specify a CephFS filesystem. This is only available on kernel v4.7 or newer. On older kernels, if there is more than one filesystem in the cluster, the mount operation can be inconsistent. See this doc.

Known Issues

  • Rook Flexvolume
    • For Kubernetes cluster 1.7.x and older, Kubelet will need to be restarted in order to load the new flexvolume. This has been resolved in K8S 1.8. For more information about the requirements, refer to this doc
    • For Kubernetes cluster 1.6.x, the attacher/detacher controller needs to be disabled in order to load the new Flexvolume. This is caused by a regression on 1.6.x. For more information about the requirements, refer to this doc
    • For CoreOS and Rancher Kubernetes, the Flexvolume plugin dir will need to be specified to be different than the default. Refer to Flexvolume configuration pre-reqs
  • OSD pods will not get to the running state for Kubernetes 1.6.4 or lower, due to a regression in Kubernetes that is not compatible with security context settings being applied to the OSD pods. The workaround is to upgrade Kubernetes to 1.6.5 or higher. More details can be found in the tracking issue.

Deprecations

  • Rook Standalone
    • As mentioned in the breaking changes section, Standalone mode has been removed.
  • Rook API
    • Rook has a goal to integrate natively and deeply with container orchestrators such as Kubernetes, using their extension points to manage and access the Rook storage cluster. More information can be found in the tracking issue.
  • Rookctl
    • The rookctl client is being deprecated. With the deeper and more native integration of Rook with Kubernetes, kubectl now provides a rich Rook management experience on its own. For direct management of the Ceph storage cluster, the Rook toolbox provides full access to the Ceph tools.
rook - v0.5.1

Published by jbw976 about 7 years ago

Rook v0.5.1 is a patch release limited in scope and focusing on bug fixes and build improvements.

Improvements

  • Ceph Luminous has been upgraded to 12.1.3
  • Helm charts are now built and published as part of the continuous integration pipeline. Details can be found in the Helm Chart readme
  • Improve initial monitor quorum performance so a Rook cluster can be bootstrapped more quickly
  • Rook's metrics and monitoring via Prometheus are now fully compatible with Ceph Luminous
  • Allow placement policy to be applied to manager pods
rook - v0.5.0

Published by bassam about 7 years ago

Major Themes

Rook v0.5 is a milestone release that improves reliability, adds support for newer versions of Kubernetes, picks up the latest stable release of Ceph (Luminous), and makes a number of architectural changes that pave the way to getting to Beta and adding support for other storage back-ends beyond Ceph.

Attention needed

Rook does not yet support upgrading a cluster in place. To upgrade from 0.4 to 0.5 we recommend you tear down your cluster and install Rook 0.5 fresh.

We now publish the rook containers to quay.io and Docker Hub. Docker Hub supports multi-arch containers, so a simple docker pull rook/rook will pull the right images for any of the supported architectures. We will continue to publish to quay.io for continuity.

Rook no longer runs as a single binary. For standalone mode we now require a container runtime. We now only publish containers for Rook daemons. Client tools are still released in binary form.

There is a new release site for Rook that contains all binaries, images, yaml files, test results etc.

Known Issues

If you shut down a Rook cluster without first unbinding persistent volumes, the volumes might be stuck indefinitely and require a host reboot to get cleared. See #376.

Notable Features

Kubernetes

  • Support for Kubernetes 1.7 with CRDs. Rook uses TPRs for pre-1.7 clusters.
  • Names of the deployments, services, daemonsets, pods, etc are more consistent. For example, rook-ceph-mon, rook-ceph-osd, rook-ceph-mgr, rook-ceph-rgw, rook-ceph-mds
  • Node affinity and Tolerations added to the Cluster TPR for api, mon, osd, mds, and rgw
  • Mon failover is now supported by using a service address for each mon pod
  • Each mon is managed with a replicaset
  • New Rook Operator Helm chart
  • A ConfigMap can be used to override Ceph settings in the daemons
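
As a sketch of the override mechanism: the ConfigMap name and key shown here follow later Rook releases and are assumptions, not values from these notes.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override   # assumed name; check the docs for your release
  namespace: rook              # placeholder cluster namespace
data:
  config: |
    [global]
    osd pool default size = 2  # example ceph.conf-style override applied to the daemons
```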

Ceph

  • Ceph Luminous is now the version used by Rook. Luminous is the basis of an LTS release of Ceph and introduces bluestore, which improves performance.
  • Ceph is no longer compiled into Rook as a static library. Instead we package a streamlined version of Ceph into our containers and call the ceph daemons and tools directly.
  • Ceph Kraken is no longer supported.

Tools

  • The client binary rook was renamed to rookctl
  • The daemon binary was renamed from rookd to rook
  • The rook-client container is no longer built. Run the toolbox container for access to the rookctl tool.
  • amd64, arm, and arm64 are supported by the toolbox in addition to the daemons

Build and Test

  • No more git submodules
  • Added support for armhf (experimental)
  • Faster incremental image builds based on caching
  • E2E Integration tests for Block, File, and Object
  • Block store long haul testing
rook - v0.4.0

Published by jbw976 over 7 years ago

Notes

  • Breaking changes list for v0.4.0
  • Kubernetes 1.6 is now the minimum supported version
  • In place upgrades are not yet supported
  • Rook releases now support "release channels" (master, alpha, beta, stable)
    • Users can now always try the latest from master in quay.io/rook/rookd:master-latest

Features and Improvements

The full set of completed issues can be found in the v0.4.0 milestone.

rook - v0.3.1

Published by jbw976 over 7 years ago

rook - v0.3.0

Published by travisn over 7 years ago

Rook Integration with Kubernetes

  • Rook storage as a first-class entity of Kubernetes
  • The Rook operator creates clusters and manages their health, triggered by a TPR. For each cluster, the operator creates a namespace and starts all the pods and services necessary to consume the storage
  • Block storage consumed by pods with the Ceph RBD volume plugin
  • Object storage can be enabled and consumed over S3 APIs
  • Shared file system can be enabled and consumed by the CephFS volume plugin
  • Rook tools pod for troubleshooting the cluster with Rook and Ceph command line tools
  • Multiple Rook clusters can be created, each in their own namespace (experimental)
  • By default, storage is simply in a directory for the lifetime of the OSD pod. To use available devices instead of a directory, set useAllDevices: true. Beware that this aggressively formats and consumes the devices.
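
For illustration, the relevant fragment of a cluster spec; the exact placement of the setting in the v0.3 TPR is an assumption.

```yaml
spec:
  # Use available raw devices instead of a directory-backed OSD.
  # As noted above, this aggressively formats and consumes the devices.
  useAllDevices: true
```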
rook - v0.2.2

Published by jbw976 almost 8 years ago

Stability fixes for running reliably in a vagrant kubernetes cluster.
