Published by travisn over 5 years ago
Rook v1.0.1 is a patch release limited in scope and focusing on bug fixes.
- `metadataDevice` for configuring OSDs (#3108, @mvollman)

Published by travisn over 5 years ago
If you are running a previous Rook version, please see the corresponding storage provider upgrade guide:
- Ceph Nautilus (`v14`) is now supported by Rook and is the default version deployed by the examples.
- The Ceph status check interval can be configured with the `ROOK_CEPH_STATUS_CHECK_INTERVAL` env var.
- The new `CephNFS` CRD will start NFS daemon(s) for exporting CephFS volumes or RGW buckets. See the NFS documentation.
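A minimal sketch of a `CephNFS` resource, assuming illustrative names for the server, backing pool, and RADOS namespace (none of these values come from the release notes):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: my-nfs                # assumed name
  namespace: rook-ceph
spec:
  rados:
    pool: myfs-data0          # assumed existing pool for the NFS configuration objects
    namespace: nfs-ns         # assumed RADOS namespace within that pool
  server:
    active: 1                 # number of active NFS daemon(s) to start
```

Applying such a resource directs the operator to start the NFS daemon(s); see the NFS documentation referenced above for the authoritative spec.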
- Nodes with a `NoSchedule` taint no longer get added automatically to the existing Rook cluster if `useAllNodes` is set.
- `rook-version` and `ceph-version` labels are now applied to Ceph daemon Deployments, DaemonSets, and related resources.
- `ceph-volume` now supports the `metadataDevice` and `databaseSizeMB` options.
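A hedged sketch of how the `metadataDevice` and `databaseSizeMB` options might appear in a cluster's storage configuration (node and device names are illustrative assumptions):

```yaml
spec:
  storage:
    useAllNodes: false
    nodes:
      - name: node-a               # assumed node name
        devices:
          - name: sdb              # data device for the OSD
        config:
          metadataDevice: nvme0n1  # faster device to hold OSD metadata
          databaseSizeMB: "1024"   # size of the metadata database
```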
- Rook no longer supports Kubernetes `1.8` and `1.9`.
- Builds are now published for the `master` and `release` branches.
- Mons can no longer be created when both `hostNetwork` and `allowMultiplePerNode` are `true`.
- The common resources have been factored into `common.yaml`, out of `operator.yaml` and `cluster.yaml`:
  - `common.yaml`: Creates the namespace, RBAC, CRD definitions, and other common operator and cluster resources
  - `operator.yaml`: Only contains the operator deployment
  - `cluster.yaml`: Only contains the cluster CRD
- By default a single namespace (`rook-ceph`) is configured instead of two namespaces (`rook-ceph-system` and `rook-ceph`). New and upgraded clusters can still be configured with the operator and cluster in two separate namespaces. Existing clusters will maintain their namespaces on upgrade.
- OSDs will be created under `dataDirHostPath` if no directories or devices are specified.
- Names and labels of the `mon`, `mgr`, `mds`, `rgw`, and `rbd-mirror` pods have been removed and/or changed.
- Paths used by the `mon`, `mgr`, `mds`, and `rgw` containers are now always under `/etc/ceph` or `/var/lib/ceph`, and as close to Ceph's default path as possible regardless of the `dataDirHostPath` setting.
- `rbd-mirror` pod labels now read `rbd-mirror` instead of `rbdmirror` for consistency.

Published by travisn over 5 years ago
Rook v0.9.2 is a patch release limited in scope and focusing on bug fixes.
Published by travisn almost 6 years ago
Rook v0.9.1 is a patch release limited in scope and focusing on bug fixes.
- `dataBlockPool` parameter for the storage class of an erasure-coded pool (#2370, @galexrt)
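The `dataBlockPool` parameter might be used in a StorageClass like the following sketch (the pool names and other parameter values are assumptions, not values from the release notes):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block-ec
provisioner: ceph.rook.io/block    # flex volume provisioner of this era
parameters:
  blockPool: replicapool           # replicated metadata pool (assumed name)
  dataBlockPool: ec-data-pool      # erasure-coded pool holding the data (assumed name)
  clusterNamespace: rook-ceph
```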
- Set `server_addr` on the prometheus and dashboard modules to avoid health errors (#2335, @travisn)

Published by travisn almost 6 years ago
This release requires migration to the new `ceph.rook.io/v1` CRD types. Please follow the instructions in the upgrade user guide to successfully migrate your existing Rook cluster to the new release.

- The minimum supported version of Kubernetes has changed from `1.7` to `1.8`.
- The Ceph image used by the examples is `ceph/ceph:v13.2.2-20181023`.
- The `fsType` default for StorageClass examples is now XFS, to bring it in line with Ceph recommendations.
- A new service account (`rook-ceph-mgr`) was added for the mgr daemon to grant the mgr orchestrator modules access to the K8s APIs.
- The `reclaimPolicy` parameter of the `StorageClass` definition is now supported.
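As a sketch combining the `reclaimPolicy` support and the XFS default (pool and namespace names are illustrative assumptions):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
reclaimPolicy: Retain          # now honored; Kubernetes defaults to Delete when omitted
parameters:
  blockPool: replicapool       # assumed pool name
  clusterNamespace: rook-ceph
  fstype: xfs                  # examples now default to XFS
```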
- The toolbox now runs on the `rook/ceph` image instead of creating a pod on a specialized `rook/ceph-toolbox` image.
- The device discovery interval can be configured with `ROOK_DISCOVER_DEVICES_INTERVAL` in operator.yaml.
- The number of mons can be changed by updating `mon.count` in the cluster CRD.
- OSDs are now provisioned with the `ceph-volume` tool when configuring devices, adding support for multiple OSDs per device. See the OSD configuration settings.
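Changing the mon count might look like this fragment of the cluster CRD (a sketch; the metadata values and the count of 5 are illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mon:
    count: 5                   # operator grows or shrinks the mon quorum to match
    allowMultiplePerNode: false
```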
- NFS servers can be deployed with the `nfsservers.nfs.rook.io` custom resource. See the NFS server user guide to get started with NFS.
- Cassandra clusters can be deployed with the `clusters.cassandra.rook.io` custom resource. See the user guide to get started.
- Rook no longer supports Kubernetes `1.7`. Users running Kubernetes `1.7` on their clusters are recommended to upgrade to Kubernetes `1.8` or higher. If you are using `kubeadm`, you can follow this guide to upgrade from Kubernetes `1.7` to `1.8`. If you are using `kops` or `kubespray` for managing your Kubernetes cluster, just follow the respective projects' upgrade guide.
The `kind` has been renamed for the following Ceph CRDs:

- `Cluster` --> `CephCluster`
- `Pool` --> `CephBlockPool`
- `Filesystem` --> `CephFilesystem`
- `ObjectStore` --> `CephObjectStore`
- `ObjectStoreUser` --> `CephObjectStoreUser`
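For example, a manifest that previously declared `kind: Cluster` would now read (metadata values are illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster        # formerly: kind: Cluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
```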
- The `rook-ceph-cluster` service account was renamed to `rook-ceph-osd`, as this service account only applies to OSDs.
- The `rook-ceph-osd` service account must be created before starting the operator on v0.9.
- The `serviceAccount` property has been removed from the cluster CRD.

Published by travisn about 6 years ago
Rook v0.8.3 is a patch release limited in scope and focusing on bug fixes.
Published by travisn about 6 years ago
Rook v0.8.2 is a patch release limited in scope and focusing on bug fixes.
Published by travisn about 6 years ago
Rook v0.8.1 is a patch release limited in scope and focusing on bug fixes.
- Pools are created with `min_size` set to the number of data chunks. (@galexrt)
- Apply the `placement` specified in the cluster CRD. (@rootfs)
- Use the `recreate` update strategy to avoid resource contention at restart. (@galexrt)

Published by jbw976 over 6 years ago
This release requires upgrading to the `v0.8` operator and beginning to use the new `rook.io/v1alpha2` and `ceph.rook.io/v1beta1` CRD types. Please follow the instructions in the upgrade user guide to successfully migrate your existing Rook cluster to the new release, as it has been updated with specific steps to help you upgrade to `v0.8`.

- CockroachDB can be deployed with the `cluster.cockroachdb.rook.io` custom resource. See the CockroachDB user guide to get started with CockroachDB.
- To deploy Minio with the `objectstore.minio.rook.io` custom resource, follow the steps in the Minio user guide.
- The `rook/ceph` image is now based on the ceph-container project's 'daemon-base' image so that Rook no longer has to manage installs of Ceph in the image. This image is based on CentOS 7.
- Ceph daemons are managed via `exec` of external tools.
- Rook can be deployed with `helm` or `kubectl`.
- `monCount` has been renamed to `count`, which has been moved into the `mon` spec. Additionally, the default if unspecified or `0` is now `3`.
- A new `allowMultiplePerNode` option (default `false`) was added in the `mon` spec.
- Added `nodeSelector` to the Rook Ceph operator Helm chart.
- Ceph examples have moved to `cluster/examples/kubernetes/ceph`. The yaml files that are provider-independent will still be found in the `cluster/examples/kubernetes` folder.
- The `apiVersion` of the Rook CRDs is now provider-specific, such as `ceph.rook.io/v1beta1` instead of `rook.io/v1alpha1`.
- When moving to the `ceph.rook.io` apiVersion you will need to take note of the new settings structure.
- The images are now `rook/ceph` and `rook/ceph-toolbox`. The steps in the upgrade user guide will automatically start using these new images for your cluster.
- Instead of the `rook-system` and `rook` namespaces, you will see `rook-ceph-system` and `rook-ceph`.
- The Ceph types are under `ceph.rook.io` instead of `rook.io`.
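Putting the renamed mon settings and the provider-specific `apiVersion` together, a v0.8 cluster spec might begin like this sketch (metadata values are illustrative assumptions):

```yaml
apiVersion: ceph.rook.io/v1beta1   # provider-specific, replacing rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mon:
    count: 3                       # renamed from monCount; defaults to 3 if unspecified or 0
    allowMultiplePerNode: false    # new option, default false
```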
The REST API service has been removed. All cluster configuration is now accomplished through the
CRDs or with the Ceph tools in the toolbox.
The tool rookctl
has been removed from the toolbox pod. Cluster status and configuration can be queried and changed with the Ceph tools.
Here are some sample commands to help with your transition.
| rookctl Command | Replaced by | Description |
| --- | --- | --- |
| `rookctl status` | `ceph status` | Query the status of the storage components |
| `rookctl block` | See the Block storage and direct Block config | Create, configure, mount, or unmount a block image |
| `rookctl filesystem` | See the Filesystem and direct File config | Create, configure, mount, or unmount a file system |
| `rookctl object` | See the Object storage config | Create and configure object stores and object users |
- The `rook.io/v1alpha1` API group has been deprecated. The types from `rook.io/v1alpha2` should now be used instead.
- `public-ipv4` in the ceph components has been deprecated; `public-ip` should now be used instead.
- `private-ipv4` in the ceph components has been deprecated; `private-ip` should now be used instead.

Published by jbw976 over 6 years ago
Rook v0.7.1 is a patch release limited in scope and focusing on bug fixes.
Published by jbw976 over 6 years ago
- The `rook/rook` image now uses the official Ceph packages instead of compiling from source. This ensures that Rook always ships the latest stable and supported Ceph version and reduces the developer burden for maintenance and building.
- Pools can be configured with a `crushRoot` property, rather than always using the `default` root. Configuration of the CRUSH hierarchy is necessary with the `ceph osd crush` commands in the toolbox.
- `AGENT_TOLERATION`: Toleration can be added to the Rook agent, such as to run on the master node.
- `FLEXVOLUME_DIR_PATH`: Flex volume directory can be overridden on the Rook agent.
- The `armhf` build of Rook has been removed. Ceph is not supported or tested on `armhf`. arm64 support continues.
- The `versionTag` property has been removed. The container version to launch in all pods will be the same as the version of the operator container.
- Added a `cluster.rook.io` finalizer. When a cluster is deleted, the operator will cleanup resources and remove the finalizer, which then allows K8s to delete the CRD.
- Removed the `ROOK_REPO_PREFIX` env var. All containers will be launched with the same image as the operator.
- The `rook-ceph-mgr` service on port `9283` with path `/` should be used instead for monitoring: https://rook.io/docs/rook/master/monitoring.html
Published by travisn almost 7 years ago
Rook v0.6.2 is a patch release limited in scope with one bug fix.
Published by jbw976 almost 7 years ago
Rook v0.6.1 is a patch release limited in scope and focusing on bug fixes.
Published by jbw976 almost 7 years ago
Rook v0.6 is a release focused on making progress towards our goal of running Rook everywhere Kubernetes runs. There is a new Rook volume plugin and Rook agent that integrates into Kubernetes to provide on demand block and shared filesystem storage for pods with a streamlined and seamless experience. We are keeping the Alpha status for v0.6 due to these two new components. The next release (v0.7) will be our first beta quality release.
Rook has continued its effort for deeper integration with Kubernetes by defining Custom Resource Definitions (CRDs) for both shared filesystems and object storage as well, allowing management of all storage types natively via kubectl
.
Reliability has been further improved with a focus on self-healing functionality to recover the cluster to a healthy state when key components have been detected as unhealthy. Investment in the automated test pipelines has been made to increase automated scenario and environment coverage across multiple versions of Kubernetes and cloud providers.
Finally, the groundwork has been laid for automated software upgrades in future releases by both completing the design and publishing a manual upgrade user guide in this release.
- Pools can be created with a `failureDomain` property.
- Rook components now run in the `rook-system` namespace.
- In the pool spec, `replication` was renamed to `replicated`, and `erasureCode` was renamed to `erasureCoded`.
- Mounts use the `mds_namespace` option to specify a CephFS. This is only available on kernel v4.7 or newer. On older kernels, if there is more than one filesystem in the cluster, the mount operation could be inconsistent. See this doc.
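A pool using the renamed settings and the new `failureDomain` property might look like this sketch (the metadata names are illustrative assumptions):

```yaml
apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  name: replicapool
  namespace: rook
spec:
  failureDomain: host    # spread replicas across hosts
  replicated:            # renamed from replication
    size: 3
```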
The `rookctl` client is being deprecated. With the deeper and more native integration of Rook with Kubernetes, `kubectl` now provides a rich Rook management experience on its own. For direct management of the Ceph storage cluster, the Rook toolbox provides full access to the Ceph tools.

Published by jbw976 about 7 years ago
Rook v0.5.1 is a patch release limited in scope and focusing on bug fixes and build improvements.
Published by bassam about 7 years ago
Rook v0.5 is a milestone release that improves reliability, adds support for newer versions of Kubernetes, picks up the latest stable release of Ceph (Luminous), and makes a number of architectural changes that pave the way to getting to Beta and adding support for other storage back-ends beyond Ceph.
Rook does not yet support upgrading a cluster in place. To upgrade from 0.4 to 0.5 we recommend you tear down your cluster and install Rook 0.5 fresh.
We now publish the rook containers to quay.io and Docker Hub. Docker Hub supports multi-arch containers, so a simple `docker pull rook/rook` will pull the right images for any of the supported architectures. We will continue to publish to quay.io for continuity.
Rook no longer runs as a single binary. For standalone mode we now require a container runtime. We now only publish containers for Rook daemons. Client tools are still released in binary form.
There is a new release site for Rook that contains all binaries, images, yaml files, test results etc.
If you shut down a Rook cluster without first unbinding persistent volumes, the volumes might be stuck indefinitely and require a host reboot to get cleared. See #376.
- The `rook` client tool was renamed to `rookctl`, and the `rookd` daemon was renamed to `rook`.
- The `rook-client` container is no longer built. Run the toolbox container for access to the `rookctl` tool.
- `amd64`, `arm`, and `arm64` are supported by the toolbox in addition to the daemons.

Published by jbw976 over 7 years ago
- Master builds are published as `quay.io/rook/rookd:master-latest`.
- `kubeadm` support
- Images are published to `quay.io/rook/rookd`.
The full set of completed issues can be found in the v0.4.0 milestone.
Published by jbw976 over 7 years ago
Published by travisn over 7 years ago
- Support for `useAllDevices: true`. Beware that this is overly aggressive to format and utilize the devices.

Published by jbw976 almost 8 years ago
Stability fixes for running reliably in a vagrant kubernetes cluster.