- Added a toolbox job for running a script with Ceph commands, similar to running commands in the Rook toolbox.
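A rough sketch of what such a Job can look like, assuming a hypothetical ConfigMap that holds the script and the `rook/ceph` image; the toolbox job example shipped with Rook also wires up the Ceph config and keyring (via an init container), which is omitted here for brevity:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: rook-ceph-toolbox-job        # illustrative name
  namespace: rook-ceph
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: script
          image: rook/ceph:v1.4.0    # match the Rook release you are running
          command: ["/bin/bash", "/etc/rook/script.sh"]
          volumeMounts:
            - name: script
              mountPath: /etc/rook
      volumes:
        - name: script
          configMap:
            name: rook-ceph-toolbox-script   # hypothetical ConfigMap containing script.sh with Ceph commands
```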
- The Ceph RBD Mirror daemon has been extracted to its own CRD and removed from the `CephCluster` CRD; see the rbd-mirror CRD.
- CephCluster CRD changes:
  - Converted to use the controller-runtime framework
  - Health checks and pod liveness probes can now be configured; refer to the health check section (a sketch follows this list)
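As an illustration of the new settings, the `healthCheck` portion of a `CephCluster` CR might look like the following minimal sketch; the field names and the mapping to the deprecated operator settings listed at the end of this section are assumptions based on the v1.4 health check docs, and the intervals shown are placeholders:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # ...existing cluster settings omitted...
  healthCheck:
    daemonHealth:
      mon:
        disabled: false
        interval: 45s      # mon health check frequency (formerly ROOK_MON_HEALTHCHECK_INTERVAL)
        timeout: 600s      # how long before a mon is marked out (formerly ROOK_MON_OUT_TIMEOUT)
      osd:
        disabled: false
        interval: 60s
      status:
        disabled: false
        interval: 60s      # frequency of the ceph status check (formerly ROOK_CEPH_STATUS_CHECK_INTERVAL)
    livenessProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false
```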
- The CephBlockPool CRD has a new field called `parameters`, which allows setting any property on a given pool (see the example below).
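A minimal sketch of a pool using `parameters`; `compression_mode` is just one example of a Ceph pool property that can be passed through this way:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
  parameters:
    # any Ceph pool property can be set here; compression_mode is one example
    compression_mode: aggressive
```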
- OSD changes:
  - OSDs on PVC now support multipath devices.
  - OSDs can now be provisioned using Ceph's Drive Groups definitions for Ceph Octopus v15.2.5+. See the docs for more details, and the sketch below.
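A sketch of Drive Groups declared in the `CephCluster` CR, assuming the `driveGroups` section introduced in Rook v1.4; the group name and device filter are illustrative:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # ...existing cluster settings omitted...
  driveGroups:
    - name: all-spinning-disks       # illustrative group name
      spec:
        # a standard Ceph Drive Group spec, handed through to ceph-volume
        data_devices:
          rotational: 1
      placement: {}                  # optional scheduling constraints for this group
```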
- Added admission controller support for CRD validations.
  - Support for Ceph CRDs is provided. Some validations for CephClusters are included, and additional validations can be added for other CRDs.
  - Can be extended to add support for other providers.
- OBC changes:
  - Updated the lib-bucket-provisioner version to support multithreading; the required change can be found in `operator.yaml`.
- CephObjectStore CRD changes:
  - Health is displayed in the Status field.
  - Supports connecting to external Ceph RADOS Gateways; refer to the external object section.
  - The CephObjectStore CR runs health checks on the object store endpoint; refer to the health check section and the sketch below.
  - The endpoint is now displayed in the Status field.
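A sketch of a `CephObjectStore` connected to external RADOS Gateways with the new endpoint health check, assuming the `externalRgwEndpoints` and `healthCheck.bucket` fields from the v1.4 object store docs; the IP and interval are placeholders:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: external-store
  namespace: rook-ceph
spec:
  gateway:
    port: 80
    externalRgwEndpoints:
      - ip: 192.168.100.10    # an existing external RGW endpoint
  healthCheck:
    bucket:
      disabled: false
      interval: 60s           # how often the endpoint is probed
```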
- Updated the base image from Alpine 3.8 to 3.12 due to CVEs.
- rbd-mirror daemons that were deployed through the CephCluster CR are no longer managed by it, as they now have their own CRD. To transition, you can inject the new rbd-mirror CR with the desired `count` of daemons and delete the previously managed rbd-mirror deployments manually; a minimal example follows.
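A minimal rbd-mirror CR along those lines (the name is illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: rbd-mirror
  namespace: rook-ceph
spec:
  # number of rbd-mirror daemon pods to run
  count: 1
```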
- The old monitoring settings used in `operator.yaml` (`ROOK_CEPH_STATUS_CHECK_INTERVAL`, `ROOK_MON_HEALTHCHECK_INTERVAL`, `ROOK_MON_OUT_TIMEOUT`) are now deprecated. Backward compatibility is maintained for existing deployments. These settings are now in the `CephCluster` CR; refer to the health check section.