This is the second release candidate for Tentacle.

Packages were built for Ubuntu 22.04, Ubuntu 24.04, CentOS 9, and Rocky 10
for this release candidate.

Feature highlights:

* RADOS:
  New features include the long-awaited performance optimizations (FastEC)
  for EC pools, including partial reads and partial writes. There is a new
  implementation of the Erasure Coding I/O code which provides substantial
  performance improvements and some capacity improvements. The new code is
  designed to optimize performance when using Erasure Coding with block
  storage (RBD) and file storage (CephFS), but it will also have some
  benefits for object (RGW) storage, in particular when using smaller
  objects. A new flag, allow_ec_optimizations, needs to be set on each pool
  to switch to the new code (see the example after this item); existing
  pools can be upgraded once the OSD and MON daemons have been updated, and
  there is no need to update the clients. The default erasure code plugin
  has been switched from Jerasure to ISA-L.
  Users can also expect to see BlueStore improvements such as compression
  and a new, faster WAL (write-ahead log).
  Additional improvements include switching all components to the faster
  OMAP iteration interface, bypassable ceph_assert()s, fixes for mclock bugs
  and configuration defaults, and testing improvements for dencoding
  verification.
  Data Availability Score: Users can now track a data availability score for
  each pool in their cluster. This feature is currently in tech preview. A
  pool is considered unavailable if any PG in the pool is not in the active
  state or if there are unfound objects; otherwise the pool is considered
  available. The score is updated every second by default. The feature is on
  by default.
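
  For example, a minimal sketch of switching an existing pool over, assuming
  the usual pool-setting command and a placeholder pool name ("ecpool"):

      # Opt an existing EC pool into the optimized (FastEC) code path once
      # all OSD and MON daemons have been upgraded.
      ceph osd pool set ecpool allow_ec_optimizations true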

* MGR: Highlights include the ability to force-disable always-on modules and
  the removal of the restful and zabbix modules (both deprecated since 2020).
  Note that the dashboard module's richer and better-maintained RESTful API
  can be used as an alternative to the restful module, and the prometheus
  module can be used as an alternative monitoring solution to zabbix.

* RGW: Multiple fixes: Lua scripts no longer run against health checks, and
  ETag values returned by S3 CopyPart, PostObject, and
  CompleteMultipartUpload responses are now properly quoted.

* RGW: IAM policy evaluation now supports conditions ArnEquals and ArnLike,
  along with their Not and IfExists variants.
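
  For illustration, the new conditions use standard IAM policy syntax; a
  bucket policy statement might look like the following (the resource and
  ARN values below are placeholders):

      {
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::mybucket/*",
        "Condition": {
          "ArnLike": { "aws:SourceArn": "arn:aws:sns:*::mytopic-*" }
        }
      }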

* RBD: New live migration features: RBD images can now be instantly
  imported from another Ceph cluster (native format) or from a wide
  variety of external sources/formats with the help of the new NBD
  stream and an appropriately capable NBD server such as `qemu-nbd`.
  Also added support for RBD namespace remapping while mirroring
  between Ceph clusters, new `rbd group info` and `rbd group snap info`
  commands, and an enhanced `rbd group snap ls` command. The
  `rbd device map` command now defaults to msgr2.
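
  As a quick illustration of the new and enhanced group commands (the pool,
  group, and snapshot names below are placeholders):

      # Show metadata for a group, list its snapshots, and inspect one
      # snapshot ("rbd/mygroup" and "snap1" are placeholder names).
      rbd group info rbd/mygroup
      rbd group snap ls rbd/mygroup
      rbd group snap info rbd/mygroup@snap1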

* CephFS: Directories may now be configured with case-insensitive or
  normalized directory entry names. This is an inheritable configuration, so
  it applies to an entire directory tree. For more information, see
  https://docs.ceph.com/en/latest/cephfs/charmap/
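
  As a rough sketch, the settings are applied as extended attributes on a
  directory of a mounted file system; the attribute name below is an
  illustrative assumption, so see the linked charmap documentation for the
  exact names and values:

      # Assumed attribute name -- mark a directory tree case-insensitive
      # (the mount path and directory are placeholders).
      setfattr -n ceph.dir.casesensitive -v 0 /mnt/cephfs/mydir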

* CephFS: Modifying the FS setting variable "max_mds" when a cluster is
  unhealthy now requires users to pass the confirmation flag
  (--yes-i-really-mean-it). This has been added as a precaution to warn
  users that modifying "max_mds" may not help with troubleshooting or
  recovery efforts and might instead further destabilize the cluster.
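
  For example (the file system name "cephfs" below is a placeholder):

      # On an unhealthy cluster, changing max_mds now requires explicit
      # confirmation.
      ceph fs set cephfs max_mds 2 --yes-i-really-mean-it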

* CephFS: EOPNOTSUPP (Operation not supported) is now returned by the CephFS
  FUSE client for `fallocate` in the default case (i.e. mode == 0), since
  CephFS does not support disk space reservation. The only supported flags
  are `FALLOC_FL_KEEP_SIZE` and `FALLOC_FL_PUNCH_HOLE`.
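
  The behavior can be observed with the util-linux fallocate(1) utility on a
  CephFS FUSE mount (the mount path below is a placeholder):

      # Create a test file to operate on.
      dd if=/dev/zero of=/mnt/cephfs/testfile bs=1M count=4

      # Plain space reservation (mode == 0) now fails with
      # "Operation not supported" ...
      fallocate --length 8M /mnt/cephfs/testfile

      # ... while punching a hole (FALLOC_FL_PUNCH_HOLE, which implies
      # FALLOC_FL_KEEP_SIZE) is still supported.
      fallocate --punch-hole --offset 0 --length 1M /mnt/cephfs/testfile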

* Dashboard: Added support for NVMe/TCP (gateway groups, multiple
  namespaces), multi-cluster management, OAuth2 integration, and enhanced
  RGW/SMB features including multi-site automation, tiering, policies,
  lifecycles, notifications, and granular replication.

* Integrated SMB support: Ceph clusters now support an smb manager module
  that works similarly to the existing nfs subsystem. The new smb support
  allows the Ceph cluster to automatically create Samba-backed SMB file
  shares connected to CephFS. The smb module can configure either basic
  Active Directory domain authentication or standalone user authentication.
  The Ceph cluster can host one or more virtual smb clusters, which can be
  truly clustered using Samba's CTDB technology. The smb module requires a
  cephadm-enabled Ceph cluster and deploys container images provided by the
  samba-container project. The Ceph dashboard can be used to configure smb
  clusters and shares. A new cephfs-proxy daemon is automatically deployed
  to improve scalability and memory usage when connecting Samba to CephFS.
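
  A minimal getting-started sketch, assuming a cephadm-managed cluster; the
  `ceph smb` subcommand arguments shown are illustrative assumptions, so see
  the smb module documentation for the exact forms:

      # Enable the new manager module.
      ceph mgr module enable smb

      # Illustrative only: create a standalone-auth smb cluster and expose a
      # CephFS path as a share (cluster, share, volume, and path names are
      # placeholders).
      ceph smb cluster create mycluster user --define-user-pass=alice%secret
      ceph smb share create mycluster share1 cephfs /shares/share1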

* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-20.1.1.tar.gz
* Containers at https://quay.io/repository/ceph/ceph
* For packages, see https://docs.ceph.com/en/latest/install/get-packages/
* Release git sha1: dd9c546413d50a90668289255a256022ea21f0c0
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
