Hello,

my cluster is currently showing a metadata imbalance. Normally, all OSDs
have around 23 GB of metadata (META column), but 4 OSDs out of 56 have 34 GB.
Compacting reduces the metadata for some OSDs, but not for others, and the
OSDs where the compaction worked quickly grow back to 34 GB.
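For reference, the compaction is triggered roughly like this (osd.0 is just
an example id), and afterwards I check the META column in ceph osd df again:

ceph tell osd.0 compact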

Our cluster configuration:
* 8 nodes, each with 6 HDD OSDs and 1 SSD used for block.db and WAL
* k=4 m=2 EC
* v14.2.14

Normal OSD:
ID CLASS WEIGHT   REWEIGHT SIZE   RAW USE DATA    OMAP    META   AVAIL   %USE  VAR  PGS STATUS
40   hdd 11.09470  1.00000 11 TiB 8.6 TiB 8.4 TiB 1.3 GiB 23 GiB 2.5 TiB 77.15 1.01 130     up

Big OSD:
ID CLASS WEIGHT   REWEIGHT SIZE   RAW USE DATA    OMAP    META   AVAIL   %USE  VAR  PGS STATUS
 0   hdd 11.09499  1.00000 11 TiB 8.6 TiB 8.4 TiB 1.8 GiB 30 GiB 2.5 TiB 77.59 1.02 130     up

There are 56 OSDs in the cluster; the 4 bigger ones are all on different
hosts.
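In case it helps, this is roughly how I list the metadata usage per OSD to
spot the big ones (just a sketch; it assumes the kb_used_meta field from the
JSON output of ceph osd df on our Nautilus version):

# top OSDs by BlueStore metadata (kb_used_meta is in KiB)
ceph osd df -f json | jq -r '.nodes[] | [.id, .kb_used_meta] | @tsv' | sort -k2 -nr | head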

Why is that? Is it dangerous, or could it lead to problems such as
performance degradation?

Thanks,

Paul