Hi Eugen.

Sorry for my hasty and incomplete report. We did not remove any pool, and garbage collection is not in progress:

radosgw-admin gc list
[]
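
For completeness, the unexpired GC entries could also be listed (a sketch, assuming the Pacific radosgw-admin syntax):

radosgw-admin gc list --include-all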


However, I have noticed that several placement groups have been sitting in the backfill_wait state for some time.


  cluster:
    id:     a12aa2d2-fae7-df35-ea2f-3de23100e345
    health: HEALTH_WARN
            3492 pgs not deep-scrubbed in time
            3830 pgs not scrubbed in time
            1 pool(s) do not have an application enabled

  services:
    mon: 3 daemons, quorum mon001-clx,mon002-clx,mon003-clx (age 10h)
    mgr: mon002-clx(active, since 10h), standbys: mon001-clx, mon003-clx
    mds: 1/1 daemons up, 1 standby
    osd: 2438 osds: 2433 up (since 12h), 2361 in (since 12h); 382 remapped pgs
    rgw: 31 daemons active (4 hosts, 2 zones)

  data:
    volumes: 1/1 healthy
    pools:   42 pools, 9521 pgs
    objects: 1.87G objects, 6.9 PiB
    usage:   14 PiB used, 16 PiB / 29 PiB avail
    pgs:     1656117639/32580808518 objects misplaced (5.083%)
             9134 active+clean
             375  active+remapped+backfill_wait
             7    active+remapped+backfilling
             4    active+clean+scrubbing+deep
             1    active+clean+scrubbing

  io:
    client:   754 KiB/s rd, 380 MiB/s wr, 131 op/s rd, 358 op/s wr
    recovery: 324 MiB/s, 81 objects/s
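
In case it helps, the waiting PGs and the per-OSD fill levels can be listed with something like the following (a sketch assuming the Pacific CLI; the grep is just a quick filter on the ssd device class):

ceph pg ls backfill_wait
ceph osd df tree | grep ssd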






ceph daemon mon.mon001-clx config show | grep full
    "mon_cache_target_full_warn_ratio": "0.660000",
    "mon_osd_backfillfull_ratio": "0.900000",
    "mon_osd_full_ratio": "0.950000",
    "mon_osd_nearfull_ratio": "0.850000",
    "mon_osdmap_full_prune_enabled": "true",
    "mon_osdmap_full_prune_interval": "10",
    "mon_osdmap_full_prune_min": "10000",
    "mon_osdmap_full_prune_txsize": "100",
    "osd_debug_skip_full_check_in_backfill_reservation": "false",
    "osd_debug_skip_full_check_in_recovery": "false",
    "osd_failsafe_full_ratio": "0.970000",
    "osd_pool_default_cache_target_full_ratio": "0.800000",
    "paxos_stash_full_interval": "25",


ceph osd dump | grep ratio
full_ratio 0.9
backfillfull_ratio 0.85
nearfull_ratio 0.8
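
If these runtime ratios ever need adjusting, that happens on the OSD map rather than in the mon config (a sketch only; the values shown are the upstream defaults, not a recommendation):

ceph osd set-nearfull-ratio 0.85
ceph osd set-backfillfull-ratio 0.90
ceph osd set-full-ratio 0.95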



Thank you

Michal



On 4/7/23 21:09, Eugen Block wrote:
Hi,

we know nothing about the cluster status, so it's really hard to say. Is backfill going on? Did you delete a pool, and is garbage collection still in progress?


Quoting Michal Strnad <michal.str...@cesnet.cz>:

Hi all,

Our SSD disks are filling up even though there is not a single placement group on them. How is that possible, please?

ID    CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP   META     AVAIL    %USE   VAR   PGS  STATUS
1997  ssd    1.15269  1.00000   1.2 TiB  943 GiB  942 GiB  6 KiB  1.5 GiB  237 GiB  79.90  1.71   0    up

The cluster runs Pacific 16.2.7.

Thank you

Michal


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
