Hello Mehmet,

On Sat, Sep 3, 2022 at 1:50 PM <c...@elchaka.de> wrote:

> Is ceph still backfilling? What is the actual output of ceph -s?
>

Yes:

[trui...@ceph02.eun ~]$ sudo ceph --cluster xxx -s
  cluster:
    id:     91ba1ea6-bfec-4ddb-a8b5-9faf842f22c3
    health: HEALTH_WARN
            1 backfillfull osd(s)
            3 pool(s) backfillfull
            Low space hindering backfill (add storage if this doesn't resolve itself): 3 pgs backfill_toofull

  services:
    mon: 5 daemons, quorum a,b,c,d,e (age 6d)
    mgr: b(active, since 45h), standbys: a, c, d, e
    mds: registration_docs:1 {0=b=up:active} 3 up:standby
    osd: 32 osds: 32 up (since 19M), 32 in (since 3y); 12 remapped pgs

  data:
    pools:   3 pools, 1280 pgs
    objects: 14.32M objects, 23 TiB
    usage:   69 TiB used, 47 TiB / 116 TiB avail
    pgs:     543262/42962769 objects misplaced (1.264%)
             1268 active+clean
             9    active+remapped+backfilling
             3    active+remapped+backfill_toofull

  io:
    client:   5.0 MiB/s rd, 296 KiB/s wr, 10 op/s rd, 0 op/s wr
    recovery: 73 MiB/s, 36 keys/s, 44 objects/s
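
In case it's relevant, I've been checking the per-OSD fill levels and the configured ratios with something like the following (just the standard commands, nothing specific to this cluster):

  sudo ceph --cluster xxx osd df tree            # per-OSD %USE and PG counts
  sudo ceph --cluster xxx osd dump | grep ratio  # full_ratio / backfillfull_ratio / nearfull_ratio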


>
> If not backfilling, it is strange that you only have 84 pgs on osd.11 but
> 93.59 percent in use...
>

This morning it wasn't backfilling, but after I did another `osd reweight-by-utilization`, it started again.
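
For reference, the invocation is along these lines (the threshold / max-change / max-OSD values below are only illustrative, not necessarily what I used):

  sudo ceph --cluster xxx osd test-reweight-by-utilization 120 0.05 10   # dry run, only shows the proposed reweights
  sudo ceph --cluster xxx osd reweight-by-utilization 120 0.05 10        # actually applies them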


>
> Are you able to find a pg on 11 which is too big?
> Perhaps pg query will help to find. Otherwise you should lower the weight
> of the osd...
>

It's a Nautilus cluster, and it looks like the pg query command doesn't exist there. How would I find the large pg(s) on osd.11, and how could I force them off that OSD?
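
To make the question more concrete, this is the sort of thing I was imagining (untested guesses on my part; the pg id and target OSD below are placeholders):

  # list the PGs currently mapped to osd.11 and look at the BYTES column
  sudo ceph --cluster xxx pg ls-by-osd 11

  # then either lower the weight of osd.11, e.g.
  sudo ceph --cluster xxx osd reweight osd.11 0.90

  # or, if upmap can be used on this cluster, map a specific pg away from osd.11, e.g.
  sudo ceph --cluster xxx osd pg-upmap-items <pgid> 11 <other-osd-id>

Is that roughly the right direction?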
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
