I am observing the same issue after adding two new OSD hosts to an almost empty
Mimic cluster.

> Let's try to restrict discussion to the original thread
> "backfill_toofull while OSDs are not full" and get a tracker opened up
> for this issue.

Is this the issue you are referring to? https://tracker.ceph.com/issues/41255

I have several larger rebalance operations ahead and will probably keep seeing this
for a couple of days. If there is any information (logs, etc.) I can provide, please
let me know; I have also listed a few diagnostics I could pull after the status
output below. Status right now is:

[root@ceph-01 ~]# ceph status
  cluster:
    id:     e4ece518-f2cb-4708-b00f-b6bf511e91d9
    health: HEALTH_ERR
            15227159/90990337 objects misplaced (16.735%)
            Degraded data redundancy (low space): 64 pgs backfill_toofull
            too few PGs per OSD (29 < min 30)

  services:
    mon: 3 daemons, quorum ceph-01,ceph-02,ceph-03
    mgr: ceph-01(active), standbys: ceph-03, ceph-02
    mds: con-fs-1/1/1 up  {0=ceph-12=up:active}, 1 up:standby-replay
    osd: 208 osds: 208 up, 208 in; 273 remapped pgs

  data:
    pools:   7 pools, 790 pgs
    objects: 9.45 M objects, 17 TiB
    usage:   21 TiB used, 1.4 PiB / 1.4 PiB avail
    pgs:     15227159/90990337 objects misplaced (16.735%)
             517 active+clean
             190 active+remapped+backfill_wait
             64  active+remapped+backfill_wait+backfill_toofull
             19  active+remapped+backfilling

  io:
    client:   893 KiB/s rd, 6.3 MiB/s wr, 208 op/s rd, 306 op/s wr
    recovery: 298 MiB/s, 156 objects/s
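
In case it helps, here is a sketch of the additional diagnostics I could pull
(all standard ceph CLI commands; happy to run anything else that is useful):

ceph health detail                                (which PGs are flagged backfill_toofull)
ceph osd df tree                                  (per-OSD utilization, to show nothing is actually near full)
ceph osd dump | grep ratio                        (the configured full/backfillfull/nearfull ratios)
ceph pg dump pgs_brief | grep backfill_toofull    (the affected PGs and their up/acting sets)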

Best regards,

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
