Hi Ben,

Please see this thread for a possible workaround:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/PWHG6QJ6N2TJEYD2U4AXJAJ23CRPJG4E/#7ZMBM23GXYFIGY52ZWJDY5NUSYSDSYL6
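
For reference, and only as an educated guess at what that thread covers (not confirmed from it): Quincy defaults to the mClock scheduler, whose profile deliberately throttles recovery in favour of client I/O, and switching the profile for the duration of the backfill is a common workaround. A minimal sketch:

  # check which mClock profile the OSDs are currently using
  ceph config get osd osd_mclock_profile
  # temporarily favour recovery/backfill over client traffic
  ceph config set osd osd_mclock_profile high_recovery_ops
  # once the PG is active+clean again, drop back to the default profile
  ceph config rm osd osd_mclock_profile

Note that while a built-in mClock profile is active, the classic osd_max_backfills / osd_recovery_max_active knobs may be capped or ignored, so adjusting those alone often has no visible effect.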

Sent from my iPad

On 10 Oct 2023, at 22:26, Ben <ruidong....@gmail.com> wrote:

Dear cephers,

With one OSD down (200 GB of data on a 9.1 TB drive), rebalancing has been running for 3 hours and is still in progress. Client bandwidth can reach as high as 200 MB/s, yet even with little client request throughput, recovery proceeds at only a couple of MB/s. I wonder whether there is any configuration to tune for improvement. The cluster runs Quincy 17.2.5, deployed by cephadm. This slowness can do harm during peak hours of usage.

Best wishes,

Ben
-------------------------------------------------
   volumes: 1/1 healthy
   pools:   8 pools, 209 pgs
   objects: 93.04M objects, 4.8 TiB
   usage:   15 TiB used, 467 TiB / 482 TiB avail
   pgs:     1206837/279121971 objects degraded (0.432%)
            208 active+clean
            1   active+undersized+degraded+remapped+backfilling

 io:
   client:   80 KiB/s rd, 420 KiB/s wr, 12 op/s rd, 29 op/s wr
   recovery: 6.2 MiB/s, 113 objects/s