Thanks. I tried it and it improved the situation, roughly doubling the speed
to about 10 MB/s. Good catch on the fix!

It would be good to reach around 50 MB/s of recovery, as far as the cluster
infrastructure can support it in my case. Are there other constraints on
resource utilization for recovery that I am not aware of?
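
For context, these are the recovery-related knobs I am aware of so far. This
is only a minimal sketch with placeholder values, and since Quincy uses the
mClock scheduler some of the classic throttles below may be ignored depending
on the active profile, so treat it as an illustration rather than a
recommendation:

    # check which mClock profile the OSDs are using (Quincy defaults to high_client_ops)
    ceph config get osd osd_mclock_profile

    # temporarily favor recovery over client I/O
    ceph config set osd osd_mclock_profile high_recovery_ops

    # classic backfill/recovery throttles (example values only)
    ceph config set osd osd_max_backfills 4
    ceph config set osd osd_recovery_max_active 8

    # switch back once the cluster is healthy again
    ceph config set osd osd_mclock_profile high_client_ops

The recovery rate can be checked again with "ceph -s" after changing any of
these.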

胡 玮文 <huw...@outlook.com> wrote on Wed, Oct 11, 2023 at 00:18:

> Hi Ben,
>
> Please see this thread for a possible workaround:
> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/PWHG6QJ6N2TJEYD2U4AXJAJ23CRPJG4E/#7ZMBM23GXYFIGY52ZWJDY5NUSYSDSYL6
>
> Sent from my iPad
>
> On Oct 10, 2023, at 22:26, Ben <ruidong....@gmail.com> wrote:
>
> Dear cephers,
>
> With one OSD down (200 GB out of 9.1 TB of data), rebalancing has been
> running for 3 hours and is still in progress. Client bandwidth can go as
> high as 200 MB/s, yet with little client request throughput, recovery runs
> at only a couple of MB/s. I wonder if there is configuration I could tune
> to improve this. The cluster runs Quincy 17.2.5, deployed by cephadm. The
> slowness can do harm during peak hours of usage.
>
> Best wishes,
>
> Ben
> -------------------------------------------------
>    volumes: 1/1 healthy
>    pools:   8 pools, 209 pgs
>    objects: 93.04M objects, 4.8 TiB
>    usage:   15 TiB used, 467 TiB / 482 TiB avail
>    pgs:     1206837/279121971 objects degraded (0.432%)
>             208 active+clean
>             1   active+undersized+degraded+remapped+backfilling
>
>  io:
>    client:   80 KiB/s rd, 420 KiB/s wr, 12 op/s rd, 29 op/s wr
>    recovery: 6.2 MiB/s, 113 objects/s
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
