Dear cephers,
with one OSD down (200 GB of 9.1 TB of data), rebalance has been running
for 3 hours and is still in progress. Client bandwidth can go as high as
200 MB/s, yet with little client request throughput, recovery proceeds at
only a couple of MB/s. I wonder if there is configuration we could tune to
improve this. The cluster runs Quincy 17.2.5.
We notice extremely slow performance whenever remapping is necessary. We didn't
do anything special other than assigning the correct device_class (ssd). When
checking ceph status, the number of objects recovering hovers around 17-25
(observed with watch -n 1 -c ceph status).
How can we increase the recovery speed?
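
From reading the Quincy docs, our guess (not yet applied, so please correct
us if this is wrong) is that the mClock scheduler, which is the default in
Quincy, throttles recovery in favor of client I/O, and that switching the
profile is the intended knob:

    # Prioritize recovery over client I/O while the rebalance runs
    ceph config set osd osd_mclock_profile high_recovery_ops

    # Switch back to the Quincy default once recovery completes
    ceph config set osd osd_mclock_profile high_client_ops

We also saw the classic knobs (osd_max_backfills, osd_recovery_max_active),
but our understanding is that mClock overrides those unless explicitly told
otherwise. Is the profile change the right direction?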