After letting the balancer run all night, I have recovered 35TB of
additional available space. Average used space across all OSDs is still
63%, but now with a range of 61-64%, so much better. The client is
reporting 144TB total space, which is closer to the 168TB I would
expect (504TB total raw space with 3x replication).
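
For anyone wanting to watch the spread on their own cluster, the
per-OSD utilization (the %USE column) is visible with:

# ceph osd df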
I should also say that I enabled the balancer with upmap mode, since the
only client (the backup server) is also running nautilus.
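
In case it helps anyone else, the sequence I used was roughly this
(upmap requires all clients to be at luminous or newer, which is why
the backup server mattered; the min-compat-client step may already be
set on an existing cluster):

# ceph osd set-require-min-compat-client luminous
# ceph balancer mode upmap
# ceph balancer on
# ceph balancer status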
Seth
On 12/6/21 4:09 PM, Seth Galitzer wrote:
I'm running ceph 14.2.20 on CentOS 7, installed from the official
ceph-nautilus repo. I started a manual rebalance run and will set it
back to auto once that is done. But I'm already seeing a cluster score
of 0.015045, so I'm not sure what more it can do.
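
For reference, a manual run with the balancer module looks roughly
like this (the plan name is just an example):

# ceph balancer eval              <- current cluster score
# ceph balancer optimize myplan
# ceph balancer eval myplan       <- expected score after the plan
# ceph balancer execute myplan
# ceph balancer on                <- back to automatic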
Thanks.
Seth
Anthony,
Thanks for the input. I've got my command outputs below. As for the
balancer, I didn't realize it was off. Another colleague had suggested
this previously, but I didn't get very far with it before. I didn't
think much about it at the time since everything automatically
rebalanced when

# ceph osd crush rule dump
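
For anyone checking their own cluster, whether the balancer is enabled
and in which mode shows up in:

# ceph balancer status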