[ceph-users] Re: Advice on balancing data across OSDs

2022-10-24 Thread Joseph Mundackal
Quick napkin math: for your 3-way replicated pool, e.g. pool 28, you have 9.9 TiB across 256 PGs ~= 10137 GiB across 256 PGs ~= 39 GiB per PG. For 4+2 EC on pool 51, 32 TiB across 128 PGs ~= 32768 GiB across 128 PGs ~= 256 GiB per PG; with the 4+2 profile that is spread across 4 data parts ~= 64 GiB per part.
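A quick way to sanity-check that arithmetic, as a plain Python sketch. The pool figures are the ones quoted in the thread; everything else is illustrative:

def gb_per_pg(stored_tib, pg_num, data_shards=1):
    """Approximate data carried per PG (per data shard, for EC), in GiB."""
    return stored_tib * 1024 / pg_num / data_shards

# Pool 28: 3-way replicated, 9.9 TiB over 256 PGs
print(gb_per_pg(9.9, 256))      # ~39.6 GiB per PG

# Pool 51: EC 4+2, 32 TiB over 128 PGs
print(gb_per_pg(32, 128))       # 256.0 GiB per PG
print(gb_per_pg(32, 128, 4))    # 64.0 GiB per data shard (k=4)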

[ceph-users] Re: Advice on balancing data across OSDs

2022-10-24 Thread Josh Baergen
Hi Tim, Ah, it didn't sink in for me at first how many pools there were here. I think you might be hitting the issue that the author of https://github.com/TheJJ/ceph-balancer ran into, and thus their balancer might help in this case. Josh On Mon, Oct 24, 2022 at 8:37 AM Tim Bishop wrote: > > Hi

[ceph-users] Re: Advice on balancing data across OSDs

2022-10-24 Thread Tim Bishop
Hi Joseph, Here are some of the larger pools. Notably, the largest (pool 51, 32 TiB of CephFS data) doesn't have the highest number of PGs.

POOL     ID  PGS  STORED   OBJECTS  USED    %USED  MAX AVAIL
pool28   28  256  9.9 TiB  2.61M    30 TiB  43.28   13 TiB
pool29

[ceph-users] Re: Advice on balancing data across OSDs

2022-10-24 Thread Tim Bishop
Hi Josh,

On Mon, Oct 24, 2022 at 07:20:46AM -0600, Josh Baergen wrote:
> > I've included the osd df output below, along with pool and crush rules.
>
> Looking at these, the balancer module should be taking care of this
> imbalance automatically. What does "ceph balancer status" say?

# ceph balan
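For reference, a minimal sketch of the same check in script form. It assumes the JSON output of "ceph balancer status" (via --format json) carries "active" and "mode" keys, as it does in recent Ceph releases; key names may differ on older versions:

import json
import subprocess

# Ask the mgr for balancer state; --format json gives the machine-readable form.
out = subprocess.run(
    ["ceph", "balancer", "status", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
status = json.loads(out)

print("active:", status.get("active"))   # should be True
print("mode:  ", status.get("mode"))     # "upmap" is the usual choice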

[ceph-users] Re: Advice on balancing data across OSDs

2022-10-24 Thread Joseph Mundackal
Hi Tim, You might want to check your pool utilization and see if there are enough PGs in that pool. A higher GB-per-PG ratio can result in this scenario. I am also assuming that you have the balancer module turned on; "ceph balancer status" should tell you that as well. If you have enough PGs in the bigger
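One way to put numbers on "enough PGs": pick a target amount of data per PG and round the implied count up to a power of two. A rough sketch only; the ~100 GiB/PG target below is an illustrative assumption, not an official recommendation, and on recent releases the pg_autoscaler can make this call for you:

import math

def suggested_pg_num(stored_gib, target_gib_per_pg=100.0):
    """Smallest power-of-two PG count keeping PGs under the target size."""
    needed = max(1, math.ceil(stored_gib / target_gib_per_pg))
    return 1 << (needed - 1).bit_length()

# Pool 51 from the thread: 32 TiB stored in 128 PGs today
print(suggested_pg_num(32 * 1024))   # -> 512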

[ceph-users] Re: Advice on balancing data across OSDs

2022-10-24 Thread Anthony D'Atri
Hey, Tim. Visualization is great for getting a better sense of OSD fillage than a table of numbers. A Grafana panel works, or a quick script: grab this one from CERN: https://gitlab.cern.ch/ceph/ceph-scripts/-/blob/master/tools/histogram.py
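In the same spirit, a small stand-alone sketch (not the CERN script itself) that bins OSD fullness into a text histogram. It assumes "ceph osd df --format json" returns a "nodes" list with a "utilization" percentage per OSD, as current releases do:

import json
import subprocess
from collections import Counter

raw = subprocess.run(
    ["ceph", "osd", "df", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout

utilization = [n["utilization"] for n in json.loads(raw)["nodes"]]
bins = Counter(int(u // 5) * 5 for u in utilization)   # 5%-wide buckets

for lo in sorted(bins):
    print(f"{lo:3d}-{lo + 5:3d}%  {'#' * bins[lo]}  ({bins[lo]} OSDs)")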

[ceph-users] Re: Advice on balancing data across OSDs

2022-10-24 Thread Josh Baergen
Hi Tim,

> I've included the osd df output below, along with pool and crush rules.

Looking at these, the balancer module should be taking care of this imbalance automatically. What does "ceph balancer status" say?

Josh