PGs are not perfectly balanced per OSD, but I think that's expected/OK
since crush_compat_metrics is set to bytes? Though I realize as I type
this that what I really want is equal percent-used, which may not be
possible given the slight variation in disk sizes (see below) in my
cluster?
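A quick way to quantify that spread is to pull the %USE column out of
'ceph osd df'. A minimal sketch, run here against hypothetical sample
output (the column positions match Mimic's layout but differ across
releases; on a live cluster you would pipe 'ceph osd df' straight into
the awk step):

```shell
# Hypothetical sample of 'ceph osd df' output (Mimic column layout:
# ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS).
cat <<'EOF' > osd_df.txt
ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
 0 hdd   7.27730 1.00000  7.3TiB  4.9TiB  2.4TiB  67.3  1.02 171
 1 hdd   7.27730 1.00000  7.3TiB  4.6TiB  2.7TiB  63.5  0.96 162
 2 hdd   7.45479 1.00000  7.5TiB  4.9TiB  2.6TiB  65.8  1.00 168
EOF

# Report min, max, and spread of %USE (column 8) across all OSDs.
awk 'NR > 1 { u = $8
              if (min == "" || u < min) min = u
              if (u > max) max = u }
     END { printf "min=%.1f max=%.1f spread=%.1f\n", min, max, max - min }' osd_df.txt
```

On the sample data above this prints `min=63.5 max=67.3 spread=3.8`; a
shrinking spread over time is a reasonable signal the balancer is doing
its job.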
On Thu, 6 Jun 2019 at 03:01, Josh Haft wrote:
>
> Hi everyone,
>
> On my 13.2.5 cluster, I recently enabled the ceph balancer module in
> crush-compat mode.
Why did you choose crush-compat mode? Wouldn't you rather try another mode instead?
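For reference, the other built-in balancer mode is upmap, which moves
individual PGs via pg-upmap entries rather than adjusting a compat
weight-set, and usually produces a more even distribution. A rough
sketch of switching, not tested against this cluster; note that upmap
requires every client to speak the Luminous protocol or newer:

```shell
# Sketch only: switch the balancer from crush-compat to upmap mode.
# upmap refuses to work unless all clients are Luminous or newer.
ceph osd set-require-min-compat-client luminous

ceph balancer off          # stop the automatic balancer first
ceph balancer mode upmap   # switch from crush-compat to upmap
ceph balancer on           # re-enable automatic balancing
ceph balancer status       # confirm the active mode and any plans
```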
What's your 'ceph osd df tree' output? Do the OSDs have the expected number of PGs?
On Fri, 7 Jun 2019 at 21:23, Josh Haft wrote:
95% of usage is CephFS. Remaining is split between RGW and RBD.
On Wed, Jun 5, 2019 at 3:05 PM Gregory Farnum wrote:
I think the mimic balancer doesn't include omap data when trying to
balance the cluster. (Because it doesn't get usable omap stats from
the cluster anyway; in Nautilus I think it does.) Are you using RGW or
CephFS?
-Greg
On Wed, Jun 5, 2019 at 1:01 PM Josh Haft wrote:
Hi everyone,
On my 13.2.5 cluster, I recently enabled the ceph balancer module in
crush-compat mode. A couple manual 'eval' and 'execute' runs showed
the score improving, so I set the following and enabled the auto
balancer.
mgr/balancer/crush_compat_metrics:bytes # from
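For anyone following along, a sketch of how a crush-compat setup like
this is typically applied and evaluated on Mimic, assuming the
mgr/balancer options are set through the config-key store (as the line
above suggests) and using a hypothetical plan name 'myplan':

```shell
# Sketch: configure and test-drive the crush-compat balancer on Mimic.
# Balancer module options live in the config-key store on this release.
ceph config-key set mgr/balancer/crush_compat_metrics bytes

ceph balancer mode crush-compat
ceph balancer eval              # score of the current distribution (lower is better)
ceph balancer optimize myplan   # compute a plan named 'myplan'
ceph balancer eval myplan       # score the cluster would have after 'myplan'
ceph balancer execute myplan    # apply the plan once
ceph balancer on                # or let the automatic balancer take over
```

The manual eval/optimize/execute cycle mirrors what the automatic
balancer does on its own once enabled.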