[ceph-users] Re: Very uneven OSD utilization

2021-05-25 Thread Sergei Genchev
Thank you Janne,
I will give upmap a shot. Need to try it first in some non-prod
cluster. Non-prod clusters are doing much better for me even though
they have a lot fewer OSDs.
Thanks everyone!

On Tue, May 25, 2021 at 12:48 AM Janne Johansson  wrote:
>
> I would suggest enabling the upmap balancer if you haven't done that,
> it should help even data out. Even if it would not do better than some
> manual rebalancing scheme, it will at least do it nicely in the
> background some 8 PGs at a time so it doesn't impact client traffic.
>
> It looks very weird to have such an uneven distribution even while having
> lots of PGs (which was my first guess =)
>
> On Tue, May 25, 2021 at 03:47 Sergei Genchev  wrote:
> >
> > Hello,
> > I am running a Nautilus cluster with 5 OSD nodes/90 disks that is
> > exclusively used for S3. My disks are identical, but utilization
> > ranges from 9% to 82%, and I am starting to get backfill_toofull
> > errors even though I have only used 150TB out of 650TB of capacity.
> >  - Other than manually CRUSH-reweighting OSDs, is there any other
> > option for me?
> >  - What would cause this uneven distribution? Is there some
> > documentation on how to track down what's going on?
> > Output of 'ceph osd df' is at https://pastebin.com/17HWFR12
> >  Thank you!
>
>
>
> --
> May the most significant bit of your life be positive.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Very uneven OSD utilization

2021-05-24 Thread Janne Johansson
I would suggest enabling the upmap balancer if you haven't done that,
it should help even data out. Even if it would not do better than some
manual rebalancing scheme, it will at least do it nicely in the
background some 8 PGs at a time so it doesn't impact client traffic.
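
For reference, a minimal sketch of turning it on for a Nautilus cluster (assuming
all clients are Luminous or newer; verify with "ceph features" first):

  ceph osd set-require-min-compat-client luminous   # upmap needs Luminous+ clients
  ceph balancer mode upmap
  ceph balancer on
  ceph balancer status                              # confirm it is active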

It looks very weird to have such an uneven distribution even while having
lots of PGs (which was my first guess =)
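
A quick way to see how uneven the spread is (a generic sketch, not taken from the
pastebin above) is to compare the %USE/VAR/PGS columns per OSD and let the
balancer score the current layout:

  ceph osd df tree     # per-OSD utilization, VAR and PGS columns
  ceph balancer eval   # lower score means a more even distribution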

On Tue, May 25, 2021 at 03:47 Sergei Genchev  wrote:
>
> Hello,
> I am running a Nautilus cluster with 5 OSD nodes/90 disks that is
> exclusively used for S3. My disks are identical, but utilization
> ranges from 9% to 82%, and I am starting to get backfill_toofull
> errors even though I have only used 150TB out of 650TB of capacity.
>  - Other than manually CRUSH-reweighting OSDs, is there any other
> option for me?
>  - What would cause this uneven distribution? Is there some
> documentation on how to track down what's going on?
> Output of 'ceph osd df' is at https://pastebin.com/17HWFR12
>  Thank you!



-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io