Could you share the output of

    ceph osd pool ls detail

?

This way we can see how the pools are configured and help recommend if
pg_autoscaler is worth enabling.

Cheers, Dan

On Tue, Jun 16, 2020 at 11:51 AM Boris Behrens <b...@kervyn.de> wrote:
>
> I read about the "warm" option and we are already discussing this.
>
> I don't know if the PGs need tuning. I also don't know what the impact
> would be, or whether enabling it would make any difference for us.
>
> Our last Ceph admin, who has since left, created a ticket for this, and
> I am not particularly familiar with Ceph. So I need to work on the
> ticket while trying not to trash our Ceph storage :-)
>
> On Tue, Jun 16, 2020 at 11:39 AM Dan van der Ster
> <d...@vanderster.com> wrote:
> >
> > Hi,
> >
> > I agree with "someone" -- it's not a good idea to just naively enable
> > pg_autoscaler on an existing cluster with lots of data and active
> > customers.
> >
> > If you're curious about this feature, it would be harmless to start
> > out by enabling it with pg_autoscale_mode = warn on each pool.
> > This way you can see what the autoscaler would do if it were set to
> > *on*. Then you can tweak the target_size_ratio or target_size_bytes
> > settings on each pool accordingly.
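> >
> > As a rough sketch of what I mean (Nautilus syntax; "mypool" is just a
> > placeholder, not one of your pools):
> >
> >     # enable the autoscaler module, then put a pool in warn mode
> >     ceph mgr module enable pg_autoscaler
> >     ceph osd pool set mypool pg_autoscale_mode warn
> >
> >     # optionally hint at the pool's expected share of the cluster
> >     ceph osd pool set mypool target_size_ratio 0.2
> >
> >     # see what the autoscaler would recommend, without it acting
> >     ceph osd pool autoscale-status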
> >
> > BTW, do you have a feeling that your 17000 PGs are currently not
> > correctly proportioned for your cluster?
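> >
> > (As a very rough sanity check, and assuming 3x replication since I
> > don't know your pools: 455 OSDs * ~100 PGs per OSD / 3 is roughly
> > 15000 PGs, so 17000 doesn't look obviously out of line.)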
> >
> > -- Dan
> >
> > On Tue, Jun 16, 2020 at 11:31 AM Boris Behrens <b...@kervyn.de> wrote:
> > >
> > > Hi,
> > >
> > > I would like to enable the pg_autoscaler on our nautilus cluster.
> > > Someone told me that I should be really really careful to NOT have
> > > customer impact.
> > >
> > > Maybe someone can share some experience on this?
> > >
> > > The cluster has 455 OSDs on 19 hosts with ~17000 PGs and ~1 PB of raw
> > > storage, of which ~600 TB raw is used.
>
>
>
> --
> This time, as an exception, the self-help group "UTF-8 problems" will
> meet in the big hall.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
