Or you could just disable the mgr module. Something like
ceph mgr module disable pg_autoscaler
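To re-enable it later (off the top of my head, untested):

ceph mgr module enable pg_autoscaler

and 'ceph mgr module ls' should show whether the module is currently enabled.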
Quoting Dave Hall <kdh...@binghamton.edu>:
All,
In looking at the options for the default PG autoscale mode, I notice that
there is a global setting and a per-pool setting. It seems that the options
at the pool level are off, warn, and on. The same, I assume, applies to the
global setting.
Is there a way to get rid of the per-pool setting and set the pool to honor
the global setting? I think I'm looking for 'off, warn, on, or global'.
It seems that once the per-pool option is set for all of one's pools, the
global value is irrelevant. This also implies that to temporarily suspend
autoscaling one would have to modify the setting for each pool and then
change it back afterward.
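In the meantime, I suppose a loop along these lines would do it (untested
sketch off the top of my head; 'warn' is just an example mode):

for pool in $(ceph osd pool ls); do
    ceph osd pool set "$pool" pg_autoscale_mode warn
done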
Thoughts?
Thanks
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
On Mon, Mar 29, 2021 at 1:44 PM Anthony D'Atri <anthony.da...@gmail.com>
wrote:
Yes, the PG autoscaler has a way of reducing the PG count way too far. There's
a claim that it's better in Pacific, but I tend to recommend disabling it
and calculating / setting pg_num manually.
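E.g. something along these lines per pool (pool name and PG count are
placeholders, not a recommendation for your cluster):

ceph osd pool set <pool> pg_autoscale_mode off
ceph osd pool set <pool> pg_num <desired-power-of-two>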
> On Mar 29, 2021, at 9:06 AM, Dave Hall <kdh...@binghamton.edu> wrote:
>
> Eugen,
>
> I didn't really think my cluster was eating itself, but I also didn't
> want to be in denial.
>
> Regarding the autoscaler, I really thought that it only went up - I
> didn't expect that it would decrease the number of PGs. Plus, I thought
> I had it turned off. I see now that it's off globally but enabled for
> this particular pool. Also, I see that the target PG count is lower
> than the current.
>
> I guess you learn something new every day.
>
> -Dave
>
> --
> Dave Hall
> Binghamton University
> kdh...@binghamton.edu
> 607-760-2328 (Cell)
> 607-777-4641 (Office)
>
>
> On Mon, Mar 29, 2021 at 7:52 AM Eugen Block <ebl...@nde.ag> wrote:
>
>> Hi,
>>
>> that sounds like the pg_autoscaler is doing its work. Check with:
>>
>> ceph osd pool autoscale-status
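>>
>> If I remember the output correctly, the PG_NUM and NEW PG_NUM columns
>> show the current PG count and the target the autoscaler is working
>> towards, and the AUTOSCALE column shows the per-pool mode. To check a
>> single pool you could also try something like:
>>
>> ceph osd pool get <pool> pg_autoscale_mode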
>>
>> I don't think ceph is eating itself or that you're losing data. ;-)
>>
>>
>> Quoting Dave Hall <kdh...@binghamton.edu>:
>>
>>> Hello,
>>>
>>> About 3 weeks ago I added a node and increased the number of OSDs in my
>>> cluster from 24 to 32, and then marked one old OSD down because it was
>>> frequently crashing.
>>>
>>> After adding the new OSDs the PG count jumped fairly dramatically, but
>>> ever since, amidst a continuous low level of rebalancing, the number of
>>> PGs has gradually decreased to about 25% below its maximum value.
>>> Although I don't have specific notes, my perception is that the current
>>> number of PGs is actually lower than it was before I added OSDs.
>>>
>>> So what's going on here? It is possible to imagine that my cluster is
>>> slowly eating itself, and that I'm about to lose 200TB of data. It's
>>> also possible to imagine that this is all due to the gradual
>>> optimization of the pools.
>>>
>>> Note that the primary pool is an EC 8+2 pool containing about 124TB.
>>>
>>> Thanks.
>>>
>>> -Dave
>>>
>>> --
>>> Dave Hall
>>> Binghamton University
>>> kdh...@binghamton.edu
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io