Hi Alex,
Switch the autoscaler to the 'scale-up' profile; it keeps each pool at the
minimum number of PGs and increases them as the data requires. The default
profile on Pacific is 'scale-down', which pre-provisions pools with a large
PG budget (that is why device_health_metrics started out at 2048 PGs).
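
If I recall correctly, on 16.2.x the profile is a global setting, roughly:

    ceph osd pool set autoscale-profile scale-up

and you can see what the autoscaler intends for each pool with:

    ceph osd pool autoscale-status

If the EC pool still sits at 1 PG after that, you can bump pg_num by hand
and let the autoscaler manage it from there (the pool name below is just a
placeholder for your CephFS data pool):

    ceph osd pool set cephfs_data_ec pg_num 512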

Regards,
Yury.

On Tue, Nov 2, 2021 at 3:31 AM Alex Petty <pettya...@gmail.com> wrote:

> Hello,
>
> I’m evaluating Ceph as a storage option, using Ceph 16.2.6 (Pacific
> stable) installed with cephadm. I was hoping to use PG autoscaling to
> reduce ops effort. I’m standing this up on a cluster with 96 OSDs across
> 9 hosts.
>
> The device_health_metrics pool was created automatically by Ceph once I
> started adding OSDs, and it was created with 2048 PGs. That seems high and
> puts many PGs on each OSD. The documentation indicates that I should be
> targeting around 100 PGs per OSD; is that guideline out of date?
>
> Also, when I created an erasure-coded pool with a 6+2 profile for CephFS,
> with PG autoscaling enabled, it was created with 1 PG to start and didn’t
> scale up even as I loaded test data onto it, giving the entire CephFS the
> write performance of a single disk, since it was only writing to one disk
> and backfilling to seven others. Should I be manually setting pg_num to a
> sane level (512, 1024), or will autoscaling size this pool up? I have
> never seen any output from ceph osd pool autoscale-status when trying to
> view autoscaling information.
>
> I’d appreciate some guidance about configuring PGs on Pacific.
>
> Thanks,
>
> Alex
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
