I suggest continuing with manual PG sizing for now. On 16.2.6 we have
seen the autoscaler scale the device_health_metrics pool up to 16,000+
PGs on brand-new clusters, which we know is incorrect. Investigating that
is on our backlog, but a long way down. The autoscaler has bitten us
enough times in the past that even on new 16.2.6 deployments we disable
it immediately at cluster creation.

On PGs per OSD, ~100 does seem about right in our experience; just bear
in mind how many pools you have and what type they are (EC vs.
replicated). We generally end up around 100 per OSD, and our clusters are
fairly well balanced without excessive CPU or other overhead.
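
For reference, this is roughly what we run right after bootstrap (from
memory, so double-check the option names against the docs for your
release; device_health_metrics is the pool cephadm creates on Pacific):

    # Make "off" the default autoscale mode for pools created from now on
    ceph config set global osd_pool_default_pg_autoscale_mode off

    # Switch it off on pools that already exist, e.g. the health metrics pool
    ceph osd pool set device_health_metrics pg_autoscale_mode off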
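
As a rough back-of-the-envelope for your 96 OSDs (illustration only; the
pool name and numbers below are assumptions, not a sizing recommendation):
at ~100 PG replicas per OSD you have roughly 96 x 100 = 9600 PG replicas
to spread across all pools. Each PG in a size-3 replicated pool costs 3 of
those, and each PG in a 6+2 EC pool costs 8. If the EC data pool will hold
most of the data, something like pg_num 1024 on it (1024 x 8 = 8192
shards, ~85 per OSD) leaves headroom for the CephFS metadata pool and
device_health_metrics:

    # Example only -- substitute your actual data pool name
    ceph osd pool set cephfs_data pg_num 1024
    # Pacific should bring pgp_num along automatically, but setting it
    # explicitly doesn't hurt
    ceph osd pool set cephfs_data pgp_num 1024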

On Mon, Nov 1, 2021 at 11:31 AM Alex Petty <pettya...@gmail.com> wrote:

> Hello,
>
> I’m evaluating Ceph as a storage option, using Ceph 16.2.6 (Pacific
> stable), installed using cephadm. I was hoping to use PG autoscaling to
> reduce ops effort. I’m standing this up on a cluster with 96 OSDs across
> 9 hosts.
>
> The device_health_metrics pool was created by Ceph automatically once I
> started adding OSDs, and it was created with 2048 PGs. This seems high,
> and puts many PGs on each OSD. Documentation indicates that I should be
> targeting around 100 PGs per OSD; is that guideline out of date?
>
> Also, when I created a pool to test erasure coding with a 6+2 config for
> CephFS with PG autoscaling enabled, it was created with 1 PG to start, and
> didn’t scale up even as I loaded test data onto it, giving the entire
> CephFS the write performance of a single disk, since it was only writing
> to one disk and backfilling to seven others. Should I be manually setting
> default PGs at a sane level (512, 1024), or will autoscaling size this
> pool up? I have never seen any output from ceph osd pool autoscale-status
> when I am trying to see autoscaling information.
>
> I’d appreciate some guidance about configuring PGs on Pacific.
>
> Thanks,
>
> Alex
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
