Thank you very much. I've increased it to 2*#OSD rounded to the next power of 2.

Best

Ken


On 03.11.22 15:30, Anthony D'Atri wrote:
PG count isn’t just about storage size, it also affects performance, 
parallelism, and recovery.

You want pgp_num for the RBD metadata pool to be at the VERY least the number of 
OSDs it lives on, rounded up to the next power of 2.  I’d probably go for at 
least (2x#OSD) rounded up.  If you have too few, your metadata operations will 
contend with each other.
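The sizing rule above (take 2 x the OSD count and round up to the next power of 2) can be sketched as a small helper; the function name and example OSD count are illustrative, not part of any Ceph tooling:

```python
def suggested_pg_num(num_osds: int, factor: int = 2) -> int:
    """Round factor * num_osds up to the next power of two.

    Hypothetical helper illustrating the rule of thumb from this
    thread; it does not query a cluster.
    """
    target = factor * num_osds
    return 1 << (target - 1).bit_length()

# e.g. a 12-OSD pool: 2 * 12 = 24, rounded up to 32
print(suggested_pg_num(12))
```

The resulting value would then be applied with `ceph osd pool set <pool> pg_num <n>` (and matching pgp_num), or left to the PG autoscaler if that is enabled.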

On Nov 3, 2022, at 10:24, mailing-lists <mailing-li...@indane.de> wrote:

Dear Ceph'ers,

I am wondering how to choose the number of PGs for an RBD-EC-Pool.

To be able to use RBD images on an EC pool, you need a regular 
RBD replicated pool as well as an EC pool with EC overwrites enabled, but how 
many PGs would you need for the RBD replicated pool? It doesn't seem to eat a 
lot of storage, so if I'm not mistaken, it could actually be quite a low number 
of PGs, but is this recommended? Is there a best practice for this?


Best

Ken

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
