[ceph-users] Number of pgs
Hi all,

Pretty sure this isn't the first time you've seen a thread like this. Our cluster consists of 12 nodes / 153 OSDs, with 1.2 PiB used and 708 TiB of 1.9 PiB available. The data pool has 2048 PGs, exactly the number it had when the cluster was first set up. We have no issues with the cluster; everything runs as expected.
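For reference, the usual rule of thumb for sizing a pool is to target roughly 100 PGs per OSD across all replicas, then round to a power of two. A minimal sketch of that arithmetic (the 100-per-OSD target and 3x replication are assumptions here, not stated in the thread):

```python
import math

def suggested_pg_count(num_osds, replicas=3, target_pgs_per_osd=100):
    """Rough PG-count guideline: (OSDs * target per OSD) / replica count,
    rounded to the nearest power of two."""
    raw = num_osds * target_pgs_per_osd / replicas
    return 2 ** round(math.log2(raw))

# For the 153-OSD cluster above, assuming 3x replication:
print(suggested_pg_count(153))  # 4096
```

By this guideline, 2048 PGs on 153 OSDs works out to about 40 PGs per OSD at 3x replication, on the low side of the usual target, which is presumably why the question is being asked. On recent Ceph releases `ceph osd pool autoscale-status` gives a per-pool recommendation directly.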
[ceph-users] Re: Error EPERM: error setting 'osd_op_queue' to 'wpq': (1) Operation not permitted
Hi Josh,

Thanks a million, your proposed solution worked.

Best,
Nick
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
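For anyone hitting the same error from the subject line: `osd_op_queue` is not a runtime-changeable option, so injecting it into running daemons fails with EPERM. The thread does not spell out Josh's fix, but the usual approach is to set it in the cluster configuration and restart the OSDs, roughly:

```shell
# Persist the setting in the monitors' config database
# (osd_op_queue cannot be injected into running OSDs).
ceph config set osd osd_op_queue wpq

# The new queue implementation only takes effect after an OSD restart,
# e.g. with cephadm-managed daemons:
ceph orch restart osd
```

This is a sketch under those assumptions; restart OSDs gradually on a production cluster so that placement groups stay available throughout.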