You need to be running at least 16.2.11 on the OSDs so that you have the fix for https://tracker.ceph.com/issues/55631.
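
For example (commands are illustrative, adjust to your setup), you can confirm what the daemons are actually running with:

    ceph versions

and, once the OSDs report 16.2.11 or later, bump the PG count as Janne suggested below, e.g.:

    ceph osd pool set <poolname> pg_num 256

(256 is just an example power of two; pick whatever value lands you at roughly 100-200 PGs per OSD.)

Josh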
On Mon, Jan 29, 2024 at 8:07 AM Michel Niyoyita <mico...@gmail.com> wrote:
>
> I am running ceph pacific, version 16, ubuntu 20 OS, deployed using
> ceph-ansible.
>
> Michel
>
> On Mon, Jan 29, 2024 at 4:47 PM Josh Baergen <jbaer...@digitalocean.com>
> wrote:
>>
>> Make sure you're on a fairly recent version of Ceph before doing this,
>> though.
>>
>> Josh
>>
>> On Mon, Jan 29, 2024 at 5:05 AM Janne Johansson <icepic...@gmail.com> wrote:
>> >
>> > On Mon, Jan 29, 2024 at 12:58 Michel Niyoyita <mico...@gmail.com> wrote:
>> > >
>> > > Thank you Frank,
>> > >
>> > > All disks are HDDs. I would like to know if I can increase the number
>> > > of PGs live in production without a negative impact to the cluster.
>> > > If yes, which commands should I use?
>> >
>> > Yes. "ceph osd pool set <poolname> pg_num <number larger than before>"
>> > where the number usually should be a power of two that leads to a
>> > number of PGs per OSD between 100-200.
>> >
>> > --
>> > May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io