If you only have one pool of significant size, then your PG ratio is around 40. 
IMHO that's too low.
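
Back-of-the-envelope with your numbers (2048 PGs, 3x replication, 153 OSDs):

    2048 PGs * 3 replicas / 153 OSDs ≈ 40 PG replicas per OSD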

If you're using HDDs I personally might set it to 8192; if using NVMe SSDs, 
arguably 16384 -- assuming that your OSD sizes are more or less close to each 
other.
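
The bump itself is a single pool setting -- sketch only, with "data" standing in 
for whatever your pool is actually named (on Nautilus and later, pgp_num follows 
pg_num automatically; on older releases set it to match):

    ceph osd pool set data pg_num 8192
    ceph osd pool set data pgp_num 8192   # only needed pre-Nautilus

Expect a fair amount of backfill while the PGs split and the cluster rebalances.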


`ceph osd df` will show, toward the right, how many PG replicas are on each OSD.
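
The PGS column there is the one to watch. To double-check what the pool is set 
to now (again with "data" as a placeholder pool name):

    ceph osd pool get data pg_num
    ceph osd pool ls detail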

> On Mar 5, 2024, at 14:50, Nikolaos Dandoulakis <nick....@ed.ac.uk> wrote:
> 
> Hi Anthony,
> 
> I should have said, it’s replicated (3)
> 
> Best,
> Nick
> 
> Sent from my phone, apologies for any typos!
> From: Anthony D'Atri <a...@dreamsnake.net>
> Sent: Tuesday, March 5, 2024 7:22:42 PM
> To: Nikolaos Dandoulakis <nick....@ed.ac.uk>
> Cc: ceph-users@ceph.io <ceph-users@ceph.io>
> Subject: Re: [ceph-users] Number of pgs
>  
> 
> Replicated or EC?
> 
> > On Mar 5, 2024, at 14:09, Nikolaos Dandoulakis <nick....@ed.ac.uk> wrote:
> >
> > Hi all,
> >
> > Pretty sure not the first time you see a thread like this.
> >
> > Our cluster consists of 12 nodes / 153 OSDs / 1.2 PiB used, 708 TiB / 1.9 PiB 
> > avail.
> >
> > The data pool has 2048 PGs, exactly the same number as when the cluster 
> > started. We have no issues with the cluster; everything runs as expected 
> > and very efficiently. We support about 1000 clients. The question is: should 
> > we increase the number of PGs? If you think so, what would be a sensible 
> > number to go to? 4096? More?
> >
> > I eagerly await your response.
> >
> > Best,
> > Nick
> >
> > P.S. Yes, autoscaler is off :)
> 

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
