Yeah, I see the same: 6 servers have NVMe drives, and today they were all
maxed out on the IOPS side, but I don't understand why. The user made HEAD
operations, sometimes around 60,000 per minute, and the cluster IOPS was
around 60-70k. How can this max out 6 NVMe drives, when each of them should
be able to serve more than that many IOPS? :(
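
As a sanity check on the arithmetic, here is a minimal sketch in Python (the
65k figure is just the midpoint of the reported 60-70k, and the even spread
across the drives is an assumption, not a measurement from this cluster):

```python
# Back-of-the-envelope check on the reported numbers; the amplification
# math below is an illustration, not a measurement from this cluster.

head_ops_per_min = 60_000                   # reported S3 HEAD rate
client_ops_per_sec = head_ops_per_min / 60  # ~1,000 op/s at the S3 level

cluster_iops = 65_000                       # midpoint of the reported 60-70k
amplification = cluster_iops / client_ops_per_sec
print(f"implied amplification: ~{amplification:.0f}x per HEAD")  # ~65x

# Assuming the load spreads evenly across the 6 NVMe index drives:
per_drive = cluster_iops / 6
print(f"~{per_drive:,.0f} IOPS per drive")  # ~10.8k, which NVMe should handle
```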

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---------------------------------------------------

On 2021. Jul 15., at 13:03, Anthony D'Atri <anthony.da...@gmail.com> wrote:

`ceph df`?  Back when I was using SATA SSDs, I saw limiting more on IOPS than
on space.

I’ve seen an estimate of 1% of the size of the buckets pool, but this is very
much a function of your object size distribution.  More but smaller drives
would decrease your failure domain and speed up recovery.
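
If it helps, here is a minimal sizing sketch in Python built on that 1%
figure (the data pool size, per-entry size, and average object sizes below
are made-up illustration values, not recommendations):

```python
# Hypothetical inputs -- substitute your own values from `ceph df`.
data_pool_tib = 100.0   # RGW buckets data pool size, assumed
rule_of_thumb = 0.01    # the ~1% estimate mentioned above

print(f"1% rule: ~{data_pool_tib * rule_of_thumb:.1f} TiB for the index pool")

# Why object size distribution matters: the index holds roughly one entry
# per object, so smaller objects inflate the index relative to data size.
ENTRY_BYTES = 200       # rough per-object index entry size, assumed
for avg_obj_kib in (64, 512, 4096):
    objects = data_pool_tib * 1024**3 / avg_obj_kib  # TiB -> KiB
    index_tib = objects * ENTRY_BYTES / 1024**4
    print(f"avg {avg_obj_kib:>5} KiB objects -> ~{index_tib:.3f} TiB of entries")
```

Comparing either estimate against the actual index pool usage shown by
`ceph df` will tell you how far off the rule of thumb is for your object mix.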

On Jul 14, 2021, at 9:52 PM, Szabo, Istvan (Agoda) <istvan.sz...@agoda.com> 
wrote:

Hi,

How can I know what size of NVMe drive is needed for my index pool? At the
moment I'm using 6x1.92TB NVMe (overkill), but I have no idea how it is used.

Thanks

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io