4 OSDs per NVMe drive only really makes sense if you have extra CPU to spare and very fast drives.  It can reach higher absolute performance when you give the OSDs unlimited CPU, but in our testing it tends to be slower (and less efficient) in CPU-limited scenarios (YMMV).  We've got some semi-recent CPU-limited results in the 2021 Q3 Crimson slide deck here (see slides 28-31):

https://docs.google.com/presentation/d/1eydyAFKRea8n-VniQzXKW8qkKM9GLVMJt2uDjipJjQA/edit?usp=sharing
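
For context, you don't strictly need per-drive namespaces to try the multi-OSD layout; ceph-volume can split a single device into several OSDs.  A minimal sketch, where the device path and OSD count are just examples for your hardware:

  # Provision 4 OSDs on one NVMe device (example path and count; adjust
  # for your own hardware and deployment tooling)
  ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1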

I don't think I've seen anyone specifically test multiple OSDs per drive with SPDK.  Keep in mind that the write path is often bottlenecked in the kv_sync_thread and/or RocksDB; that's where going multi-OSD gives you more parallelism.  The read path can be a bit more of a wildcard, but it can sometimes benefit from the extra parallelism as well.  Still, it's only really worth it if you've got the CPU to back it up.
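
If you want to check whether a single OSD is hitting that wall on your hardware, a rough sketch (osd.0 is just an example id, and thread/counter names can vary by release):

  # Per-thread CPU usage for one OSD; the kv sync thread typically shows
  # up as "bstore_kv_sync" in the thread list
  top -H -p "$(pgrep -f 'ceph-osd.*--id 0')"

  # On the OSD host, dump that OSD's perf counters and look at the
  # kv/rocksdb sync latencies
  ceph daemon osd.0 perf dump | grep -i kv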


Mark


On 2/5/22 6:55 AM, Lazuardi Nasution wrote:
Hi,

I have read some benchmarks which recommend using 4 OSDs per NVMe
drive. Until now, I have been using 4 NVMe namespaces per drive to do that.
If I'm using SPDK, do I still need to follow the 4-OSDs-per-NVMe-drive
approach? Is there any benchmark related to SPDK and the number of OSDs
per NVMe drive?

Best regards.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

