Hi Mark,

Yes, we are aware of the kv_sync_thread scaling limits on non-SPDK
deployments. We have plenty of AMD cores and Micron NVMe drives, much like
the hardware in the Red Hat benchmark done with Micron. I don't know whether
the same behavior shows up with SPDK. What I need to know is how many cores
per OSD are needed to saturate a Micron NVMe namespace, whether the drive is
configured as a single namespace or split into several. Maybe Red Hat has
benchmarking data on the same AMD and Micron NVMe hardware as before, but
with SPDK.
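
For reference, here is the rough back-of-the-envelope Python sketch I am
using while waiting for real numbers; every figure in it (drive IOPS,
per-OSD ceiling, cores pinned per OSD) is an assumption I would like to
replace with benchmark data, not a measurement:

# Rough sizing sketch (assumed numbers, not measurements): estimate how many
# OSDs, and therefore cores, it takes to saturate one NVMe namespace, given a
# per-OSD write ceiling imposed by the kv_sync_thread/rocksdb bottleneck.

DRIVE_4K_WRITE_IOPS = 400_000   # assumed: one Micron NVMe drive, 4K random write
NAMESPACES_PER_DRIVE = 4        # assumed: drive split into 4 namespaces
IOPS_PER_OSD = 80_000           # assumed: per-OSD ceiling (kv_sync_thread bound)
CORES_PER_OSD = 4               # assumed: cores pinned per OSD

iops_per_namespace = DRIVE_4K_WRITE_IOPS / NAMESPACES_PER_DRIVE
osds_needed = -(-iops_per_namespace // IOPS_PER_OSD)   # ceiling division
cores_needed = osds_needed * CORES_PER_OSD

print(f"~{osds_needed:.0f} OSD(s) and ~{cores_needed:.0f} core(s) "
      f"to saturate one namespace at {iops_per_namespace:,.0f} IOPS")

With those assumed numbers it comes out to roughly 2 OSDs and 8 cores per
namespace, which is exactly the kind of estimate I want to validate.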

Best regards.

>
> Date: Sat, 5 Feb 2022 07:17:14 -0600
> From: Mark Nelson <mnel...@redhat.com>
> Subject: [ceph-users] Re: NVME Namspaces vs SPDK
> To: ceph-users@ceph.io
>
> 4 OSDs per NVMe drive only really makes sense if you have extra CPU to
> spare and very fast drives.  It can have higher absolute performance
> when you give OSDs unlimited CPUs, but it tends to be slower (and less
> efficient) in CPU-limited scenarios in our testing (ymmv).  We've got
> some semi-recent CPU-limited results from the 2021 Q3 crimson slide deck
> here (see slides 28-31):
>
>
> https://docs.google.com/presentation/d/1eydyAFKRea8n-VniQzXKW8qkKM9GLVMJt2uDjipJjQA/edit?usp=sharing
>
> I don't think I've seen anyone specifically test multiple OSDs per drive
> with SPDK.  Figure that the write path is often bottlenecked in the
> kv_sync_thread and/or rocksdb.  That's where going multi-OSD gives you
> more parallelism.  The read path can be a bit more of a wildcard but
> can sometimes also benefit from extra parallelism.  Still, it's only
> really worth it if you've got the CPU to back it up.
>
>
> Mark
>
>
> On 2/5/22 6:55 AM, Lazuardi Nasution wrote:
> > Hi,
> >
> > I have read some benchmarks that recommend using 4 OSDs per NVMe
> > drive. Until now, I have been using 4 NVMe namespaces per drive to do
> > that. If I'm using SPDK, do I still need to follow the
> > 4-OSDs-per-NVMe-drive approach? Is there any benchmark covering SPDK
> > and the number of OSDs per NVMe drive?
> > Best regards.
>