Thanks kindly, Maged/Bailey!  As always, it's a bit of a moving target.  New hardware comes out that reveals bottlenecks in our code.  Doubling up the OSDs sometimes improves things.  We figure out how to make the OSDs faster and the old assumptions stop being correct.  Even newer hardware comes out, etc., etc.

Mark


On 1/17/24 17:36, Bailey Allison wrote:
+1 to this, great article and great research. Something we've been keeping a 
very close eye on ourselves.

Overall we've mostly settled on the old keep-it-simple-stupid methodology with 
good results, especially as the benefits shrink the more recent your Ceph 
version, and have been rocking a single OSD per NVMe. But as always, everything 
is workload dependent and there is sometimes a need for doubling up 😊

Regards,

Bailey


-----Original Message-----
From: Maged Mokhtar <mmokh...@petasan.org>
Sent: January 17, 2024 4:59 PM
To: Mark Nelson <mark.nel...@clyso.com>; ceph-users@ceph.io
Subject: [ceph-users] Re: Performance impact of Heterogeneous environment

Very informative article you wrote, Mark.

IMHO if you find yourself with a very high per-OSD core count, it may be logical
to just pack/add more NVMes per host; you'd be getting the best price per
performance and capacity.

/Maged


On 17/01/2024 22:00, Mark Nelson wrote:
It's a little tricky.  In the upstream lab we don't strictly see an
IOPS or average latency advantage with heavy parallelism by running
multiple OSDs per NVMe drive until per-OSD core counts get very high.
There does seem to be a fairly consistent tail latency advantage even
at moderately low core counts however.  Results are here:

https://ceph.io/en/news/blog/2023/reef-osds-per-nvme/

Specifically for jitter, there is probably an advantage to using 2
cores per OSD unless you are very CPU starved, but how much that
actually helps in practice for a typical production workload is
questionable imho.  You do pay some overhead for running 2 OSDs per
NVMe as well.
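
If anyone wants to experiment with capping cores per OSD, one rough way to do
it is via systemd cgroup properties (this assumes systemd-managed ceph-osd@
units and cgroup v2; the OSD IDs and core numbers below are purely
illustrative and should follow your NUMA layout):

    # pin each OSD to two dedicated cores (example values only)
    systemctl set-property ceph-osd@0.service AllowedCPUs=0-1
    systemctl set-property ceph-osd@1.service AllowedCPUs=2-3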


Mark


On 1/17/24 12:24, Anthony D'Atri wrote:
Conventional wisdom is that with recent Ceph releases there is no
longer a clear advantage to this.

On Jan 17, 2024, at 11:56, Peter Sabaini <pe...@sabaini.at> wrote:

One thing that I've heard people do but haven't done personally with
fast NVMes (not familiar with the IronWolf so not sure if they
qualify) is partition them up so that they run more than one OSD
(say 2 to 4) on a single NVMe to better utilize the NVMe bandwidth.
See
https://ceph.com/community/bluestore-default-vs-tuned-performance-comparison/
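
These days you don't even need to partition by hand; ceph-volume's batch mode
can split the drives for you. A rough sketch (device paths are placeholders,
adjust to your hardware):

    # create 2 OSDs on each listed NVMe device
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1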
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
